Attention can reflect syntactic structure (if you let it)

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

Attention can reflect syntactic structure (if you let it). / Ravishankar, Vinit; Kulmizev, Artur; Abdou, Mostafa; Søgaard, Anders; Nivre, Joakim.

EACL 2021 - 16th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference. Association for Computational Linguistics, 2021. p. 3031-3045.

Harvard

Ravishankar, V, Kulmizev, A, Abdou, M, Søgaard, A & Nivre, J 2021, Attention can reflect syntactic structure (if you let it). in EACL 2021 - 16th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference. Association for Computational Linguistics, pp. 3031-3045, 16th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2021, Virtual, Online, 19/04/2021.

APA

Ravishankar, V., Kulmizev, A., Abdou, M., Søgaard, A., & Nivre, J. (2021). Attention can reflect syntactic structure (if you let it). In EACL 2021 - 16th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference (pp. 3031-3045). Association for Computational Linguistics.

Vancouver

Ravishankar V, Kulmizev A, Abdou M, Søgaard A, Nivre J. Attention can reflect syntactic structure (if you let it). In EACL 2021 - 16th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference. Association for Computational Linguistics. 2021. p. 3031-3045

Author

Ravishankar, Vinit ; Kulmizev, Artur ; Abdou, Mostafa ; Søgaard, Anders ; Nivre, Joakim. / Attention can reflect syntactic structure (if you let it). EACL 2021 - 16th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference. Association for Computational Linguistics, 2021. pp. 3031-3045

Bibtex

@inproceedings{25c82efbcb82462cafb0b19e95cecd65,
title = "Attention can reflect syntactic structure (if you let it)",
abstract = "Since the popularization of the Transformer as a general-purpose feature encoder for NLP, many studies have attempted to decode linguistic structure from its novel multi-head attention mechanism. However, much of such work focused almost exclusively on English - a language with rigid word order and a lack of inflectional morphology. In this study, we present decoding experiments for multilingual BERT across 18 languages in order to test the generalizability of the claim that dependency syntax is reflected in attention patterns. We show that full trees can be decoded above baseline accuracy from single attention heads, and that individual relations are often tracked by the same heads across languages. Furthermore, in an attempt to address recent debates about the status of attention as an explanatory mechanism, we experiment with fine-tuning mBERT on a supervised parsing objective while freezing different series of parameters. Interestingly, in steering the objective to learn explicit linguistic structure, we find much of the same structure represented in the resulting attention patterns, with interesting differences with respect to which parameters are frozen.",
author = "Vinit Ravishankar and Artur Kulmizev and Mostafa Abdou and Anders S{\o}gaard and Joakim Nivre",
year = "2021",
language = "English",
pages = "3031--3045",
booktitle = "EACL 2021 - 16th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference",
publisher = "Association for Computational Linguistics",
note = "16th Conference of the European Chapter of the Associationfor Computational Linguistics, EACL 2021 ; Conference date: 19-04-2021 Through 23-04-2021",

}

RIS

TY - GEN

T1 - Attention can reflect syntactic structure (if you let it)

AU - Ravishankar, Vinit

AU - Kulmizev, Artur

AU - Abdou, Mostafa

AU - Søgaard, Anders

AU - Nivre, Joakim

PY - 2021

Y1 - 2021

N2 - Since the popularization of the Transformer as a general-purpose feature encoder for NLP, many studies have attempted to decode linguistic structure from its novel multi-head attention mechanism. However, much of such work focused almost exclusively on English - a language with rigid word order and a lack of inflectional morphology. In this study, we present decoding experiments for multilingual BERT across 18 languages in order to test the generalizability of the claim that dependency syntax is reflected in attention patterns. We show that full trees can be decoded above baseline accuracy from single attention heads, and that individual relations are often tracked by the same heads across languages. Furthermore, in an attempt to address recent debates about the status of attention as an explanatory mechanism, we experiment with fine-tuning mBERT on a supervised parsing objective while freezing different series of parameters. Interestingly, in steering the objective to learn explicit linguistic structure, we find much of the same structure represented in the resulting attention patterns, with interesting differences with respect to which parameters are frozen.

AB - Since the popularization of the Transformer as a general-purpose feature encoder for NLP, many studies have attempted to decode linguistic structure from its novel multi-head attention mechanism. However, much of such work focused almost exclusively on English - a language with rigid word order and a lack of inflectional morphology. In this study, we present decoding experiments for multilingual BERT across 18 languages in order to test the generalizability of the claim that dependency syntax is reflected in attention patterns. We show that full trees can be decoded above baseline accuracy from single attention heads, and that individual relations are often tracked by the same heads across languages. Furthermore, in an attempt to address recent debates about the status of attention as an explanatory mechanism, we experiment with fine-tuning mBERT on a supervised parsing objective while freezing different series of parameters. Interestingly, in steering the objective to learn explicit linguistic structure, we find much of the same structure represented in the resulting attention patterns, with interesting differences with respect to which parameters are frozen.

UR - http://www.scopus.com/inward/record.url?scp=85103335926&partnerID=8YFLogxK

M3 - Article in proceedings

AN - SCOPUS:85103335926

SP - 3031

EP - 3045

BT - EACL 2021 - 16th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference

PB - Association for Computational Linguistics

T2 - 16th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2021

Y2 - 19 April 2021 through 23 April 2021

ER -

ID: 285250503
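
The abstract above describes decoding full dependency trees from single mBERT attention heads. Below is a minimal illustrative sketch of such a pipeline, not the authors' exact procedure: the checkpoint name, the layer/head choice, the subword-to-word pooling, and the Prim-style undirected maximum spanning tree decoder are all assumptions made for illustration.

# Illustrative sketch (assumptions, not the paper's exact setup): decode an
# undirected dependency tree from a single mBERT attention head.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL = "bert-base-multilingual-cased"  # assumed standard mBERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL, output_attentions=True)
model.eval()

words = "The cat sat on the mat".split()
enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")

with torch.no_grad():
    out = model(**enc)

# out.attentions: one (batch, heads, seq, seq) tensor per layer
layer, head = 7, 4  # arbitrary example head; the paper probes every head
att = out.attentions[layer][0, head]  # (seq, seq) over subword tokens

# Pool subword attention to word level by averaging cells that share word ids
word_ids = enc.word_ids(0)
n = len(words)
scores = torch.zeros(n, n)
counts = torch.zeros(n, n)
for i, wi in enumerate(word_ids):
    for j, wj in enumerate(word_ids):
        if wi is None or wj is None:  # skip [CLS]/[SEP]
            continue
        scores[wi, wj] += att[i, j]
        counts[wi, wj] += 1
scores = scores / counts.clamp(min=1)

# Symmetrize and extract a maximum spanning tree (Prim's algorithm), so the
# head's scores yield a valid undirected tree over the words.
sym = (scores + scores.t()) / 2
in_tree, edges = {0}, []
while len(in_tree) < n:
    best = None
    for u in in_tree:
        for v in range(n):
            if v in in_tree:
                continue
            w = sym[u, v].item()
            if best is None or w > best[0]:
                best = (w, u, v)
    _, u, v = best
    edges.append((u, v))
    in_tree.add(v)

print(edges)  # undirected word-level edges proposed by this single head

Decoding an explicit spanning tree, rather than greedily attaching each word to its highest-attention partner, guarantees a well-formed structure that can be compared against gold trees with unlabeled, undirected attachment metrics.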