Some Languages Seem Easier to Parse Because Their Treebanks Leak

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Cross-language differences in (universal) dependency parsing performance are mostly attributed to treebank size, average sentence length, average dependency length, morphological complexity, and domain differences. We point to a factor not previously discussed: if we abstract away from words and dependency labels, how many graphs in the test data were seen in the training data? We compute graph isomorphisms and show that, treebank size aside, the overlap between training and test graphs explains more of the observed variation than standard explanations such as those above.
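Below is a minimal sketch of one way to measure this kind of train/test overlap: read head indices from CoNLL-U files and compare unlabeled, rooted tree shapes via an AHU-style canonical encoding. This is an assumption standing in for the paper's graph-isomorphism computation, not its actual implementation; the file names and helper functions are hypothetical.

```python
# Sketch: estimate train/test overlap of unlabeled dependency tree shapes.
# Assumptions (not from the paper): CoNLL-U input, trees treated as rooted,
# unordered, unlabeled; isomorphism decided via an AHU canonical encoding.

def read_head_lists(conllu_path):
    """Yield one list of head indices per sentence (0 = root) from a CoNLL-U file."""
    heads = []
    with open(conllu_path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                if heads:
                    yield heads
                    heads = []
                continue
            if line.startswith("#"):
                continue
            cols = line.split("\t")
            if not cols[0].isdigit():  # skip multiword tokens and empty nodes
                continue
            heads.append(int(cols[6]))  # column 7 of CoNLL-U is HEAD
        if heads:
            yield heads

def canonical_form(heads):
    """AHU-style canonical string of the rooted, unordered, unlabeled tree."""
    children = {i: [] for i in range(len(heads) + 1)}
    for dep, head in enumerate(heads, start=1):
        children[head].append(dep)

    def encode(node):
        return "(" + "".join(sorted(encode(c) for c in children[node])) + ")"

    return encode(0)

def overlap(train_path, test_path):
    """Fraction of test sentences whose tree shape also occurs in the training data."""
    train_shapes = {canonical_form(h) for h in read_head_lists(train_path)}
    test_shapes = [canonical_form(h) for h in read_head_lists(test_path)]
    seen = sum(shape in train_shapes for shape in test_shapes)
    return seen / len(test_shapes)

if __name__ == "__main__":
    # Hypothetical file names; substitute any UD treebank's train/test splits.
    print(overlap("en_ewt-ud-train.conllu", "en_ewt-ud-test.conllu"))
```

Two isomorphic trees share the same canonical string, so membership of a test shape in the set of training shapes is a constant-time lookup rather than a pairwise isomorphism test.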
Original language: English
Title of host publication: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Publisher: Association for Computational Linguistics
Publication date: 2020
Pages: 2765–2770
DOIs
Publication status: Published - 2020
Event: The 2020 Conference on Empirical Methods in Natural Language Processing - online
Duration: 16 Nov 2020 – 20 Nov 2020
http://2020.emnlp.org

Conference

Conference: The 2020 Conference on Empirical Methods in Natural Language Processing
Location: online
Period: 16/11/2020 – 20/11/2020
Internet address: http://2020.emnlp.org

ID: 258390141