Zero-Shot Dependency Parsing with Worst-Case Aware Automated Curriculum Learning

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

Zero-Shot Dependency Parsing with Worst-Case Aware Automated Curriculum Learning. / de Lhoneux, Miryam; Zhang, Sheng; Søgaard, Anders.

ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Short Papers). ed. / Smaranda Muresan; Preslav Nakov; Aline Villavicencio. Association for Computational Linguistics (ACL), 2022. p. 578-587.

Harvard

de Lhoneux, M, Zhang, S & Søgaard, A 2022, Zero-Shot Dependency Parsing with Worst-Case Aware Automated Curriculum Learning. in S Muresan, P Nakov & A Villavicencio (eds), ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Short Papers). Association for Computational Linguistics (ACL), pp. 578-587, 60th Annual Meeting of the Association for Computational Linguistics, ACL 2022, Dublin, Ireland, 22/05/2022. https://doi.org/10.18653/v1/2022.acl-short.64

APA

de Lhoneux, M., Zhang, S., & Søgaard, A. (2022). Zero-Shot Dependency Parsing with Worst-Case Aware Automated Curriculum Learning. In S. Muresan, P. Nakov, & A. Villavicencio (Eds.), ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Short Papers) (pp. 578-587). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-short.64

Vancouver

de Lhoneux M, Zhang S, Søgaard A. Zero-Shot Dependency Parsing with Worst-Case Aware Automated Curriculum Learning. In Muresan S, Nakov P, Villavicencio A, editors, ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Short Papers). Association for Computational Linguistics (ACL). 2022. p. 578-587 https://doi.org/10.18653/v1/2022.acl-short.64

Author

de Lhoneux, Miryam ; Zhang, Sheng ; Søgaard, Anders. / Zero-Shot Dependency Parsing with Worst-Case Aware Automated Curriculum Learning. ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Short Papers). editor / Smaranda Muresan ; Preslav Nakov ; Aline Villavicencio. Association for Computational Linguistics (ACL), 2022. pp. 578-587

Bibtex

@inproceedings{2036e0870ba8402a892db0ecf27c4c2c,
title = "Zero-Shot Dependency Parsing with Worst-Case Aware Automated Curriculum Learning",
abstract = "Large multilingual pretrained language models such as mBERT and XLM-RoBERTa have been found to be surprisingly effective for cross-lingual transfer of syntactic parsing models (Wu and Dredze, 2019), but only between related languages. However, source and training languages are rarely related when parsing truly low-resource languages. To close this gap, we adopt a method from multi-task learning, which relies on automated curriculum learning, to dynamically optimize for parsing performance on outlier languages. We show that this approach is significantly better than uniform and size-proportional sampling in the zero-shot setting.",
author = "{de Lhoneux}, Miryam and Sheng Zhang and Anders S{\o}gaard",
note = "Publisher Copyright: {\textcopyright} 2022 Association for Computational Linguistics.; 60th Annual Meeting of the Association for Computational Linguistics, ACL 2022 ; Conference date: 22-05-2022 Through 27-05-2022",
year = "2022",
doi = "10.18653/v1/2022.acl-short.64",
language = "English",
pages = "578--587",
editor = "Smaranda Muresan and Preslav Nakov and Aline Villavicencio",
booktitle = "ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Short Papers)",
publisher = "Association for Computational Linguistics (ACL)",
address = "United States",
}
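The abstract describes sampling training languages in a worst-case aware way, rather than uniformly or in proportion to treebank size. As a rough illustration only (not the authors' implementation; the function name, the softmax-over-losses weighting, and the `temperature` parameter are all assumptions), such a sampler can be sketched by drawing the next training language with probability increasing in its current dev loss:

```python
import math
import random

def worst_case_aware_sample(dev_losses, temperature=1.0):
    """Pick the next training language, favoring languages the model
    currently parses worst. Hypothetical sketch: weights are a softmax
    over per-language dev losses, so higher loss -> higher probability."""
    langs = list(dev_losses)
    weights = [math.exp(dev_losses[lang] / temperature) for lang in langs]
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices(langs, weights=probs, k=1)[0]
```

A low `temperature` concentrates sampling on the single worst-parsed (outlier) language, while a high one approaches the uniform baseline the paper compares against.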

RIS

TY - GEN

T1 - Zero-Shot Dependency Parsing with Worst-Case Aware Automated Curriculum Learning

AU - de Lhoneux, Miryam

AU - Zhang, Sheng

AU - Søgaard, Anders

N1 - Publisher Copyright: © 2022 Association for Computational Linguistics.

PY - 2022

Y1 - 2022

N2 - Large multilingual pretrained language models such as mBERT and XLM-RoBERTa have been found to be surprisingly effective for cross-lingual transfer of syntactic parsing models (Wu and Dredze, 2019), but only between related languages. However, source and training languages are rarely related when parsing truly low-resource languages. To close this gap, we adopt a method from multi-task learning, which relies on automated curriculum learning, to dynamically optimize for parsing performance on outlier languages. We show that this approach is significantly better than uniform and size-proportional sampling in the zero-shot setting.

AB - Large multilingual pretrained language models such as mBERT and XLM-RoBERTa have been found to be surprisingly effective for cross-lingual transfer of syntactic parsing models (Wu and Dredze, 2019), but only between related languages. However, source and training languages are rarely related when parsing truly low-resource languages. To close this gap, we adopt a method from multi-task learning, which relies on automated curriculum learning, to dynamically optimize for parsing performance on outlier languages. We show that this approach is significantly better than uniform and size-proportional sampling in the zero-shot setting.

UR - http://www.scopus.com/inward/record.url?scp=85127763115&partnerID=8YFLogxK

U2 - 10.18653/v1/2022.acl-short.64

DO - 10.18653/v1/2022.acl-short.64

M3 - Article in proceedings

AN - SCOPUS:85127763115

SP - 578

EP - 587

BT - ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Short Papers)

A2 - Muresan, Smaranda

A2 - Nakov, Preslav

A2 - Villavicencio, Aline

PB - Association for Computational Linguistics (ACL)

T2 - 60th Annual Meeting of the Association for Computational Linguistics, ACL 2022

Y2 - 22 May 2022 through 27 May 2022

ER -