Minimax and Neyman–Pearson Meta-Learning for Outlier Languages

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review


Model-agnostic meta-learning (MAML) has recently been put forth as a strategy to learn resource-poor languages in a sample-efficient fashion. Nevertheless, the properties of these languages are often not well represented by those available during training. Hence, we argue that the i.i.d. assumption ingrained in MAML makes it ill-suited for cross-lingual NLP. In fact, under a decision-theoretic framework, MAML can be interpreted as minimising the expected risk across training languages (with a uniform prior), which is known as Bayes criterion. To increase its robustness to outlier languages, we create two variants of MAML based on alternative criteria: Minimax MAML reduces the maximum risk across languages, while Neyman–Pearson MAML constrains the risk in each language to a maximum threshold. Both criteria constitute fully differentiable two-player games. In light of this, we propose a new adaptive optimiser solving for a local approximation to their Nash equilibrium. We evaluate both model variants on two popular NLP tasks, part-of-speech tagging and question answering. We report gains for their average and minimum performance across low-resource languages in zero- and few-shot settings, compared to joint multi-source transfer and vanilla MAML. The code for our experiments is available at https://github.com/rahular/robust-maml.
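The abstract contrasts three decision criteria; a minimal sketch of how they differ is given below. The notation is illustrative and not taken from the paper: $\mathcal{R}_\ell(\theta)$ is assumed to denote the risk of meta-parameters $\theta$ on training language $\ell \in \{1, \dots, K\}$, $w$ a weight vector on the probability simplex $\Delta^{K-1}$, $\lambda_\ell \ge 0$ Lagrange multipliers, and $b$ the per-language risk threshold; the relaxed two-player forms are assumptions, not the paper's exact formulation.

% Illustrative sketch (compile with amsmath); notation assumed, not the paper's.
\begin{align*}
  \text{Bayes criterion (vanilla MAML):} \quad
    & \min_{\theta} \; \frac{1}{K} \sum_{\ell=1}^{K} \mathcal{R}_\ell(\theta) \\
  \text{Minimax MAML:} \quad
    & \min_{\theta} \; \max_{w \in \Delta^{K-1}} \; \sum_{\ell=1}^{K} w_\ell \, \mathcal{R}_\ell(\theta) \\
  \text{Neyman--Pearson MAML:} \quad
    & \min_{\theta} \; \max_{\lambda \ge 0} \; \frac{1}{K} \sum_{\ell=1}^{K} \mathcal{R}_\ell(\theta)
      + \sum_{\ell=1}^{K} \lambda_\ell \bigl( \mathcal{R}_\ell(\theta) - b \bigr)
\end{align*}

Under this reading, the last two objectives are the fully differentiable two-player games the abstract refers to, with $w$ or $\lambda$ controlled by an adversarial player and $\theta$ by the learner.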
Original language: English
Title of host publication: Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
Publisher: Association for Computational Linguistics
Publication date: 2021
Pages: 1245-1260
DOIs
Publication status: Published - 2021
Event: Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 - Virtual, Online
Duration: 1 Aug 2021 - 6 Aug 2021

Conference

Conference: Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
City: Virtual, Online
Period: 01/08/2021 - 06/08/2021

ID: 300446234