What does the Failure to Reason with “Respectively” in Zero/Few-Shot Settings Tell Us about Language Models?

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Documents

  • Fulltext

    Final published version, 311 KB, PDF document

Humans can effortlessly understand the coordinate structure of sentences such as "Niels Bohr and Kurt Cobain were born in Copenhagen and Seattle, respectively". In the context of natural language inference (NLI), we examine how language models (LMs) reason with respective readings (Gawron and Kehler, 2004) from two perspectives: syntactic-semantic and commonsense-world knowledge. We propose a controlled synthetic dataset, WikiResNLI, and a naturally occurring dataset, NatResNLI, to encompass various explicit and implicit realizations of "respectively". We show that fine-tuned NLI models struggle to understand such readings without explicit supervision. While few-shot learning is easy in the presence of explicit cues, longer training is required when the reading is evoked implicitly, leaving models to rely on commonsense inferences. Furthermore, our fine-grained analysis indicates that models fail to generalize across different constructions. In conclusion, we demonstrate that LMs still lag behind humans in generalizing to the long tail of linguistic constructions.
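To make the phenomenon concrete, the sketch below probes an NLI model with the abstract's example premise and two hypotheses that a respective reading distinguishes. This is illustrative only: it uses the publicly available roberta-large-mnli checkpoint and the Hugging Face transformers library, not the authors' fine-tuned models or the WikiResNLI/NatResNLI data.

```python
# Minimal sketch: probing an off-the-shelf NLI model with a "respectively"
# premise. Assumes `pip install torch transformers`; the checkpoint name and
# label order are those of the public roberta-large-mnli model, not anything
# specific to the paper.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")
model.eval()

premise = "Niels Bohr and Kurt Cobain were born in Copenhagen and Seattle, respectively."
hypotheses = [
    "Niels Bohr was born in Copenhagen.",  # entailed under the respective reading
    "Niels Bohr was born in Seattle.",     # contradicted under the respective reading
]

for hypothesis in hypotheses:
    inputs = tokenizer(premise, hypothesis, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = logits.softmax(dim=-1).squeeze()
    # id2label for this checkpoint: 0=CONTRADICTION, 1=NEUTRAL, 2=ENTAILMENT
    label = model.config.id2label[int(probs.argmax())]
    print(f"{hypothesis} -> {label} (p={probs.max():.2f})")
```

Under the respective reading, the first hypothesis should be labeled ENTAILMENT and the second CONTRADICTION; the paper's finding is that models often fail this contrast without explicit supervision, especially when the reading is evoked implicitly.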

Original language: English
Title of host publication: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics: Long Papers
Volume: 1
Publisher: Association for Computational Linguistics (ACL)
Publication date: 2023
Pages: 8786–8800
ISBN (Electronic): 9781959429722
Publication status: Published - 2023
Event: 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023 - Toronto, Canada
Duration: 9 Jul 2023 – 14 Jul 2023

Conference

Conference: 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023
Country: Canada
City: Toronto
Period: 09/07/2023 – 14/07/2023
Sponsors: Bloomberg Engineering, et al., Google Research, Liveperson, Meta, Microsoft

Bibliographical note

Publisher Copyright:
© 2023 Association for Computational Linguistics.
