What does the Failure to Reason with “Respectively” in Zero/Few-Shot Settings Tell Us about Language Models?
Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review
Standard
What does the Failure to Reason with “Respectively” in Zero/Few-Shot Settings Tell Us about Language Models? / Cui, Ruixiang; Lee, Seolhwa; Hershcovich, Daniel; Søgaard, Anders.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics: Long Papers. Vol. 1. Association for Computational Linguistics (ACL), 2023. p. 8786-8800.
Bibtex
@inproceedings{cui2023respectively,
    title = {What does the Failure to Reason with “Respectively” in Zero/Few-Shot Settings Tell Us about Language Models?},
    author = {Cui, Ruixiang and Lee, Seolhwa and Hershcovich, Daniel and Søgaard, Anders},
    booktitle = {Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics: Long Papers},
    volume = {1},
    pages = {8786--8800},
    publisher = {Association for Computational Linguistics (ACL)},
    year = {2023},
    doi = {10.18653/v1/2023.acl-long.489},
    note = {Publisher Copyright: © 2023 Association for Computational Linguistics.},
}
RIS
TY - GEN
T1 - What does the Failure to Reason with “Respectively” in Zero/Few-Shot Settings Tell Us about Language Models?
AU - Cui, Ruixiang
AU - Lee, Seolhwa
AU - Hershcovich, Daniel
AU - Søgaard, Anders
N1 - Publisher Copyright: © 2023 Association for Computational Linguistics.
PY - 2023
Y1 - 2023
N2 - Humans can effortlessly understand the coordinate structure of sentences such as “Niels Bohr and Kurt Cobain were born in Copenhagen and Seattle, respectively”. In the context of natural language inference (NLI), we examine how language models (LMs) reason with respective readings (Gawron and Kehler, 2004) from two perspectives: syntactic-semantic and commonsense-world knowledge. We propose a controlled synthetic dataset WikiResNLI and a naturally occurring dataset NatResNLI to encompass various explicit and implicit realizations of “respectively”. We show that fine-tuned NLI models struggle with understanding such readings without explicit supervision. While few-shot learning is easy in the presence of explicit cues, longer training is required when the reading is evoked implicitly, leaving models to rely on common sense inferences. Furthermore, our fine-grained analysis indicates models fail to generalize across different constructions. To conclude, we demonstrate that LMs still lag behind humans in generalizing to the long tail of linguistic constructions.
AB - Humans can effortlessly understand the coordinate structure of sentences such as “Niels Bohr and Kurt Cobain were born in Copenhagen and Seattle, respectively”. In the context of natural language inference (NLI), we examine how language models (LMs) reason with respective readings (Gawron and Kehler, 2004) from two perspectives: syntactic-semantic and commonsense-world knowledge. We propose a controlled synthetic dataset WikiResNLI and a naturally occurring dataset NatResNLI to encompass various explicit and implicit realizations of “respectively”. We show that fine-tuned NLI models struggle with understanding such readings without explicit supervision. While few-shot learning is easy in the presence of explicit cues, longer training is required when the reading is evoked implicitly, leaving models to rely on common sense inferences. Furthermore, our fine-grained analysis indicates models fail to generalize across different constructions. To conclude, we demonstrate that LMs still lag behind humans in generalizing to the long tail of linguistic constructions.
U2 - 10.18653/v1/2023.acl-long.489
DO - 10.18653/v1/2023.acl-long.489
M3 - Article in proceedings
AN - SCOPUS:85174409678
VL - 1
SP - 8786
EP - 8800
BT - Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics
PB - Association for Computational Linguistics (ACL)
T2 - 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023
Y2 - 9 July 2023 through 14 July 2023
ER -
ID: 371030992
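
For readers curious what the zero-shot NLI probe described in the abstract looks like in practice, below is a minimal sketch using an off-the-shelf MNLI checkpoint from Hugging Face. The checkpoint choice (roberta-large-mnli) is an illustrative assumption, not the paper's exact setup, and the premise/hypothesis pair is adapted from the abstract's own example rather than drawn from WikiResNLI or NatResNLI.

# Minimal zero-shot probe of a "respective" reading, assuming a generic
# off-the-shelf MNLI checkpoint (illustrative; not the paper's exact models
# or its WikiResNLI/NatResNLI data).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "roberta-large-mnli"  # assumed generic NLI checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

premise = ("Niels Bohr and Kurt Cobain were born in "
           "Copenhagen and Seattle, respectively.")
hypotheses = [
    "Kurt Cobain was born in Seattle.",     # respective reading: entailment
    "Kurt Cobain was born in Copenhagen.",  # respective reading: contradiction
]

for hypothesis in hypotheses:
    inputs = tokenizer(premise, hypothesis, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # id2label for this checkpoint: CONTRADICTION / NEUTRAL / ENTAILMENT
    label = model.config.id2label[int(logits.argmax(dim=-1))]
    print(f"{hypothesis} -> {label}")

A model that ignores the pairwise alignment imposed by "respectively" may assign the same label to both hypotheses, which is the kind of failure the paper quantifies.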