QLEVR: A Diagnostic Dataset for Quantificational Language and Elementary Visual Reasoning

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Documents

  • QLEVR — Final published version, 16.4 MB, PDF document

Synthetic datasets have successfully been used to probe visual question-answering models for their reasoning abilities. CLEVR (Johnson et al., 2017), for example, tests a range of visual reasoning abilities. The questions in CLEVR focus on comparisons of shapes, colors, and sizes, numerical reasoning, and existence claims. This paper introduces a minimally biased, diagnostic visual question-answering dataset, QLEVR, that goes beyond existential and numerical quantification and focuses on more complex quantifiers and their combinations, e.g., asking whether there are more than two red balls that are smaller than at least three blue balls in an image. We describe how the dataset was created and present a first evaluation of state-of-the-art visual question-answering models, showing that QLEVR presents a formidable challenge to our current models. Code and dataset are available at https://github.com/zechenli03/QLEVR.
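The abstract's example question combines two quantifiers: an outer "more than two" over red balls and an inner "at least three" over blue balls. A minimal sketch (not from the paper; the scene representation and attribute names here are illustrative assumptions) of how such a nested quantified question can be evaluated over a symbolic scene:

```python
# Toy symbolic scene: each object is a dict of attributes.
scene = [
    {"color": "red",  "shape": "ball", "size": 1},
    {"color": "red",  "shape": "ball", "size": 2},
    {"color": "red",  "shape": "ball", "size": 2},
    {"color": "blue", "shape": "ball", "size": 4},
    {"color": "blue", "shape": "ball", "size": 5},
    {"color": "blue", "shape": "ball", "size": 6},
]

def smaller_than_at_least(obj, others, k):
    """Inner quantifier: obj is smaller than at least k of the given objects."""
    return sum(obj["size"] < o["size"] for o in others) >= k

red_balls  = [o for o in scene if o["color"] == "red"  and o["shape"] == "ball"]
blue_balls = [o for o in scene if o["color"] == "blue" and o["shape"] == "ball"]

# Outer quantifier: "more than two" red balls satisfy the inner condition.
qualifying = [o for o in red_balls if smaller_than_at_least(o, blue_balls, 3)]
answer = len(qualifying) > 2

# All three red balls (sizes 1, 2, 2) are smaller than all three blue balls,
# so the outer "more than two" threshold is met.
print(answer)  # → True
```

The point of the sketch is that the truth value depends on counting over two sets at once, which is exactly the kind of compositional quantification the dataset targets.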

Original language: English
Title of host publication: Findings of the Association for Computational Linguistics: NAACL 2022
Publisher: Association for Computational Linguistics (ACL)
Publication date: 2022
Pages: 980-996
ISBN (Electronic): 9781955917766
Publication status: Published - 2022
Event: Findings of the Association for Computational Linguistics: NAACL 2022 - Seattle, United States
Duration: 10 Jul 2022 - 15 Jul 2022

Conference

Conference: 2022 Findings of the Association for Computational Linguistics: NAACL 2022
Country: United States
City: Seattle
Period: 10/07/2022 - 15/07/2022

Bibliographical note

Publisher Copyright:
© Findings of the Association for Computational Linguistics: NAACL 2022 - Findings.
