Factual Consistency of Multilingual Pretrained Language Models

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

Factual Consistency of Multilingual Pretrained Language Models. / Fierro, Constanza; Søgaard, Anders.

ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, Findings of ACL 2022. ed. / Smaranda Muresan; Preslav Nakov; Aline Villavicencio. Association for Computational Linguistics (ACL), 2022. p. 3046-3052.

Harvard

Fierro, C & Søgaard, A 2022, Factual Consistency of Multilingual Pretrained Language Models. in S Muresan, P Nakov & A Villavicencio (eds), ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, Findings of ACL 2022. Association for Computational Linguistics (ACL), pp. 3046-3052, 60th Annual Meeting of the Association for Computational Linguistics, ACL 2022, Dublin, Ireland, 22/05/2022.

APA

Fierro, C., & Søgaard, A. (2022). Factual Consistency of Multilingual Pretrained Language Models. In S. Muresan, P. Nakov, & A. Villavicencio (Eds.), ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, Findings of ACL 2022 (pp. 3046-3052). Association for Computational Linguistics (ACL).

Vancouver

Fierro C, Søgaard A. Factual Consistency of Multilingual Pretrained Language Models. In Muresan S, Nakov P, Villavicencio A, editors, ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, Findings of ACL 2022. Association for Computational Linguistics (ACL). 2022. p. 3046-3052

Author

Fierro, Constanza ; Søgaard, Anders. / Factual Consistency of Multilingual Pretrained Language Models. ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, Findings of ACL 2022. editor / Smaranda Muresan ; Preslav Nakov ; Aline Villavicencio. Association for Computational Linguistics (ACL), 2022. pp. 3046-3052

Bibtex

@inproceedings{3c25ebf64a9c4f5da7665a6533b8f768,
title = "Factual Consistency of Multilingual Pretrained Language Models",
abstract = "Pretrained language models can be queried for factual knowledge, with potential applications in knowledge base acquisition and tasks that require inference. However, for that, we need to know how reliable this knowledge is, and recent work has shown that monolingual English language models lack consistency when predicting factual knowledge, that is, they fill-in-the-blank differently for paraphrases describing the same fact. In this paper, we extend the analysis of consistency to a multilingual setting. We introduce a resource, MPARAREL, and investigate (i) whether multilingual language models such as mBERT and XLM-R are more consistent than their monolingual counterparts; and (ii) if such models are equally consistent across languages. We find that mBERT is as inconsistent as English BERT in English paraphrases, but that both mBERT and XLM-R exhibit a high degree of inconsistency in English and even more so for all the other 45 languages.",
author = "Constanza Fierro and Anders S{\o}gaard",
note = "Publisher Copyright: {\textcopyright} 2022 Association for Computational Linguistics.; 60th Annual Meeting of the Association for Computational Linguistics, ACL 2022 ; Conference date: 22-05-2022 Through 27-05-2022",
year = "2022",
language = "English",
pages = "3046--3052",
editor = "Smaranda Muresan and Preslav Nakov and Aline Villavicencio",
booktitle = "ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, Findings of ACL 2022",
publisher = "Association for Computational Linguistics (ACL)",
address = "United States",
}
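For readers who want to reuse the record above programmatically, here is a minimal sketch that extracts the quoted `key = "value"` fields from a single BibTeX entry using only the standard library. It assumes double-quoted values separated by commas, as in this record; a full BibTeX parser (for example the third-party `bibtexparser` package) would additionally handle brace-delimited values and nested braces.

```python
import re

# Matches `name = "value"` pairs; the value may contain backslash escapes
# such as {\o} and may span multiple lines (re.DOTALL).
FIELD_RE = re.compile(r'(\w+)\s*=\s*"((?:[^"\\]|\\.)*)"', re.DOTALL)

def parse_bibtex_fields(entry: str) -> dict:
    """Return a dict of field name -> raw string value for one entry."""
    return {name: value.strip() for name, value in FIELD_RE.findall(entry)}

entry = '''@inproceedings{3c25ebf64a9c4f5da7665a6533b8f768,
title = "Factual Consistency of Multilingual Pretrained Language Models",
year = "2022",
pages = "3046--3052",
}'''

fields = parse_bibtex_fields(entry)
print(fields["pages"])  # 3046--3052
```

This is a sketch for simple records like the one shown, not a general BibTeX implementation.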

RIS

TY - GEN

T1 - Factual Consistency of Multilingual Pretrained Language Models

AU - Fierro, Constanza

AU - Søgaard, Anders

N1 - Publisher Copyright: © 2022 Association for Computational Linguistics.

PY - 2022

Y1 - 2022

N2 - Pretrained language models can be queried for factual knowledge, with potential applications in knowledge base acquisition and tasks that require inference. However, for that, we need to know how reliable this knowledge is, and recent work has shown that monolingual English language models lack consistency when predicting factual knowledge, that is, they fill-in-the-blank differently for paraphrases describing the same fact. In this paper, we extend the analysis of consistency to a multilingual setting. We introduce a resource, MPARAREL, and investigate (i) whether multilingual language models such as mBERT and XLM-R are more consistent than their monolingual counterparts; and (ii) if such models are equally consistent across languages. We find that mBERT is as inconsistent as English BERT in English paraphrases, but that both mBERT and XLM-R exhibit a high degree of inconsistency in English and even more so for all the other 45 languages.

AB - Pretrained language models can be queried for factual knowledge, with potential applications in knowledge base acquisition and tasks that require inference. However, for that, we need to know how reliable this knowledge is, and recent work has shown that monolingual English language models lack consistency when predicting factual knowledge, that is, they fill-in-the-blank differently for paraphrases describing the same fact. In this paper, we extend the analysis of consistency to a multilingual setting. We introduce a resource, MPARAREL, and investigate (i) whether multilingual language models such as mBERT and XLM-R are more consistent than their monolingual counterparts; and (ii) if such models are equally consistent across languages. We find that mBERT is as inconsistent as English BERT in English paraphrases, but that both mBERT and XLM-R exhibit a high degree of inconsistency in English and even more so for all the other 45 languages.

UR - http://www.scopus.com/inward/record.url?scp=85149112780&partnerID=8YFLogxK

M3 - Article in proceedings

AN - SCOPUS:85149112780

SP - 3046

EP - 3052

BT - ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, Findings of ACL 2022

A2 - Muresan, Smaranda

A2 - Nakov, Preslav

A2 - Villavicencio, Aline

PB - Association for Computational Linguistics (ACL)

T2 - 60th Annual Meeting of the Association for Computational Linguistics, ACL 2022

Y2 - 22 May 2022 through 27 May 2022

ER -
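The RIS record above can likewise be read back with a few lines of standard-library Python. The sketch below assumes the standard RIS line shape of a two-character tag followed by spaces, a hyphen, and the value (the rendering above shows single spaces, likely collapsed by the web page); repeated tags such as `AU` are collected into lists, and parsing stops at the `ER` end-of-record tag.

```python
import re

# One RIS line: two-character tag, whitespace, "-", optional space, value.
LINE_RE = re.compile(r"^([A-Z][A-Z0-9])\s+-\s?(.*)$")

def parse_ris(text: str) -> dict:
    """Parse a single RIS record into a dict of tag -> list of values."""
    record = {}
    for line in text.splitlines():
        match = LINE_RE.match(line)
        if not match:
            continue  # skip blank or malformed lines
        tag, value = match.group(1), match.group(2).strip()
        if tag == "ER":  # end of record
            break
        record.setdefault(tag, []).append(value)
    return record

ris = """TY  - GEN
T1  - Factual Consistency of Multilingual Pretrained Language Models
AU  - Fierro, Constanza
AU  - Søgaard, Anders
SP  - 3046
EP  - 3052
ER  - 
"""

rec = parse_ris(ris)
print(rec["AU"])  # ['Fierro, Constanza', 'Søgaard, Anders']
```

Collecting every tag into a list keeps repeated tags (`AU`, `A2`, `N1`) lossless; single-valued tags are simply one-element lists.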

ID: 341485866