Being Right for Whose Right Reasons?

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

Being Right for Whose Right Reasons? / Thorn Jakobsen, Terne Sasha; Cabello, Laura; Søgaard, Anders.

Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics: Long Papers. Vol. 1. Association for Computational Linguistics (ACL), 2023. p. 1033-1054.

Harvard

Thorn Jakobsen, TS, Cabello, L & Søgaard, A 2023, Being Right for Whose Right Reasons? in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics: Long Papers. vol. 1, Association for Computational Linguistics (ACL), pp. 1033-1054, 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023, Toronto, Canada, 09/07/2023. https://doi.org/10.18653/v1/2023.acl-long.59

APA

Thorn Jakobsen, T. S., Cabello, L., & Søgaard, A. (2023). Being Right for Whose Right Reasons? In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics: Long Papers (Vol. 1, pp. 1033-1054). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.59

Vancouver

Thorn Jakobsen TS, Cabello L, Søgaard A. Being Right for Whose Right Reasons? In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics: Long Papers. Vol. 1. Association for Computational Linguistics (ACL). 2023. p. 1033-1054. https://doi.org/10.18653/v1/2023.acl-long.59

Author

Thorn Jakobsen, Terne Sasha ; Cabello, Laura ; Søgaard, Anders. / Being Right for Whose Right Reasons? Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics: Long Papers. Vol. 1. Association for Computational Linguistics (ACL), 2023. pp. 1033-1054

Bibtex

@inproceedings{b11fa2ef96db4b2d82be1dc33a99d103,
title = "Being Right for Whose Right Reasons?",
abstract = "Explainability methods are used to benchmark the extent to which model predictions align with human rationales i.e., are {\textquoteleft}right for the right reasons{\textquoteright}. Previous work has failed to acknowledge, however, that what counts as a rationale is sometimes subjective. This paper presents what we think is a first of its kind, a collection of human rationale annotations augmented with the annotators demographic information. We cover three datasets spanning sentiment analysis and common-sense reasoning, and six demographic groups (balanced across age and ethnicity). Such data enables us to ask both what demographics our predictions align with and whose reasoning patterns our models{\textquoteright} rationales align with. We find systematic inter-group annotator disagreement and show how 16 Transformer-based models align better with rationales provided by certain demographic groups: We find that models are biased towards aligning best with older and/or white annotators. We zoom in on the effects of model size and model distillation, finding –contrary to our expectations– negative correlations between model size and rationale agreement as well as no evidence that either model size or model distillation improves fairness.",
author = "{Thorn Jakobsen}, {Terne Sasha} and Laura Cabello and Anders S{\o}gaard",
year = "2023",
doi = "10.18653/v1/2023.acl-long.59",
language = "English",
volume = "1",
pages = "1033--1054",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics",
publisher = "Association for Computational Linguistics (ACL)",
address = "United States",
note = "61st Annual Meeting of the Association for Computational Linguistics, ACL 2023 ; Conference date: 09-07-2023 Through 14-07-2023",

}

RIS

TY - GEN

T1 - Being Right for Whose Right Reasons?

AU - Thorn Jakobsen, Terne Sasha

AU - Cabello, Laura

AU - Søgaard, Anders

PY - 2023

Y1 - 2023

N2 - Explainability methods are used to benchmark the extent to which model predictions align with human rationales, i.e., are ‘right for the right reasons’. Previous work has failed to acknowledge, however, that what counts as a rationale is sometimes subjective. This paper presents what we think is a first of its kind, a collection of human rationale annotations augmented with the annotators’ demographic information. We cover three datasets spanning sentiment analysis and common-sense reasoning, and six demographic groups (balanced across age and ethnicity). Such data enables us to ask both what demographics our predictions align with and whose reasoning patterns our models’ rationales align with. We find systematic inter-group annotator disagreement and show how 16 Transformer-based models align better with rationales provided by certain demographic groups: We find that models are biased towards aligning best with older and/or white annotators. We zoom in on the effects of model size and model distillation, finding – contrary to our expectations – negative correlations between model size and rationale agreement as well as no evidence that either model size or model distillation improves fairness.

AB - Explainability methods are used to benchmark the extent to which model predictions align with human rationales, i.e., are ‘right for the right reasons’. Previous work has failed to acknowledge, however, that what counts as a rationale is sometimes subjective. This paper presents what we think is a first of its kind, a collection of human rationale annotations augmented with the annotators’ demographic information. We cover three datasets spanning sentiment analysis and common-sense reasoning, and six demographic groups (balanced across age and ethnicity). Such data enables us to ask both what demographics our predictions align with and whose reasoning patterns our models’ rationales align with. We find systematic inter-group annotator disagreement and show how 16 Transformer-based models align better with rationales provided by certain demographic groups: We find that models are biased towards aligning best with older and/or white annotators. We zoom in on the effects of model size and model distillation, finding – contrary to our expectations – negative correlations between model size and rationale agreement as well as no evidence that either model size or model distillation improves fairness.

U2 - 10.18653/v1/2023.acl-long.59

DO - 10.18653/v1/2023.acl-long.59

M3 - Article in proceedings

VL - 1

SP - 1033

EP - 1054

BT - Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics

PB - Association for Computational Linguistics (ACL)

T2 - 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023

Y2 - 9 July 2023 through 14 July 2023

ER -

ID: 381636287