On the Independence of Association Bias and Empirical Fairness in Language Models

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

On the Independence of Association Bias and Empirical Fairness in Language Models. / Cabello, Laura; Jørgensen, Anna Katrine; Søgaard, Anders.

Proceedings of the 6th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023. Association for Computing Machinery, Inc., 2023. p. 370-378.

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Harvard

Cabello, L, Jørgensen, AK & Søgaard, A 2023, On the Independence of Association Bias and Empirical Fairness in Language Models. in Proceedings of the 6th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023. Association for Computing Machinery, Inc., pp. 370-378, 6th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, United States, 12/06/2023. https://doi.org/10.1145/3593013.3594004

APA

Cabello, L., Jørgensen, A. K., & Søgaard, A. (2023). On the Independence of Association Bias and Empirical Fairness in Language Models. In Proceedings of the 6th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023 (pp. 370-378). Association for Computing Machinery, Inc. https://doi.org/10.1145/3593013.3594004

Vancouver

Cabello L, Jørgensen AK, Søgaard A. On the Independence of Association Bias and Empirical Fairness in Language Models. In Proceedings of the 6th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023. Association for Computing Machinery, Inc. 2023. p. 370-378. https://doi.org/10.1145/3593013.3594004

Author

Cabello, Laura; Jørgensen, Anna Katrine; Søgaard, Anders. / On the Independence of Association Bias and Empirical Fairness in Language Models. Proceedings of the 6th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023. Association for Computing Machinery, Inc., 2023. pp. 370-378

Bibtex

@inproceedings{51de5c588a974714abeb5688ebb2e496,
title = "On the Independence of Association Bias and Empirical Fairness in Language Models",
abstract = "The societal impact of pre-trained language models has prompted researchers to probe them for strong associations between protected attributes and value-loaded terms, from slur to prestigious job titles. Such work is said to probe models for bias or fairness - or such probes 'into representational biases' are said to be 'motivated by fairness' - suggesting an intimate connection between bias and fairness. We provide conceptual clarity by distinguishing between association biases [11] and empirical fairness [56] and show the two can be independent. Our main contribution, however, is showing why this should not come as a surprise. To this end, we first provide a thought experiment, showing how association bias and empirical fairness can be completely orthogonal. Next, we provide empirical evidence that there is no correlation between bias metrics and fairness metrics across the most widely used language models. Finally, we survey the sociological and psychological literature and show how this literature provides ample support for expecting these metrics to be uncorrelated.",
keywords = "Fairness, Natural Language Processing, Representational Bias",
author = "Laura Cabello and J{\o}rgensen, {Anna Katrine} and Anders S{\o}gaard",
note = "Publisher Copyright: {\textcopyright} 2023 ACM.; 6th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023 ; Conference date: 12-06-2023 Through 15-06-2023",
year = "2023",
doi = "10.1145/3593013.3594004",
language = "English",
pages = "370--378",
booktitle = "Proceedings of the 6th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023",
publisher = "Association for Computing Machinery, Inc.",

}

RIS

TY - GEN

T1 - On the Independence of Association Bias and Empirical Fairness in Language Models

AU - Cabello, Laura

AU - Jørgensen, Anna Katrine

AU - Søgaard, Anders

N1 - Publisher Copyright: © 2023 ACM.

PY - 2023

Y1 - 2023

N2 - The societal impact of pre-trained language models has prompted researchers to probe them for strong associations between protected attributes and value-loaded terms, from slurs to prestigious job titles. Such work is said to probe models for bias or fairness - or such probes 'into representational biases' are said to be 'motivated by fairness' - suggesting an intimate connection between bias and fairness. We provide conceptual clarity by distinguishing between association biases [11] and empirical fairness [56] and show the two can be independent. Our main contribution, however, is showing why this should not come as a surprise. To this end, we first provide a thought experiment, showing how association bias and empirical fairness can be completely orthogonal. Next, we provide empirical evidence that there is no correlation between bias metrics and fairness metrics across the most widely used language models. Finally, we survey the sociological and psychological literature and show how this literature provides ample support for expecting these metrics to be uncorrelated.

AB - The societal impact of pre-trained language models has prompted researchers to probe them for strong associations between protected attributes and value-loaded terms, from slurs to prestigious job titles. Such work is said to probe models for bias or fairness - or such probes 'into representational biases' are said to be 'motivated by fairness' - suggesting an intimate connection between bias and fairness. We provide conceptual clarity by distinguishing between association biases [11] and empirical fairness [56] and show the two can be independent. Our main contribution, however, is showing why this should not come as a surprise. To this end, we first provide a thought experiment, showing how association bias and empirical fairness can be completely orthogonal. Next, we provide empirical evidence that there is no correlation between bias metrics and fairness metrics across the most widely used language models. Finally, we survey the sociological and psychological literature and show how this literature provides ample support for expecting these metrics to be uncorrelated.

KW - Fairness

KW - Natural Language Processing

KW - Representational Bias

U2 - 10.1145/3593013.3594004

DO - 10.1145/3593013.3594004

M3 - Article in proceedings

AN - SCOPUS:85163629760

SP - 370

EP - 378

BT - Proceedings of the 6th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023

PB - Association for Computing Machinery, Inc.

T2 - 6th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023

Y2 - 12 June 2023 through 15 June 2023

ER -