Differential Privacy, Linguistic Fairness, and Training Data Influence: Impossibility and Possibility Theorems for Multilingual Language Models

Research output: Contribution to journal › Conference article › Research › peer-review

Standard

Differential Privacy, Linguistic Fairness, and Training Data Influence: Impossibility and Possibility Theorems for Multilingual Language Models. / Rust, Phillip; Søgaard, Anders.

In: Proceedings of Machine Learning Research, Vol. 202, 2023, pp. 29354-29387.

Harvard

Rust, P & Søgaard, A 2023, 'Differential Privacy, Linguistic Fairness, and Training Data Influence: Impossibility and Possibility Theorems for Multilingual Language Models', Proceedings of Machine Learning Research, vol. 202, pp. 29354-29387. <https://proceedings.mlr.press/v202/>

APA

Rust, P., & Søgaard, A. (2023). Differential Privacy, Linguistic Fairness, and Training Data Influence: Impossibility and Possibility Theorems for Multilingual Language Models. Proceedings of Machine Learning Research, 202, 29354-29387. https://proceedings.mlr.press/v202/

Vancouver

Rust P, Søgaard A. Differential Privacy, Linguistic Fairness, and Training Data Influence: Impossibility and Possibility Theorems for Multilingual Language Models. Proceedings of Machine Learning Research. 2023;202:29354-29387.

Author

Rust, Phillip ; Søgaard, Anders. / Differential Privacy, Linguistic Fairness, and Training Data Influence: Impossibility and Possibility Theorems for Multilingual Language Models. In: Proceedings of Machine Learning Research. 2023 ; Vol. 202. pp. 29354-29387.

BibTeX

@article{e2fbc1743b9d4eca9a764595f6a4dfdd,
title = "Differential Privacy, Linguistic Fairness, and Training Data Influence: Impossibility and Possibility Theorems for Multilingual Language Models",
abstract = "Language models such as mBERT, XLM-R, and BLOOM aim to achieve multilingual generalization or compression to facilitate transfer to a large number of (potentially unseen) languages. However, these models should ideally also be private, linguistically fair, and transparent, by relating their predictions to training data. Can these requirements be simultaneously satisfied? We show that multilingual compression and linguistic fairness are compatible with differential privacy, but that differential privacy is at odds with training data influence sparsity, an objective for transparency. We further present a series of experiments on two common NLP tasks and evaluate multilingual compression and training data influence sparsity under different privacy guarantees, exploring these trade-offs in more detail. Our results suggest that we need to develop ways to jointly optimize for these objectives in order to find practical trade-offs.",
author = "Phillip Rust and Anders S{\o}gaard",
note = "Publisher Copyright: {\textcopyright} 2023 Proceedings of Machine Learning Research. All rights reserved.; 40th International Conference on Machine Learning, ICML 2023 ; Conference date: 23-07-2023 Through 29-07-2023",
year = "2023",
language = "English",
volume = "202",
pages = "29354--29387",
journal = "Proceedings of Machine Learning Research",
issn = "2640-3498",
}

RIS

TY - GEN

T1 - Differential Privacy, Linguistic Fairness, and Training Data Influence: Impossibility and Possibility Theorems for Multilingual Language Models

T2 - 40th International Conference on Machine Learning, ICML 2023

AU - Rust, Phillip

AU - Søgaard, Anders

N1 - Publisher Copyright: © 2023 Proceedings of Machine Learning Research. All rights reserved.

PY - 2023

Y1 - 2023

N2 - Language models such as mBERT, XLM-R, and BLOOM aim to achieve multilingual generalization or compression to facilitate transfer to a large number of (potentially unseen) languages. However, these models should ideally also be private, linguistically fair, and transparent, by relating their predictions to training data. Can these requirements be simultaneously satisfied? We show that multilingual compression and linguistic fairness are compatible with differential privacy, but that differential privacy is at odds with training data influence sparsity, an objective for transparency. We further present a series of experiments on two common NLP tasks and evaluate multilingual compression and training data influence sparsity under different privacy guarantees, exploring these trade-offs in more detail. Our results suggest that we need to develop ways to jointly optimize for these objectives in order to find practical trade-offs.

AB - Language models such as mBERT, XLM-R, and BLOOM aim to achieve multilingual generalization or compression to facilitate transfer to a large number of (potentially unseen) languages. However, these models should ideally also be private, linguistically fair, and transparent, by relating their predictions to training data. Can these requirements be simultaneously satisfied? We show that multilingual compression and linguistic fairness are compatible with differential privacy, but that differential privacy is at odds with training data influence sparsity, an objective for transparency. We further present a series of experiments on two common NLP tasks and evaluate multilingual compression and training data influence sparsity under different privacy guarantees, exploring these trade-offs in more detail. Our results suggest that we need to develop ways to jointly optimize for these objectives in order to find practical trade-offs.

M3 - Conference article

AN - SCOPUS:85174407325

VL - 202

SP - 29354

EP - 29387

JO - Proceedings of Machine Learning Research

JF - Proceedings of Machine Learning Research

SN - 2640-3498

Y2 - 23 July 2023 through 29 July 2023

ER -
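
For readers of the abstract above, the privacy notion at stake is standard (ε, δ)-differential privacy (Dwork et al., 2006): a randomized training mechanism M is (ε, δ)-differentially private if, for all datasets D and D′ differing in a single record and all measurable output sets S,

\[
  \Pr[M(D) \in S] \;\le\; e^{\varepsilon} \, \Pr[M(D') \in S] + \delta .
\]

In practice, guarantees of this form are most often obtained with DP-SGD (Abadi et al., 2016), which clips each per-example gradient to a fixed L2 norm and adds Gaussian noise before the parameter update. The sketch below is a minimal illustration in plain PyTorch, not the authors' experimental setup; the function name dp_sgd_step and the hyperparameter defaults are assumptions for illustration.

import torch

def dp_sgd_step(model, loss_fn, xs, ys, lr=0.1, clip_norm=1.0, noise_multiplier=1.0):
    # One DP-SGD update over a microbatch, processing examples one at a time
    # so each example's gradient can be clipped individually.
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(xs, ys):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        # Clip the per-example gradient to L2 norm at most clip_norm.
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (total_norm + 1e-6)).clamp(max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * scale)
    with torch.no_grad():
        for p, s in zip(params, summed):
            # Add Gaussian noise calibrated to the clipping norm, then
            # average over the microbatch and take a gradient step.
            noise = torch.normal(0.0, noise_multiplier * clip_norm, size=s.shape)
            p.add_(-(lr / len(xs)) * (s + noise))

Translating a target (ε, δ) into a concrete noise_multiplier requires a privacy accountant (e.g., the moments accountant of Abadi et al.), which is beyond this sketch.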
