Differential Privacy, Linguistic Fairness, and Training Data Influence: Impossibility and Possibility Theorems for Multilingual Language Models
Research output: Contribution to journal › Conference article › Research › peer-review
Standard
Differential Privacy, Linguistic Fairness, and Training Data Influence: Impossibility and Possibility Theorems for Multilingual Language Models. / Rust, Phillip; Søgaard, Anders.
In: Proceedings of Machine Learning Research, Vol. 202, 2023, p. 29354-29387.
Bibtex
@article{Rust2023DifferentialPrivacy,
  title = {Differential Privacy, Linguistic Fairness, and Training Data Influence: Impossibility and Possibility Theorems for Multilingual Language Models},
  author = {Rust, Phillip and S{\o}gaard, Anders},
  journal = {Proceedings of Machine Learning Research},
  volume = {202},
  pages = {29354--29387},
  year = {2023},
  issn = {2640-3498}
}
RIS
TY - GEN
T1 - Differential Privacy, Linguistic Fairness, and Training Data Influence: Impossibility and Possibility Theorems for Multilingual Language Models
T2 - 40th International Conference on Machine Learning, ICML 2023
AU - Rust, Phillip
AU - Søgaard, Anders
N1 - Publisher Copyright: © 2023 Proceedings of Machine Learning Research. All rights reserved.
PY - 2023
Y1 - 2023
N2 - Language models such as mBERT, XLM-R, and BLOOM aim to achieve multilingual generalization or compression to facilitate transfer to a large number of (potentially unseen) languages. However, these models should ideally also be private, linguistically fair, and transparent, by relating their predictions to training data. Can these requirements be simultaneously satisfied? We show that multilingual compression and linguistic fairness are compatible with differential privacy, but that differential privacy is at odds with training data influence sparsity, an objective for transparency. We further present a series of experiments on two common NLP tasks and evaluate multilingual compression and training data influence sparsity under different privacy guarantees, exploring these trade-offs in more detail. Our results suggest that we need to develop ways to jointly optimize for these objectives in order to find practical trade-offs.
AB - Language models such as mBERT, XLM-R, and BLOOM aim to achieve multilingual generalization or compression to facilitate transfer to a large number of (potentially unseen) languages. However, these models should ideally also be private, linguistically fair, and transparent, by relating their predictions to training data. Can these requirements be simultaneously satisfied? We show that multilingual compression and linguistic fairness are compatible with differential privacy, but that differential privacy is at odds with training data influence sparsity, an objective for transparency. We further present a series of experiments on two common NLP tasks and evaluate multilingual compression and training data influence sparsity under different privacy guarantees, exploring these trade-offs in more detail. Our results suggest that we need to develop ways to jointly optimize for these objectives in order to find practical trade-offs.
M3 - Conference article
AN - SCOPUS:85174407325
VL - 202
SP - 29354
EP - 29387
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
SN - 2640-3498
Y2 - 23 July 2023 through 29 July 2023
ER -