The Impact of Differential Privacy on Group Disparity Mitigation

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

The Impact of Differential Privacy on Group Disparity Mitigation. / Petren Bach Hansen, Victor; Tejaswi Neerkaje, Atula; Sawhney, Ramit; Flek, Lucie; Sogaard, Anders.

Proceedings of the Fourth Workshop on Privacy in Natural Language Processing. Association for Computational Linguistics, 2022.

Harvard

Petren Bach Hansen, V, Tejaswi Neerkaje, A, Sawhney, R, Flek, L & Sogaard, A 2022, The Impact of Differential Privacy on Group Disparity Mitigation. in Proceedings of the Fourth Workshop on Privacy in Natural Language Processing. Association for Computational Linguistics, 4th Workshop on Privacy in Natural Language Processing, Seattle, United States, 01/07/2022. https://doi.org/10.18653/v1/2022.privatenlp-1.2

APA

Petren Bach Hansen, V., Tejaswi Neerkaje, A., Sawhney, R., Flek, L., & Sogaard, A. (2022). The Impact of Differential Privacy on Group Disparity Mitigation. In Proceedings of the Fourth Workshop on Privacy in Natural Language Processing. Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.privatenlp-1.2

Vancouver

Petren Bach Hansen V, Tejaswi Neerkaje A, Sawhney R, Flek L, Sogaard A. The Impact of Differential Privacy on Group Disparity Mitigation. In: Proceedings of the Fourth Workshop on Privacy in Natural Language Processing. Association for Computational Linguistics; 2022. https://doi.org/10.18653/v1/2022.privatenlp-1.2

Author

Petren Bach Hansen, Victor ; Tejaswi Neerkaje, Atula ; Sawhney, Ramit ; Flek, Lucie ; Sogaard, Anders. / The Impact of Differential Privacy on Group Disparity Mitigation. Proceedings of the Fourth Workshop on Privacy in Natural Language Processing. Association for Computational Linguistics, 2022.

Bibtex

@inproceedings{4b547f7aa44a4784b0f7a3cb76fc3aad,
title = "The Impact of Differential Privacy on Group Disparity Mitigation",
abstract = "The performance cost of differential privacy has, for some applications, been shown to be higher for minority groups; fairness, conversely, has been shown to disproportionally compromise the privacy of members of such groups. Most work in this area has been restricted to computer vision and risk assessment. In this paper, we evaluate the impact of differential privacy on fairness across four tasks, focusing on how attempts to mitigate privacy violations and between-group performance differences interact: Does privacy inhibit attempts to ensure fairness? To this end, we train (epsilon, delta)-differentially private models with empirical risk minimization and group distributionally robust training objectives. Consistent with previous findings, we find that differential privacy increases between-group performance differences in the baseline setting, but, more interestingly, differential privacy reduces between-group performance differences in the robust setting. We explain this by reinterpreting differential privacy as regularization.",
author = "{Petren Bach Hansen}, Victor and {Tejaswi Neerkaje}, Atula and Ramit Sawhney and Lucie Flek and Anders Sogaard",
year = "2022",
doi = "10.18653/v1/2022.privatenlp-1.2",
language = "English",
booktitle = "Proceedings of the Fourth Workshop on Privacy in Natural Language Processing",
publisher = "Association for Computational Linguistics",
note = "4th Workshop on Privacy in Natural Language Processing ; Conference date: 01-07-2022 Through 01-07-2022",

}

RIS

TY - GEN

T1 - The Impact of Differential Privacy on Group Disparity Mitigation

AU - Petren Bach Hansen, Victor

AU - Tejaswi Neerkaje, Atula

AU - Sawhney, Ramit

AU - Flek, Lucie

AU - Sogaard, Anders

PY - 2022

Y1 - 2022

N2 - The performance cost of differential privacy has, for some applications, been shown to be higher for minority groups; fairness, conversely, has been shown to disproportionally compromise the privacy of members of such groups. Most work in this area has been restricted to computer vision and risk assessment. In this paper, we evaluate the impact of differential privacy on fairness across four tasks, focusing on how attempts to mitigate privacy violations and between-group performance differences interact: Does privacy inhibit attempts to ensure fairness? To this end, we train (epsilon, delta)-differentially private models with empirical risk minimization and group distributionally robust training objectives. Consistent with previous findings, we find that differential privacy increases between-group performance differences in the baseline setting, but, more interestingly, differential privacy reduces between-group performance differences in the robust setting. We explain this by reinterpreting differential privacy as regularization.

AB - The performance cost of differential privacy has, for some applications, been shown to be higher for minority groups; fairness, conversely, has been shown to disproportionally compromise the privacy of members of such groups. Most work in this area has been restricted to computer vision and risk assessment. In this paper, we evaluate the impact of differential privacy on fairness across four tasks, focusing on how attempts to mitigate privacy violations and between-group performance differences interact: Does privacy inhibit attempts to ensure fairness? To this end, we train (epsilon, delta)-differentially private models with empirical risk minimization and group distributionally robust training objectives. Consistent with previous findings, we find that differential privacy increases between-group performance differences in the baseline setting, but, more interestingly, differential privacy reduces between-group performance differences in the robust setting. We explain this by reinterpreting differential privacy as regularization.

U2 - 10.18653/v1/2022.privatenlp-1.2

DO - 10.18653/v1/2022.privatenlp-1.2

M3 - Article in proceedings

BT - Proceedings of the Fourth Workshop on Privacy in Natural Language Processing

PB - Association for Computational Linguistics

T2 - 4th Workshop on Privacy in Natural Language Processing

Y2 - 1 July 2022 through 1 July 2022

ER -
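The abstract describes training (epsilon, delta)-differentially private models with empirical risk minimization and group distributionally robust (group DRO) objectives. As a rough illustration of how those two ingredients can combine, the sketch below pairs DP-SGD-style per-example gradient clipping and Gaussian noise with an exponentiated-gradient update of group weights on a toy logistic regression. The data, group split, and all hyperparameters are assumptions made for illustration only; they do not reproduce the paper's tasks, models, or privacy accounting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the paper's NLP tasks: 2-d features, binary labels,
# and a minority group (g = 1) making up roughly 20% of the data.
X = rng.normal(size=(200, 2))
g = (rng.random(200) < 0.2).astype(int)
y = (X[:, 0] + 0.5 * g > 0).astype(float)

w = np.zeros(2)                  # logistic-regression weights
q = np.ones(2) / 2               # group DRO mixture weights over the 2 groups
lr, eta = 0.5, 0.1               # learning rate, group-weight step size
clip, sigma = 1.0, 0.5           # DP-SGD clipping norm and noise multiplier

def loss_and_grad(w, xi, yi):
    """Binary cross-entropy loss and its gradient for a single example."""
    p = 1.0 / (1.0 + np.exp(-xi @ w))
    loss = -(yi * np.log(p + 1e-9) + (1 - yi) * np.log(1 - p + 1e-9))
    return loss, (p - yi) * xi

for step in range(200):
    idx = rng.choice(len(X), size=32, replace=False)
    grads, group_loss = [], np.zeros(2)
    for i in idx:
        li, gi = loss_and_grad(w, X[i], y[i])
        group_loss[g[i]] += li
        # DP-SGD ingredient: clip each per-example gradient to norm `clip`,
        # then scale it by its group's DRO weight (factor 2 so that uniform
        # q recovers plain ERM weighting).
        gi = gi / max(1.0, np.linalg.norm(gi) / clip)
        grads.append(2 * q[g[i]] * gi)
    # Group DRO ingredient: exponentiated-gradient step that shifts weight
    # toward the group with the higher average minibatch loss.
    counts = np.maximum(np.bincount(g[idx], minlength=2), 1)
    q = q * np.exp(eta * group_loss / counts)
    q = q / q.sum()
    # DP-SGD ingredient: Gaussian noise added to the sum of clipped gradients.
    noise = sigma * clip * rng.normal(size=2)
    w = w - lr * (np.sum(grads, axis=0) + noise) / len(idx)

accuracy = ((1.0 / (1.0 + np.exp(-X @ w)) > 0.5) == y).mean()
```

This sketch omits the privacy accounting (tracking epsilon and delta across steps) that a real DP-SGD implementation requires; it only shows where clipping, noise, and the group reweighting each enter the update.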
