Exploring the Unfairness of DP-SGD Across Settings

Research output: Contribution to conference › Paper › Research

Documents

  • Fulltext

    Final published version, 489 KB, PDF document

End users and regulators require private and fair artificial intelligence models, but previous work suggests these objectives may be at odds. We use the CivilComments dataset to evaluate the impact of applying the de facto standard approach to privacy, differentially private stochastic gradient descent (DP-SGD), across several fairness metrics. We evaluate three applications of DP-SGD: dimensionality reduction (PCA), linear classification (logistic regression), and robust deep learning (Group-DRO). We establish a negative, logarithmic correlation between privacy and fairness for linear classification and robust deep learning. DP-SGD had no significant impact on fairness for PCA, but upon inspection it also did not appear to produce private representations.
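For context, DP-SGD provides privacy by clipping each example's gradient and adding Gaussian noise before every optimizer step. The sketch below is not the paper's code: it is a minimal, assumed setup of DP-SGD for a linear (logistic-regression-style) classifier using the Opacus library, with synthetic placeholder features standing in for CivilComments; the noise_multiplier and max_grad_norm values are illustrative only.

```python
# Minimal DP-SGD sketch (illustrative, not the authors' implementation).
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Synthetic placeholder data standing in for CivilComments features/labels.
X = torch.randn(10_000, 128)
y = torch.randint(0, 2, (10_000,))
loader = DataLoader(TensorDataset(X, y), batch_size=256, shuffle=True)

model = nn.Linear(128, 2)                      # logistic regression as a single linear layer
optimizer = optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

# Wrap model, optimizer, and loader so per-example gradients are clipped and noised.
privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.0,   # more noise -> stronger privacy, typically lower utility (illustrative value)
    max_grad_norm=1.0,      # per-example gradient clipping bound (illustrative value)
)

for epoch in range(3):
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()

# Privacy budget spent so far, for a chosen delta.
print("epsilon spent:", privacy_engine.get_epsilon(delta=1e-5))
```

Sweeping noise_multiplier (and hence epsilon) in a setup like this is one way to trace the privacy–fairness trade-off the abstract describes, by recomputing group fairness metrics at each privacy level.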
Original language: English
Publication date: 2022
Number of pages: 6
Publication status: Published - 2022
Event: Third AAAI Workshop on Privacy-Preserving Artificial Intelligence (PPAI-22) - VIRTUAL
Duration: 28 Feb 2022 → …

Conference

Conference: Third AAAI Workshop on Privacy-Preserving Artificial Intelligence (PPAI-22)
City: VIRTUAL
Period: 28/02/2022 → …

ID: 341484877