A1 Peer-reviewed original article in a scientific journal
Fairness and privacy preserving in federated learning: A survey
Authors: Rafi Taki Hasan, Noor Faiza Anan, Hussain Tahmid, Chae Dong-Kyu
Publisher: ELSEVIER
Place of publication: AMSTERDAM
Year of publication: 2024
Journal: Information Fusion
Journal name in the database: INFORMATION FUSION
Journal acronym: INFORM FUSION
Article number: 102198
Volume: 105
Number of pages: 26
ISSN: 1566-2535
eISSN: 1872-6305
DOI: https://doi.org/10.1016/j.inffus.2023.102198
Preprint URL: https://arxiv.org/abs/2306.08402
Abstract
Federated Learning (FL) is an increasingly popular form of distributed machine learning that addresses privacy concerns by allowing participants to collaboratively train machine learning models without exchanging their private data. Although FL emerged as a privacy-preserving alternative to centralized machine learning approaches, it faces significant challenges in preserving the privacy of its clients and mitigating potential bias against clients or disadvantaged groups. Most existing research in FL has addressed these two ethical notions separately, whereas ensuring privacy and fairness simultaneously in FL systems is of paramount importance. Moreover, current research efforts fail to balance privacy, fairness, and model performance, leaving systems vulnerable to various problems. To provide a comprehensive overview of these critical challenges, this work presents an integrated study of privacy and fairness concerns in the context of FL. In addition to providing an extensive review of the current literature on privacy and fairness issues, we also examine the existing approaches for achieving a balance between these two ethical notions to develop robust FL systems. Finally, we highlight potential research directions related to the challenges of implementing privacy-preserving and fairness-aware FL systems.
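The collaborative-training-without-data-exchange idea described in the abstract can be illustrated with a minimal federated averaging sketch. This is an illustrative assumption, not the method of the surveyed paper: the toy linear-regression clients, learning rate, and round counts below are all invented for demonstration, and no privacy or fairness mechanism is included.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: plain gradient descent on a
    least-squares loss. The raw data (X, y) never leaves the client;
    only the updated weights are returned to the server."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(global_w, client_data):
    """Server step: collect client updates and average them,
    weighted by each client's dataset size."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Toy setup: two clients whose data follow the same linear model y = 2x.
rng = np.random.default_rng(0)
clients = []
for n in (20, 40):
    X = rng.normal(size=(n, 1))
    clients.append((X, X @ np.array([2.0])))

w = np.zeros(1)
for _ in range(10):  # communication rounds
    w = fed_avg(w, clients)
print(w)  # approaches the true coefficient 2.0
```

Even this sketch hints at the tensions the survey studies: the shared weight updates can still leak information about client data, and size-weighted averaging can bias the global model toward larger clients.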