A1 Refereed original research article in a scientific journal
Fairness and privacy preserving in federated learning: A survey
Authors: Rafi Taki Hasan, Noor Faiza Anan, Hussain Tahmid, Chae Dong-Kyu
Publisher: ELSEVIER
Publishing place: AMSTERDAM
Publication year: 2024
Journal: Information Fusion
Journal name in source: INFORMATION FUSION
Journal acronym: INFORM FUSION
Article number: 102198
Volume: 105
Number of pages: 26
ISSN: 1566-2535
eISSN: 1872-6305
DOI: https://doi.org/10.1016/j.inffus.2023.102198
Preprint address: https://arxiv.org/abs/2306.08402
Abstract
Federated Learning (FL) is an increasingly popular form of distributed machine learning that addresses privacy concerns by allowing participants to collaboratively train machine learning models without exchanging their private data. Although FL emerged as a privacy-preserving alternative to centralized machine learning approaches, it faces significant challenges in preserving the privacy of its clients and mitigating potential bias against clients or disadvantaged groups. Most existing research in FL has addressed these two ethical notions separately, whereas ensuring privacy and fairness simultaneously in FL systems is of paramount importance. Moreover, current research efforts fail to balance privacy, fairness, and model performance, leaving systems vulnerable to various problems. To provide a comprehensive overview of these critical challenges, this work presents an integrated study of privacy and fairness concerns in the context of FL. In addition to providing an extensive review of the current literature on privacy and fairness issues, we also examine the existing approaches for achieving a balance between these two ethical notions to develop robust FL systems. Finally, we highlight potential research directions related to the challenges of implementing privacy-preserving and fairness-aware FL systems.
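To make the training setting the abstract refers to concrete, below is a minimal sketch of a federated averaging (FedAvg-style) round in Python. It is not code from the surveyed paper: the linear model, the synthetic client datasets, and all hyperparameters are illustrative assumptions. It only shows the basic FL loop in which clients train locally on private data and the server aggregates model parameters rather than raw data.

```python
# Minimal FedAvg-style illustration of the FL setting described in the abstract.
# Generic sketch only: the linear model, synthetic client data, and
# hyperparameters are made up for illustration, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

def local_sgd(weights, X, y, lr=0.1, epochs=5):
    """One client's local training on its private data (never shared)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

# Synthetic private datasets for three clients (kept local in real FL).
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(20):  # communication rounds
    local_updates = [local_sgd(global_w, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    # Server aggregates only model parameters, weighted by local data size.
    global_w = np.average(local_updates, axis=0, weights=sizes)

print("learned weights:", global_w)  # approaches [2.0, -1.0]
```

The survey's privacy and fairness concerns arise precisely in this loop: the shared parameter updates can still leak information about client data, and size-weighted aggregation can bias the global model toward well-represented clients.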