A4 Refereed article in a conference publication
Ubiquitous Distributed Deep Reinforcement Learning at the Edge: Analyzing Byzantine Agents in Discrete Action Spaces
Authors: Wenshuai Zhao, Jorge Peña Queralta, Li Qingqing, Tomi Westerlund
Editors: Elhadi M. Shakshuki, Ansar Yasar
Conference name: International Conference on Emerging Ubiquitous Systems and Pervasive Networks
Publication year: 2020
Journal: Procedia Computer Science
Book title: The 11th International Conference on Emerging Ubiquitous Systems and Pervasive Networks (EUSPN 2020) / The 10th International Conference on Current and Future Trends of Information and Communication Technologies in Healthcare (ICTH 2020) / Affiliated Workshops
Series title: Procedia Computer Science
Volume: 177
First page: 324
Last page: 329
ISSN: 1877-0509
DOI: https://doi.org/10.1016/j.procs.2020.10.043
Self-archived copy’s web address: https://research.utu.fi/converis/portal/Publication/51395531
The integration of edge computing in next-generation mobile networks is bringing low-latency and high-bandwidth ubiquitous connectivity to a myriad of cyber-physical systems. This will further boost the increasing intelligence being embedded at the edge in various types of autonomous systems, where collaborative machine learning has the potential to play a significant role. This paper discusses some of the challenges in multi-agent distributed deep reinforcement learning that can occur in the presence of Byzantine or malfunctioning agents. As the simulation-to-reality gap is bridged, the probability of malfunctions or errors must be taken into account. We show how wrong discrete actions can significantly affect the collaborative learning effort. In particular, we analyze the effect of having a fraction of agents that perform the wrong action with a given probability. We study the ability of the system to converge towards a common working policy through the collaborative learning process, as a function of the number of experiences from each agent aggregated per policy update and of the fraction of wrong actions taken by malfunctioning agents. Our experiments are carried out in a simulation environment using the Atari testbed for discrete action spaces, and advantage actor-critic (A2C) for distributed multi-agent training.
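To make the fault model in the abstract concrete, the following minimal sketch shows one plausible way a Byzantine or malfunctioning agent in a discrete action space could be simulated: with a given probability, the action chosen by the agent's policy is replaced by a different, randomly selected action. The class and parameter names (ByzantineActionWrapper, fault_prob) are illustrative assumptions, not the authors' implementation.

```python
import random


class ByzantineActionWrapper:
    """Illustrative sketch of a faulty agent in a discrete action space.

    With probability `fault_prob`, the action selected by the wrapped
    policy is replaced by a uniformly sampled *different* action,
    mimicking a Byzantine or malfunctioning agent. This is an assumed
    fault model for illustration, not the paper's exact mechanism.
    """

    def __init__(self, policy, num_actions, fault_prob):
        self.policy = policy          # callable: observation -> action index
        self.num_actions = num_actions
        self.fault_prob = fault_prob  # probability of taking a wrong action

    def act(self, observation):
        action = self.policy(observation)
        if random.random() < self.fault_prob:
            # Malfunction: pick any other action from the discrete space.
            wrong_choices = [a for a in range(self.num_actions) if a != action]
            action = random.choice(wrong_choices)
        return action
```

In a distributed A2C setting such as the one described above, a fraction of the worker agents would be wrapped this way while their experiences are still aggregated into the shared policy update, which is what allows the study of convergence as that fraction and the batch of experiences per update vary.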
Downloadable publication: This is an electronic reprint of the original article.