A4 Peer-reviewed article in conference proceedings
Ubiquitous Distributed Deep Reinforcement Learning at the Edge: Analyzing Byzantine Agents in Discrete Action Spaces
Authors: Wenshuai Zhao, Jorge Peña Queralta, Li Qingqing, Tomi Westerlund
Editors: Elhadi M. Shakshuki, Ansar Yasar
Established conference name: International Conference on Emerging Ubiquitous Systems and Pervasive Networks
Publication year: 2020
Journal: Procedia Computer Science
Title of the proceedings: The 11th International Conference on Emerging Ubiquitous Systems and Pervasive Networks (EUSPN 2020) / The 10th International Conference on Current and Future Trends of Information and Communication Technologies in Healthcare (ICTH 2020) / Affiliated Workshops
Series name: Procedia Computer Science
Volume: 177
First page: 324
Last page: 329
ISSN: 1877-0509
DOI: https://doi.org/10.1016/j.procs.2020.10.043
Self-archived copy available at: https://research.utu.fi/converis/portal/Publication/51395531
The integration of edge computing in next-generation mobile networks is bringing low-latency and high-bandwidth ubiquitous connectivity to a myriad of cyber-physical systems. This will further boost the increasing intelligence that is being embedded at the edge in various types of autonomous systems, where collaborative machine learning has the potential to play a significant role. This paper discusses some of the challenges in multi-agent distributed deep reinforcement learning that can occur in the presence of Byzantine or malfunctioning agents. As the simulation-to-reality gap gets bridged, the probability of malfunctions or errors must be taken into account. We show how wrong discrete actions can significantly affect the collaborative learning effort. In particular, we analyze the effect of having a fraction of agents that might perform the wrong action with a given probability. We study the ability of the system to converge towards a common working policy through the collaborative learning process, as a function of the number of experiences from each agent aggregated for each policy update and of the fraction of wrong actions taken by malfunctioning agents. Our experiments are carried out in a simulation environment using the Atari testbed for the discrete action spaces, and advantage actor-critic (A2C) for the distributed multi-agent training.
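To illustrate the kind of faulty behaviour the abstract describes, the sketch below models a Byzantine agent that, with a given probability, executes a wrong discrete action instead of the one chosen by its policy. This is a minimal illustration assuming a Gym-style environment with a discrete action space; the wrapper name, the `error_prob` parameter, and the overall structure are illustrative assumptions, not the authors' implementation.

```python
import random


class ByzantineActionWrapper:
    """Hypothetical wrapper around a Gym-style environment.

    With probability `error_prob`, the action selected by the agent's policy
    is replaced by a different, uniformly drawn action before being executed,
    emulating a malfunctioning (Byzantine) agent in a discrete action space.
    """

    def __init__(self, env, error_prob):
        self.env = env
        self.error_prob = error_prob          # probability of a wrong action
        self.n_actions = env.action_space.n   # size of the discrete action space

    def step(self, action):
        if random.random() < self.error_prob:
            # Substitute a wrong action chosen uniformly among the others.
            wrong_actions = [a for a in range(self.n_actions) if a != action]
            action = random.choice(wrong_actions)
        return self.env.step(action)

    def reset(self, **kwargs):
        return self.env.reset(**kwargs)
```

In a distributed A2C setup of the kind described in the abstract, only a fraction of the parallel workers would be wrapped this way (for example, wrapping `n_byzantine` out of `n_agents` environment instances), while the learner still aggregates a fixed number of experiences from every agent for each policy update; both quantities are the experimental knobs the paper studies.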
Downloadable publication: This is an electronic reprint of the original article.