A4 Refereed article in a conference publication

Towards Lifelong Federated Learning in Autonomous Mobile Robots with Continuous Sim-to-Real Transfer




Authors: Yu Xianjia, Peña Queralta Jorge, Westerlund Tomi

Editors: Elhadi Shakshuki

Conference name: International Conference on Emerging Ubiquitous Systems and Pervasive Networks

Publication year: 2022

Journal: Procedia Computer Science

Book title: The 13th International Conference on Emerging Ubiquitous Systems and Pervasive Networks (EUSPN) / The 12th International Conference on Current and Future Trends of Information and Communication Technologies in Healthcare (ICTH-2022) / Affiliated Workshops

Series title: Procedia Computer Science

Volume: 210

First page: 86

Last page: 93

eISSN: 1877-0509

DOI: https://doi.org/10.1016/j.procs.2022.10.123

Web address: https://doi.org/10.1016/j.procs.2022.10.123

Self-archived copy's web address: https://research.utu.fi/converis/portal/detail/Publication/177096412


Abstract

The role of deep learning (DL) in robotics has significantly deepened over the last decade. Intelligent robotic systems today are highly connected systems that rely on DL for a variety of perception, control, and other tasks. At the same time, autonomous robots are being increasingly deployed as part of fleets, with collaboration among robots becoming a more relevant factor. From the perspective of collaborative learning, federated learning (FL) enables continuous training of models in a distributed, privacy-preserving way. This paper focuses on vision-based obstacle avoidance for mobile robot navigation. On this basis, we explore the potential of FL for distributed systems of mobile robots, enabling continuous learning by engaging robots in both simulated and real-world scenarios. We extend previous works by studying the performance of different image classifiers for FL, compared to centralized, cloud-based learning with a priori aggregated data. We also introduce an approach to continuous learning from mobile robots with extended sensor suites able to provide automatically labelled data while they are completing other tasks. We show that higher accuracies can be achieved by training the models in both simulation and reality, enabling continuous updates to deployed models.
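Illustrative example: the federated learning setup described in the abstract can be pictured as a set of robots, each training an obstacle-avoidance image classifier on its own automatically labelled frames, with an aggregator periodically averaging their weights. The sketch below shows a minimal FedAvg-style round in PyTorch; the model architecture, client setup, and hyperparameters are assumptions for illustration and do not reflect the specific classifiers or training configuration evaluated in the paper.

```python
# Minimal FedAvg-style sketch: each client (robot) trains locally on its own
# labelled frames, and the server averages parameters weighted by data size.
# All names, the CNN, and the hyperparameters are illustrative placeholders.
import copy
import torch
import torch.nn as nn


class ObstacleClassifier(nn.Module):
    """Small CNN predicting 'blocked' vs 'free' from a camera frame."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))


def local_update(global_model, loader, epochs=1, lr=1e-3):
    """Train a copy of the global model on one robot's local data."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    num_samples = 0
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss_fn(model(images), labels).backward()
            opt.step()
            num_samples += len(labels)
    return model.state_dict(), num_samples


def fed_avg_round(global_model, client_loaders):
    """One federated round: local updates, then weighted parameter averaging."""
    updates = [local_update(global_model, dl) for dl in client_loaders]
    total = sum(n for _, n in updates)
    new_state = copy.deepcopy(global_model.state_dict())
    for key in new_state:
        new_state[key] = sum(sd[key].float() * (n / total) for sd, n in updates)
    global_model.load_state_dict(new_state)
    return global_model
```

In the setting described above, each client loader would correspond to one robot, simulated or real, contributing locally collected and automatically labelled data; communication between the robots and the aggregator is omitted from this sketch.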


Downloadable publication

This is an electronic reprint of the original article.
This reprint may differ from the original in pagination and typographic detail. Please cite the original version.




