A4 Peer-reviewed article in conference proceedings

Sim-to-Real Transfer in Deep Reinforcement Learning for Robotics: A Survey




Authors: Zhao W., Queralta J.P., Westerlund T.

Established conference name: IEEE Symposium Series on Computational Intelligence

Publisher: Institute of Electrical and Electronics Engineers Inc.

Publication year: 2020

Title of the collection: 2020 IEEE Symposium Series on Computational Intelligence (SSCI)

Journal name in the database: 2020 IEEE Symposium Series on Computational Intelligence, SSCI 2020

First page: 737

Last page: 744

ISBN: 978-1-7281-2548-0

eISBN: 978-1-7281-2547-3

DOI: https://doi.org/10.1109/SSCI47803.2020.9308468

Self-archived copy address: https://arxiv.org/abs/2009.13303


Abstract
Deep reinforcement learning has recently seen huge success across multiple areas in the robotics domain. Owing to the limitations of gathering real-world data, i.e., sample inefficiency and the cost of collecting it, simulation environments are utilized for training the different agents. This not only aids in providing a potentially infinite data source, but also alleviates safety concerns with real robots. Nonetheless, the gap between the simulated and real worlds degrades the performance of the policies once the models are transferred into real robots. Multiple research efforts are therefore now being directed towards closing this sim-to-real gap and accomplishing more efficient policy transfer. Recent years have seen the emergence of multiple methods applicable to different domains, but there is a lack, to the best of our knowledge, of a comprehensive review summarizing and putting into context the different methods. In this survey paper, we cover the fundamental background behind sim-to-real transfer in deep reinforcement learning and overview the main methods being utilized at the moment: domain randomization, domain adaptation, imitation learning, meta-learning and knowledge distillation. We categorize some of the most relevant recent works, and outline the main application scenarios. Finally, we discuss the main opportunities and challenges of the different approaches and point to the most promising directions.



Last updated on 2024-11-26 at 23:08