A4 Refereed article in a conference publication
How to run a world record? A Reinforcement Learning approach
Authors: Shahsavari Sajad, Immonen Eero, Karami Masoomeh, Haghbayan Mohammadhashem, Plosila Juha
Editors: Ibrahim A. Hameed, Agus Hasan, Saleh Abdel-Afou Alaliyat
Conference name: European Conference on Modelling and Simulation
Publisher: European Council for Modelling and Simulation
Publication year: 2022
Journal: Proceedings: European Conference for Modelling and Simulation
Book title: Proceedings of the 36th ECMS International Conference on Modelling and Simulation, ECMS 2022, May 30th – June 3rd, 2022, Ålesund, Norway
Journal name in source: Proceedings - European Council for Modelling and Simulation, ECMS
Series title: Proceedings: European Conference for Modelling and Simulation
Number in series: 36
Volume: 1
First page: 159
Last page: 166
ISBN: 978-3-937436-77-7
ISSN: 2522-2414
eISSN: 2522-2422
Web address: https://www.scs-europe.net/dlib/2022/2022-0159.html
Self-archived copy’s web address: https://research.utu.fi/converis/portal/detail/Publication/175657877
Finding the optimal distribution of effort exerted by an athlete in competitive sports has been widely investigated in the fields of sport science, applied mathematics, and optimal control. In this article, we propose a reinforcement learning-based solution to the optimal control problem in the running race application. The well-known mathematical model of Keller is used to numerically simulate the dynamics of the runner's energy storage and motion. A feed-forward neural network is employed as a probabilistic controller model in continuous action space, mapping the current state (position, velocity, and available energy) of the runner to the predicted optimal propulsive force that the runner should apply in the next time step. A logarithmic barrier reward function is designed to evaluate the performance of simulated races as a continuous, smooth function of the runner's position and time. The neural network parameters are then identified by maximizing the expected reward using an on-policy actor-critic policy-gradient RL algorithm. We trained the controller model for three race lengths: 400, 1500, and 10000 meters, and found the force and velocity profiles that produce a near-optimal solution to the runner's problem. The results conform with Keller's theoretical findings with a relative percent error of 0.59% and are comparable to real-world records with a relative percent error of 2.38%, while the same error for Keller's findings is 2.82%.
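The abstract builds on Keller's classic running model, whose dynamics are well documented: per unit body mass, dv/dt = f − v/τ and dE/dt = σ − f·v, with 0 ≤ f ≤ F and E ≥ 0. As a minimal sketch of how such a race simulation could look, the Python snippet below forward-Euler-integrates these equations using Keller's published constants. The integration step, the constant-force baseline policy, and the log_barrier_reward form are illustrative assumptions only; the paper's actual discretization, reward function, and trained policy are not reproduced here.

```python
import numpy as np

# Keller's model of running (all quantities per unit body mass):
#   dx/dt = v
#   dv/dt = f - v / tau          propulsion minus linear resistance
#   dE/dt = sigma - f * v        aerobic replenishment minus mechanical power
# with constraints 0 <= f <= F_MAX and E >= 0.
TAU = 0.892      # resistance time constant [s] (Keller's fitted value)
F_MAX = 12.2     # maximal propulsive force [m/s^2]
SIGMA = 9.83     # energy replenishment rate [m^2/s^3]
E0 = 2409.0      # initial anaerobic energy store [m^2/s^2]
DT = 0.1         # forward-Euler step [s] (assumption, not from the paper)

def step(state, f):
    """One forward-Euler step of Keller's dynamics for applied force f."""
    x, v, e = state
    f = float(np.clip(f, 0.0, F_MAX))
    if e <= 0.0:
        # Energy store exhausted: spend no more power than is replenished
        # (a simple handling of the E >= 0 constraint, assumed here).
        f = min(f, SIGMA / max(v, 1e-6))
    x_new = x + DT * v
    v_new = v + DT * (f - v / TAU)
    e_new = max(e + DT * (SIGMA - f * v), 0.0)
    return (x_new, v_new, e_new)

def log_barrier_reward(x, t, distance):
    """Hypothetical smooth log-barrier reward: grows as the runner nears
    the finish line and penalizes elapsed time. The paper's exact reward
    is not given here; this is a placeholder of the same general shape."""
    return -np.log(max(distance - x, 1e-3)) - 0.01 * t

# Roll out a constant-force baseline over a 400 m race; the paper's RL
# policy would instead choose f from the state (x, v, E) at every step.
if __name__ == "__main__":
    state, t, distance = (0.0, 0.0, E0), 0.0, 400.0
    while state[0] < distance and t < 120.0:
        state = step(state, f=8.0)   # placeholder constant force
        t += DT
    print(f"finished {distance} m in ~{t:.1f} s, "
          f"residual energy {state[2]:.0f} m^2/s^2")
```

With these constants, the constant-force rollout depletes the energy store near the end of the race and slows accordingly, which illustrates why the force profile itself is the decision variable the RL controller must optimize.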
Downloadable publication: This is an electronic reprint of the original article.