A4 Refereed article in a conference publication
Comparing Deterministic and Stochastic Reinforcement Learning for Glucose Regulation in Type 1 Diabetes
Authors: Timms, David; Hettiarachchi, Chirath; Suominen, Hanna
Editors: Househ, Mowafa S.; Tariq, Zain Ul Abideen; Al-Zubaidi, Mahmood; Shah, Uzair; Huesing, Elaine
Conference name: World Congress on Medical and Health Informatics
Publisher: IOS Press
Publication year: 2025
Journal: Studies in Health Technology and Informatics
Book title: MEDINFO 2025 — Healthcare Smart × Medicine Deep: Proceedings of the 20th World Congress on Medical and Health Informatics
Volume: 329
First page: 1039
Last page: 1043
eISBN: 978-1-64368-608-0
ISSN: 0926-9630
eISSN: 1879-8365
DOI: 10.3233/SHTI250997
Web address: https://doi.org/10.3233/SHTI250997
Self-archived copy’s web address: https://research.utu.fi/converis/portal/detail/499745855
Type 1 Diabetes (T1D) is a chronic condition affecting millions worldwide, requiring external insulin administration to regulate blood glucose levels and prevent serious complications. Artificial Pancreas Systems (APS) for managing T1D currently rely on manual input, which places a cognitive burden on people with T1D and their carers. Research into alleviating this burden through Reinforcement Learning (RL) explores enabling the APS to autonomously learn and adapt to the complex dynamics of blood glucose regulation, with in-silico evaluations demonstrating improvements over traditional clinical approaches. This evaluation study compared the two primary polarities of RL for glucose regulation, namely stochastic (e.g., Proximal Policy Optimization (PPO)) and deterministic (e.g., Twin Delayed Deep Deterministic Policy Gradient (TD3)) algorithms, in-silico using quantitative and qualitative methods, patient-specific clinical metrics, and the adult and adolescent cohorts of the U.S. Food and Drug Administration-approved UVA/PADOVA 2008 model. Although the behavior of TD3 was easier to interpret, it did not typically outperform PPO, thereby complicating the assessment of their safety and suitability. This conclusion highlights the importance of improving RL algorithms in APS applications for both interpretability and predictive performance in future research.
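The comparison described in the abstract can be illustrated with a minimal sketch: both policy classes share one training and evaluation loop and differ only in the algorithm class passed in. The sketch below is not the authors' code; it assumes the Stable-Baselines3 implementations of PPO and TD3 and uses Gymnasium's Pendulum-v1 as a stand-in continuous-control task, since the glucose environment itself is not part of this record. An open-source re-implementation of the UVA/PADOVA 2008 simulator, such as the simglucose package, could be substituted for the placeholder environment.

    # Minimal sketch (not the authors' code): train a stochastic (PPO) and a
    # deterministic (TD3) agent on the same continuous-control task.
    # Pendulum-v1 stands in for a glucose-regulation environment, which would
    # expose CGM readings as observations and insulin doses as actions.
    import gymnasium as gym
    from stable_baselines3 import PPO, TD3
    from stable_baselines3.common.evaluation import evaluate_policy

    results = {}
    for algo in (PPO, TD3):
        env = gym.make("Pendulum-v1")
        model = algo("MlpPolicy", env, verbose=0)
        model.learn(total_timesteps=10_000)  # toy budget, far below a real study
        # deterministic=True evaluates the policy's greedy action; for PPO this
        # collapses its stochastic policy to its mean, while TD3 is already
        # deterministic, making the two policy classes directly comparable.
        mean_reward, std_reward = evaluate_policy(
            model, env, n_eval_episodes=10, deterministic=True
        )
        results[algo.__name__] = (mean_reward, std_reward)

    print(results)

In a study such as the one above, the scalar reward comparison would be replaced by patient-specific clinical metrics (e.g., time in glucose range) computed from the simulator's traces.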
Downloadable publication: This is an electronic reprint of the original article.
Funding information in the publication:
We gratefully acknowledge funding from the MRFF 2022 National Critical Research Infrastructure (MRFCRI000138, Developing a new digital therapeutic for depression: Closed-loop non-invasive brain stimulation). This work was supported by computational resources provided by the Australian Government through the National Computational Infrastructure under the ANU Merit Allocation Scheme (ny83 and eu59) and ANU Startup Scheme (sj53).