A4 Refereed article in a conference publication
Aggregating Actor-Critic Value Functions to Support Military Decision-Making
Authors: Vasankari, Lauri; Virtanen, Kai
Editors: Moosaei, Hossein; Kotsireas, Ilias; Pardalos, Panos M.
Conference name: International Conference on the Dynamics of Information Systems
Publisher: Springer Nature Switzerland
Publication year: 2025
Journal: Lecture Notes in Computer Science
Book title: Dynamics of Information Systems: 7th International Conference, DIS 2024, Kalamata, Greece, June 2–7, 2024, Revised Selected Papers
Volume: 14661
First page: 141
Last page: 152
ISBN: 978-3-031-81009-1
eISBN: 978-3-031-81010-7
ISSN: 0302-9743
eISSN: 1611-3349
DOI: https://doi.org/10.1007/978-3-031-81010-7_10
Web address: https://doi.org/10.1007/978-3-031-81010-7_10
Reinforcement learning (RL) is used to find optimal policies for agents acting in their environments. The learned policies can be utilized in decision support, i.e., to suggest or determine optimal actions for the states or observations encountered in the environment. An actor-critic RL method combines policy gradient methods with value functions: the critic estimates the value function, and the actor updates the policy in the direction indicated by the critic. Usually, the output of interest is the policy learned by the actor. However, if the environment is defined accordingly, the approximated value function can itself be used to assess candidate solutions, e.g., the optimal placement of military units in an operational theatre. This paper explores using the critic, rather than the actor, as the primary output of a decision-support tool, and presents an experiment in a littoral warfare environment.
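To make the idea concrete, the following is a minimal sketch, not the paper's implementation: a one-step advantage actor-critic update in Python/PyTorch, followed by a query in which the trained critic's value estimates V(s) rank candidate states, such as alternative unit placements. The state dimension, network sizes, reward, and the candidate-placement query are all hypothetical placeholders.

```python
import torch
import torch.nn as nn

# Hypothetical state/action dimensions; not taken from the paper.
STATE_DIM, N_ACTIONS = 8, 4

class ActorCritic(nn.Module):
    """Shared-body actor-critic: the actor head outputs action logits,
    the critic head outputs a scalar state-value estimate V(s)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.Tanh())
        self.actor = nn.Linear(64, N_ACTIONS)   # policy head
        self.critic = nn.Linear(64, 1)          # value head

    def forward(self, state):
        h = self.body(state)
        return self.actor(h), self.critic(h)

net = ActorCritic()
opt = torch.optim.Adam(net.parameters(), lr=3e-4)
gamma = 0.99

# One advantage actor-critic update on a dummy transition (s, a, r, s').
s, s_next, reward = torch.randn(STATE_DIM), torch.randn(STATE_DIM), 1.0

logits, value = net(s)
dist = torch.distributions.Categorical(logits=logits)
action = dist.sample()

with torch.no_grad():
    _, value_next = net(s_next)            # bootstrap target V(s')
td_target = reward + gamma * value_next
advantage = (td_target - value).detach()   # TD error drives the actor

actor_loss = -dist.log_prob(action) * advantage   # policy gradient, directed by the critic
critic_loss = (td_target - value).pow(2)          # fit V(s) toward the TD target
opt.zero_grad()
(actor_loss + critic_loss).sum().backward()
opt.step()

# Decision support via the critic: after training, V(s) can rank candidate
# states, e.g. alternative unit placements, without executing the policy.
candidate_states = torch.randn(5, STATE_DIM)      # hypothetical placements
with torch.no_grad():
    _, values = net(candidate_states)
best = values.squeeze(-1).argmax()
print(f"Highest-value candidate placement: {best.item()}")
```

The final block is the point of departure from standard actor-critic usage: instead of rolling out the actor's policy, the decision-maker compares the critic's value estimates across alternative initial states, here encoded as candidate placements.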