Aggregating Actor-Critic Value Functions to Support Military Decision-Making




Authors: Vasankari, Lauri; Virtanen, Kai

Editors: Moosaei, Hossein; Kotsireas, Ilias; Pardalos, Panos M.

Conference: International Conference on the Dynamics of Information Systems

Publisher: Springer Nature Switzerland

Year: 2025

Series: Lecture Notes in Computer Science

Book: Dynamics of Information Systems: 7th International Conference, DIS 2024, Kalamata, Greece, June 2–7, 2024, Revised Selected Papers

Volume: 14661

Pages: 141–152

Print ISBN: 978-3-031-81009-1

Online ISBN: 978-3-031-81010-7

Print ISSN: 0302-9743

Online ISSN: 1611-3349

DOI: https://doi.org/10.1007/978-3-031-81010-7_10



Reinforcement learning (RL) is used to find optimal policies for agents in their respective environments. The learned policies can be utilized in decision support, i.e., suggesting or determining optimal actions for given states or observations of the environment. An actor-critic RL method combines policy gradient methods with value functions: the critic estimates the value function, and the actor updates the policy in the direction indicated by the critic. Usually, the output of interest is the policy learned by the actor. However, if the environment is defined accordingly, the approximated value function itself can be used to assess, e.g., an optimal placement of military units in an operational theatre. This paper explores the use of the critic, rather than the actor, as the primary output of a decision-support tool, presenting an experiment in a littoral warfare environment.
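
To illustrate the idea described in the abstract, the following minimal sketch trains a tabular one-step actor-critic on a toy corridor environment and then queries the learned critic to rank candidate "placements" (here, simply starting positions) by their estimated value. The corridor dynamics, rewards, and candidate set are illustrative assumptions for this sketch, not the paper's littoral warfare environment or its actual model.

import numpy as np

# Toy corridor: positions 0..7, actions 0 = left, 1 = right.
# Reaching the rightmost state (a hypothetical objective) gives reward 1.
N_STATES, N_ACTIONS = 8, 2
GOAL = N_STATES - 1
ALPHA_V, ALPHA_PI, GAMMA = 0.1, 0.05, 0.95

rng = np.random.default_rng(0)
V = np.zeros(N_STATES)                    # critic: state-value estimates
theta = np.zeros((N_STATES, N_ACTIONS))   # actor: policy logits

def policy(s):
    # Softmax over the actor's logits for state s.
    z = np.exp(theta[s] - theta[s].max())
    return z / z.sum()

def step(s, a):
    # Deterministic move, clamped to the corridor.
    s2 = min(max(s + (1 if a == 1 else -1), 0), N_STATES - 1)
    reward = 1.0 if s2 == GOAL else 0.0
    return s2, reward, s2 == GOAL

for _ in range(2000):
    s, done = 0, False
    while not done:
        p = policy(s)
        a = rng.choice(N_ACTIONS, p=p)
        s2, r, done = step(s, a)
        # One-step TD error from the critic directs both updates.
        delta = r + (0.0 if done else GAMMA * V[s2]) - V[s]
        V[s] += ALPHA_V * delta               # critic update
        grad = -p                              # grad of log-softmax ...
        grad[a] += 1.0                         # ... w.r.t. logits of state s
        theta[s] += ALPHA_PI * delta * grad    # actor update
        s = s2

# Decision support via the critic: rank candidate placements by value.
candidates = [0, 2, 4, 6]
ranked = sorted(candidates, key=lambda st: V[st], reverse=True)
print("state values:", np.round(V, 3))
print("candidate placements ranked by critic:", ranked)

After training, the critic's table V is consulted directly instead of the actor's policy, mirroring the paper's premise that the approximated value function itself carries decision-relevant information about states such as unit placements.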


