A4 Refereed article in a conference publication

Aggregating Actor-Critic Value Functions to Support Military Decision-Making




Authors: Vasankari, Lauri; Virtanen, Kai

Editors: Moosaei, Hossein; Kotsireas, Ilias; Pardalos, Panos M.

Conference name: International Conference on the Dynamics of Information Systems

Publisher: Springer Nature Switzerland

Publication year: 2025

Journal: Lecture Notes in Computer Science

Book title: Dynamics of Information Systems: 7th International Conference, DIS 2024, Kalamata, Greece, June 2–7, 2024, Revised Selected Papers

Volume: 14661

First page: 141

Last page: 152

ISBN: 978-3-031-81009-1

eISBN: 978-3-031-81010-7

ISSN: 0302-9743

eISSN: 1611-3349

DOI: https://doi.org/10.1007/978-3-031-81010-7_10

Web address: https://doi.org/10.1007/978-3-031-81010-7_10


Abstract

Reinforcement learning (RL) is used to find optimal policies for agents in their environments. The obtained policies can be utilized in decision support, i.e., suggesting or determining optimal actions for different states or observations of the environment. Actor-critic RL methods combine policy gradient methods with value functions: the critic estimates the value function, and the actor updates the policy in the direction indicated by the critic. Usually, the output of interest is the policy learned by the actor. However, if the environment is defined accordingly, the approximated value function can instead be used to assess, e.g., an optimal placement of military units in an operational theatre. This paper explores the use of the critic, rather than the actor, as the primary output of a decision-support tool, presenting an experiment in a littoral warfare environment.
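To illustrate the idea of querying the critic rather than the actor, the minimal Python sketch below ranks candidate unit placements by the critic's estimated state value. This is not the paper's implementation: the function critic_value is a dummy stand-in for a trained critic network, and the state encoding, cell grid, and placement enumeration are hypothetical assumptions introduced only for the example.

    # Minimal sketch (hypothetical): using an actor-critic's learned
    # critic V(s) to rank candidate unit placements for decision support.
    # critic_value is a dummy stand-in so the example runs stand-alone.

    from itertools import combinations
    from typing import Sequence, Tuple, List

    def critic_value(state: Sequence[float]) -> float:
        # Placeholder for the trained critic V(s); a real critic would be
        # a neural network mapping the environment state to a scalar value.
        return -sum((x - 0.5) ** 2 for x in state)

    def encode_placement(placement: Tuple[int, ...], n_cells: int) -> List[float]:
        # One-hot style state encoding: 1.0 where a unit is placed, else 0.0.
        return [1.0 if i in placement else 0.0 for i in range(n_cells)]

    def rank_placements(n_cells: int, n_units: int, top_k: int = 3):
        # Enumerate all placements of n_units over n_cells and rank them
        # by the critic's estimated value of the resulting state.
        scored = [
            (critic_value(encode_placement(p, n_cells)), p)
            for p in combinations(range(n_cells), n_units)
        ]
        scored.sort(reverse=True)
        return scored[:top_k]

    if __name__ == "__main__":
        for value, placement in rank_placements(n_cells=6, n_units=2):
            print(f"placement {placement}: estimated value {value:.3f}")

Exhaustive enumeration is only feasible for small grids; for a realistic operational theatre, the same critic could score a sampled or heuristically generated set of candidate placements instead.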


