Moral psychological exploration of the asymmetry effect in AI-assisted euthanasia decisions
Authors: Laakasuo, Michael; Kunnari, Anton; Francis, Kathryn; Košová, Michaela Jirout; Kopecký, Robin; Buttazzoni, Paolo; Koverola, Mika; Palomäki, Jussi; Drosinou, Marianna; Hannikainen, Ivar
Publisher: Elsevier BV
Place of publication: AMSTERDAM
Year: 2025
Journal: Cognition
Article number: 106177
Volume: 262
Number of pages: 20
ISSN: 0010-0277
eISSN: 1873-7838
DOI: https://doi.org/10.1016/j.cognition.2025.106177
URL: https://www.sciencedirect.com/science/article/pii/S0010027725001179?via%3Dihub
URL: https://research.utu.fi/converis/portal/detail/Publication/498626458
Abstract: A recurring discrepancy in attitudes toward decisions made by human versus artificial agents, termed the Human-Robot moral judgment asymmetry, has been documented in the moral psychology of AI. Across a wide range of contexts, AI agents are subject to greater moral scrutiny than humans for the same actions and decisions. In eight experiments (total N = 5837), we investigated whether the asymmetry effect arises in end-of-life care contexts and explored the mechanisms underlying this effect. Our studies documented reduced approval of an AI doctor's decision to withdraw life support relative to a human doctor's (Studies 1a and 1b). This effect persisted regardless of whether the AI assumed a recommender role or made the final medical decision (Studies 2a, 2b and 3), but, importantly, disappeared under two conditions: when doctors maintained rather than withdrew life support (Studies 1a, 1b and 3), and when they carried out active euthanasia (e.g., providing a lethal injection or removing a respirator at the patient's request) rather than passive euthanasia (Study 4). These findings highlight two contextual factors, the level of automation and the patient's autonomy, that influence the presence of the asymmetry effect, neither of which is predicted by existing theories. Finally, we found that the asymmetry effect was partly explained by perceptions of AI incompetence (Study 5) and limited explainability (Study 6). As the role of AI in medicine continues to expand, our findings help to outline the conditions under which stakeholders disfavor AI over human doctors in clinical settings.
Funding: This research was funded by a Research Council of Finland grant (360123) awarded to Michael Laakasuo, who was the principal investigator and conceptualized the research. Marianna Drosinou was additionally funded by the Tiina and Antti Herlin Foundation. This research is part of the NetResilience consortium, funded by the Strategic Research Council within the Academy of Finland (grant numbers 345186 and 345183). Additional data collection by Kathryn Francis and Ivar Hannikainen was funded by corresponding departmental grants.