A1 Refereed original research article in a scientific journal

Moral psychological exploration of the asymmetry effect in AI-assisted euthanasia decisions




Authors: Laakasuo, Michael; Kunnari, Anton; Francis, Kathryn; Košová, Michaela Jirout; Kopecký, Robin; Buttazzoni, Paolo; Koverola, Mika; Palomäki, Jussi; Drosinou, Marianna; Hannikainen, Ivar

Publisher: Elsevier BV

Publishing place: Amsterdam

Publication year: 2025

Journal: Cognition

Journal name in source: Cognition

Journal acronym: COGNITION

Article number: 106177

Volume: 262

Number of pages: 20

ISSN: 0010-0277

eISSN: 1873-7838

DOI: https://doi.org/10.1016/j.cognition.2025.106177

Publication's open availability at the time of reporting: Open Access

Publication channel's open availability: Partially Open Access publication channel

Web address: https://www.sciencedirect.com/science/article/pii/S0010027725001179?via%3Dihub

Self-archived copy's web address: https://research.utu.fi/converis/portal/detail/Publication/498626458


Abstract
A recurring discrepancy in attitudes toward decisions made by human versus artificial agents, termed the Human-Robot moral judgment asymmetry, has been documented in the moral psychology of AI. Across a wide range of contexts, AI agents are subject to greater moral scrutiny than humans for the same actions and decisions. In eight experiments (total N = 5837), we investigated whether the asymmetry effect arises in end-of-life care contexts and explored the mechanisms underlying it. Our studies documented reduced approval of an AI doctor's decision to withdraw life support relative to a human doctor's (Studies 1a and 1b). This effect persisted regardless of whether the AI assumed a recommender role or made the final medical decision (Studies 2a, 2b, and 3) but, importantly, disappeared under two conditions: when doctors maintained rather than withdrew life support (Studies 1a, 1b, and 3), and when they carried out active euthanasia (e.g., administering a lethal injection or removing a respirator at the patient's request) rather than passive euthanasia (Study 4). These findings highlight two contextual factors, the level of automation and the patient's autonomy, that influence the presence of the asymmetry effect, neither of which is predicted by existing theories. Finally, we found that the asymmetry effect was partly explained by perceptions of AI incompetence (Study 5) and limited explainability (Study 6). As the role of AI in medicine continues to expand, our findings help to outline the conditions under which stakeholders disfavor AI over human doctors in clinical settings.

Downloadable publication

This is an electronic reprint of the original article.
This reprint may differ from the original in pagination and typographic detail. Please cite the original version.




Funding information in the publication
This research was funded by a Research Council of Finland grant (360123) awarded to Michael Laakasuo, who was the principal investigator and conceptualized the research. Marianna Drosinou was additionally funded by the Tiina and Antti Herlin Foundation. This research is part of the NetResilience consortium funded by the Strategic Research Council within the Academy of Finland (grant numbers 345186 and 345183). Additional data collection by Kathryn Francis and Ivar Hannikainen was funded by corresponding departmental grants.

