A1 Peer-reviewed original article in a scientific journal
Large language models (LLMs) as jurors: Assessing the potential of LLMs in legal contexts.
Authors: Sun, Yongjie; Zappalà, Angelo; Di Maso, Eleonora; Pompedda, Francesco; Nyman, Thomas J.; Santtila, Pekka
Publisher: American Psychological Association (APA)
Publication year: 2025
Journal: Law and Human Behavior
ISSN: 0147-7307
eISSN: 1573-661X
DOI: https://doi.org/10.1037/lhb0000620
URL: https://doi.org/10.1037/lhb0000620
Objective
We explored the potential of large language models (LLMs) in legal decision making by replicating Fraser et al.'s (2023) mock jury experiment using LLMs (GPT-4o, Claude 3.5 Sonnet, and GPT-o1) as decision makers. We investigated LLMs' reactions to factors that influenced human jurors, including defendant race, social status, number of allegations, and reporting delay in sexual assault cases.
Hypotheses
We hypothesized that LLMs would show higher consistency than human jurors, displaying no explicit biases but potentially implicit ones. We also examined potential mediating factors (race-crime congruence, credibility, the black sheep effect) and moderating effects (beliefs about traumatic memory, ease of reporting) that might explain LLM decision making.
Method
Using a 2 × 2 × 2 × 3 factorial design, we manipulated defendant race (Black/White), social status (low/high), number of allegations (one/five), and reporting delay (5/20/35 years), collecting 2,304 responses across conditions. LLMs were prompted to act as jurors, providing probability of guilt assessments (0–100), dichotomous verdicts, and responses to mediator and moderator variables.
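For illustration, a minimal sketch (in Python, not the authors' materials) of how the 2 × 2 × 2 × 3 design described above could be enumerated and presented to an LLM. The factor levels and response counts come from the Method; the prompt wording and the query_llm/record helpers are hypothetical placeholders.

    from itertools import product

    races = ["Black", "White"]
    statuses = ["low", "high"]
    allegations = [1, 5]
    delays_years = [5, 20, 35]

    # 2 x 2 x 2 x 3 = 24 conditions; 2,304 responses / 24 = 96 per condition
    conditions = list(product(races, statuses, allegations, delays_years))
    runs_per_condition = 2304 // len(conditions)

    def juror_prompt(race, status, n_allegations, delay):
        # Illustrative wording only; the study's actual vignettes are not reproduced here.
        return (
            "Act as a juror in a sexual assault case. "
            f"The defendant is a {race} person of {status} social status, "
            f"facing {n_allegations} allegation(s) reported after {delay} years. "
            "State a probability of guilt from 0 to 100 and a guilty/not-guilty verdict."
        )

    # for condition in conditions:
    #     for _ in range(runs_per_condition):
    #         record(query_llm(juror_prompt(*condition)))  # query_llm/record are hypothetical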
Results
LLMs showed higher average probability of guilt assessments compared with humans (63.56 vs. 58.82) but were more conservative in rendering guilty verdicts (21% vs. 49%). Similar to humans, LLMs demonstrated bias against White defendants and increased guilt attributions with multiple allegations. Unlike humans, who showed minimal effects of reporting delay, LLMs assigned higher guilt probabilities to cases with shorter reporting delays. Mediation analyses revealed that race-crime stereotype congruency and the black sheep effect partially mediated the racial bias effect, whereas perceived memory strength mediated the reporting delay effect.
Conclusions
Although LLMs may offer more consistent decision making, they are not immune to biases and may interpret certain case factors differently from human jurors.
Funding information in the publication:
This work was supported by New York University Shanghai (Grant 10109_Pekka Santtila Start Up Fund_RI, awarded to Pekka Santtila) and by Spring Crocus s.r.l., the managing body of CBT.ACADEMY (awarded to Angelo Zappalà).