A1 Refereed original research article in a scientific journal

Integrating Large Language Models Into Trauma Education for Medical Students: Randomized Controlled Pilot Trial




Authors: Gustafsson, Joona; Lehtonen-Smeds, Erno; Pakkasjärvi, Niklas

Publication year: 2026

Journal: JMIR Medical Education

Article number: e79134

Volume: 12

eISSN: 2369-3762

DOI: https://doi.org/10.2196/79134

Publication's open availability at the time of reporting: Open Access

Publication channel's open availability: Open Access publication channel

Web address: https://doi.org/10.2196/79134

Self-archived copy's web address: https://research.utu.fi/converis/portal/detail/Publication/522836720

Self-archived copy's licence: CC BY

Self-archived copy's version: Publisher's PDF


Abstract

Background:

The exponential growth of medical knowledge presents a paradox for modern medical education. While access to information is immediate, applying it in a clinically meaningful way remains a challenge. Large language models (LLMs), such as ChatGPT, are widely used for information retrieval, yet their role in dynamic, high-pressure clinical learning remains poorly understood.

Objective:

This study aims to evaluate whether unstructured access to an LLM improves decision-making, teamwork, and confidence in trauma education for medical students.

Methods:

This randomized controlled pilot study involved 41 final-year medical students participating in a trauma simulation session. Students self-selected into teams of 4 to 6 and were randomized to either an LLM-assisted group (ChatGPT-4o mini) or a control group without LLM access. All teams completed 18 video-based trauma scenarios requiring time-sensitive clinical decisions. Prompting was unrestricted. Confidence and trauma exposure were assessed using pre- and postquestionnaires. Facilitators rated teamwork (on a scale of 1-5), decision accuracy, and response times. Knowledge retention was measured 4 weeks later via an online quiz.

Results:

Confidence in trauma management improved in both groups (P<.001), with larger gains in the non-LLM group (P=.02). LLM support did not enhance decision accuracy or speed and was associated with longer response times in some complex cases. Teams without LLMs demonstrated more active discussion and received higher teamwork ratings (median 5.0 [IQR 5.0-5.0] vs median 3.5 [IQR 3.0-4.5]; P=.08). Students primarily used the LLM for fact-checking but reported vague or overly general responses. Knowledge retention was high in both groups and did not differ significantly (P=.33).

Conclusions:

While students appreciated the inclusion of artificial intelligence (AI), unstructured LLM use did not improve performance and may have disrupted group reasoning. Non-English prompting likely contributed to lower AI performance, underscoring the importance of language alignment in LLM applications. This pilot study highlights the need for structured AI integration and targeted instruction in AI literacy. Simulation-based trauma education proved effective and well received, but optimizing the educational value of LLMs will require thoughtful curricular design. Further studies with larger samples are needed to define best practices for LLM use in clinical education.


Downloadable publication

This is an electronic reprint of the original article.
This reprint may differ from the original in pagination and typographic detail. Please cite the original version.




Funding information in the publication
The authors declared that no financial support was received for this work.


Last updated on 14/04/2026 10:47:53 AM