Generative AI in assessing written responses of geography exams: challenges and potential




Jauhiainen, Jussi S.; Garagorry Guerra, Agustín; Nylén, Tua; Mäki, Sanna

Publisher: Informa UK Limited

2025

 Journal of Geography in Higher Education

ISSN: 0309-8265 (print)

ISSN: 1466-1845 (online)

DOI: https://doi.org/10.1080/03098265.2025.2593484


https://research.utu.fi/converis/portal/detail/Publication/505817244



This article examines the application of large language models (LLMs) – GPT-4, Claude, Cohere, and Llama – to assess students’ open-ended responses in Geography exams. The models’ scores were compared with those from the original multi-stage human assessment and with scores from two additional human experts. The case study considers the high-stakes national matriculation exam in Finland. The exam results play a crucial role in determining individuals’ eligibility for higher education, including the right to study Geography at university. We selected 18 essays that had originally been awarded 5 (basic), 10 (good), or 15 (excellent) points on a scale from 0 to 15. Findings show variability between LLMs and notable differences between LLM and human evaluations. The language of the responses and of the grading instructions influenced LLM performance. These results highlight the potential and the complexities of integrating generative AI into learning assessment to score open-ended responses today. Precise control of prompts and LLM settings proved crucial for aligning LLM scores more closely with the original assessment scores.
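The abstract notes that precise control of prompts and LLM settings was crucial for aligning model scores with human assessment. A minimal sketch of what such controlled, rubric-guided scoring might look like is given below; it assumes the OpenAI Python SDK and an API key in the environment, and the rubric text, model name, and settings are illustrative placeholders, not the prompts or configuration used in the study.

```python
# Minimal sketch of rubric-guided essay scoring with an LLM.
# Assumptions: OpenAI Python SDK installed, OPENAI_API_KEY set in the
# environment; rubric wording and scale only loosely mirror the study.
from openai import OpenAI

client = OpenAI()

RUBRIC = (
    "You are grading a geography matriculation exam essay. "
    "Score it on a 0-15 point scale (5 = basic, 10 = good, 15 = excellent). "
    "Return only an integer score."
)

def score_essay(essay_text: str, model: str = "gpt-4") -> int:
    """Ask the model for a single integer score for one essay."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # deterministic settings reduce score variability
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": essay_text},
        ],
    )
    # Assumes the model follows the instruction to return a bare integer;
    # a production pipeline would validate and retry on malformed output.
    return int(response.choices[0].message.content.strip())

if __name__ == "__main__":
    print(score_essay("The South Asian monsoon system is driven by ..."))
```

Holding temperature at zero and fixing the rubric in the system message are examples of the kind of prompt and setting control the abstract refers to; varying either, or the language of the rubric, would be expected to change the scores.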

