A1 Refereed original research article in a scientific journal

Evaluating Students’ Open-ended Written Responses with LLMs: Using the RAG Framework for GPT-3.5, GPT-4, Claude-3, and Mistral-Large




Authors: Jauhiainen, Jussi; Garagorry Guerra, Agustín

Publisher: Shimur Publications

Publication year: 2024

Journal: Advances in Artificial Intelligence and Machine Learning

Journal name in source: Advances in Artificial Intelligence and Machine Learning

Volume: 4

Issue: 4

First page: 3097

Last page: 3113

eISSN: 2582-9793

DOI: https://doi.org/10.54364/AAIML.2024.44177

Web address: https://doi.org/10.54364/aaiml.2024.44177

Self-archived copy’s web address: https://research.utu.fi/converis/portal/detail/Publication/477835432


Abstract

Evaluating open-ended written examination responses from students is an essential yet time-intensive task for educators, requiring a high degree of effort, consistency, and precision. Recent developments in Large Language Models (LLMs) present a promising opportunity to balance the need for thorough evaluation with efficient use of educators' time. We explore four LLMs (GPT-3.5, GPT-4, Claude-3, and Mistral-Large) in assessing university students' open-ended responses to questions about reference material they had studied. Each model was instructed to evaluate 54 responses repeatedly under two conditions: 10 times (10-shot) with a temperature setting of 0.0 and 10 times with a temperature of 0.5, yielding a total of 1,080 evaluations per model and 4,320 evaluations across all models. The Retrieval Augmented Generation (RAG) framework was used to have the LLMs process the evaluations. Notable variations existed in the consistency and grading outcomes of the studied LLMs. There is a need to understand the strengths and weaknesses of using LLMs for educational assessments.
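The evaluation protocol described in the abstract can be read as a nested loop over models, temperature settings, repeated runs, and the 54 student responses, with a retrieval step supplying the reference material to each grading prompt. The sketch below is a hypothetical Python illustration of that loop, not the authors' implementation; retrieve_reference, call_model, the prompt wording, and the data layout are placeholder assumptions, and call_model would need to be wired to each provider's own chat-completion API.

```python
# Hypothetical sketch of the repeated-evaluation protocol, under assumed names.
from typing import Callable, Dict, List

MODELS = ["gpt-3.5", "gpt-4", "claude-3", "mistral-large"]
TEMPERATURES = [0.0, 0.5]
REPETITIONS = 10  # 54 responses x 10 runs x 2 temperatures = 1,080 evaluations per model


def retrieve_reference(question: str, corpus: List[str]) -> str:
    """Placeholder RAG step: return reference-material passages relevant to the question."""
    # A real implementation would embed the question and corpus and return the top-k passages.
    return "\n".join(corpus[:3])


def call_model(model: str, prompt: str, temperature: float) -> str:
    """Placeholder for the provider-specific chat-completion call (OpenAI, Anthropic, Mistral)."""
    raise NotImplementedError("Wire this to the provider's chat-completion endpoint.")


def evaluate_responses(responses: List[Dict], corpus: List[str],
                       call: Callable[[str, str, float], str]) -> List[Dict]:
    """Run every model at every temperature REPETITIONS times over all student responses."""
    results = []
    for model in MODELS:
        for temperature in TEMPERATURES:
            for run in range(REPETITIONS):
                for item in responses:  # the 54 student responses
                    context = retrieve_reference(item["question"], corpus)
                    prompt = (
                        f"Reference material:\n{context}\n\n"
                        f"Question: {item['question']}\n"
                        f"Student response: {item['answer']}\n"
                        "Grade the response and justify the grade."
                    )
                    grade = call(model, prompt, temperature)
                    results.append({"model": model, "temperature": temperature,
                                    "run": run, "response_id": item["id"], "grade": grade})
    return results
```

With four models, the loop produces 4 × 2 × 10 × 54 = 4,320 grading calls, matching the totals reported in the abstract.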


Downloadable publication

This is an electronic reprint of the original article.
This reprint may differ from the original in pagination and typographic detail. Please cite the original version.




