A4 Refereed article in a conference publication

Automatic Short Answer Grading for Finnish with ChatGPT




Authors: Chang Li-Hsin, Ginter Filip

Editors: Wooldridge Michael, Dy Jennifer, Natarajan Sriraam

Conference name: AAAI Conference on Artificial Intelligence

Publisher: Association for the Advancement of Artificial Intelligence

Publishing place: Washington, DC

Publication year: 2024

Journal: Proceedings of the AAAI Conference on Artificial Intelligence

Book title: Proceedings of the 38th AAAI Conference on Artificial Intelligence

Journal name in source: Proceedings of the AAAI Conference on Artificial Intelligence

Series title: Proceedings of the AAAI Conference on Artificial Intelligence

Number in series: 21

Volume: 38

First page: 23173

Last page: 23181

ISBN: 978-1-57735-887-9

ISSN: 2159-5399

eISSN: 2374-3468

DOI: https://doi.org/10.1609/aaai.v38i21.30363

Web address: https://doi.org/10.1609/aaai.v38i21.30363


Abstract
Automatic short answer grading (ASAG) seeks to mitigate the burden on teachers by leveraging computational methods to evaluate student-constructed text responses. Large language models (LLMs) have recently gained prominence across diverse applications, with educational contexts being no exception. The sudden rise of ChatGPT has raised expectations that LLMs can handle numerous tasks, including ASAG. This paper aims to shed some light on this expectation by evaluating two LLM-based chatbots, namely ChatGPT built on GPT-3.5 and GPT-4, on scoring short-question answers under zero-shot and one-shot settings. Our data consists of 2000 student answers in Finnish from ten undergraduate courses. Multiple perspectives are taken into account during this assessment, encompassing those of grading system developers, teachers, and students. On our dataset, GPT-4 achieves a good QWK score (0.6+) in 44% of one-shot settings, clearly outperforming GPT-3.5 at 21%. We observe a negative association between student answer length and model performance, as well as a correlation between a smaller standard deviation among a set of predictions and lower performance. We conclude that while GPT-4 exhibits signs of being a capable grader, additional research is essential before considering its deployment as a reliable autograder.
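The abstract reports quadratic weighted kappa (QWK), the standard agreement metric for ordinal grading tasks, with 0.6+ treated as good agreement. As an illustration only (not code from the paper), a minimal sketch of computing QWK between teacher grades and model-predicted grades, assuming integer scores on a fixed scale, might look like this:

```python
from collections import Counter

def quadratic_weighted_kappa(y_true, y_pred, min_rating=0, max_rating=4):
    """Quadratic weighted kappa between two lists of integer ratings.

    1.0 means perfect agreement; 0.0 means chance-level agreement.
    The rating scale (0-4 here) is an illustrative assumption.
    """
    n = max_rating - min_rating + 1
    total = len(y_true)
    # Observed confusion matrix of (true, predicted) rating pairs.
    observed = [[0.0] * n for _ in range(n)]
    for t, p in zip(y_true, y_pred):
        observed[t - min_rating][p - min_rating] += 1
    # Marginal histograms, used to build the chance-expected matrix.
    hist_t = Counter(t - min_rating for t in y_true)
    hist_p = Counter(p - min_rating for p in y_pred)
    num = den = 0.0
    for i in range(n):
        for j in range(n):
            weight = ((i - j) ** 2) / ((n - 1) ** 2)  # quadratic penalty
            expected = hist_t[i] * hist_p[j] / total  # chance agreement
            num += weight * observed[i][j]
            den += weight * expected
    return 1.0 - num / den

# Perfect agreement yields 1.0; shifting one prediction lowers the score.
print(quadratic_weighted_kappa([0, 1, 2, 3, 4], [0, 1, 2, 3, 4]))  # 1.0
print(quadratic_weighted_kappa([0, 1, 2, 3, 4], [0, 1, 2, 3, 3]) < 1.0)
```

The quadratic weighting penalizes large disagreements (e.g. grading a 0 as a 4) far more than adjacent ones, which is why QWK is preferred over plain accuracy for ordinal grades.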
