A4 Peer-reviewed article in conference proceedings

Automatic Short Answer Grading for Finnish with ChatGPT




Authors: Chang Li-Hsin, Ginter Filip

Editors: Wooldridge Michael, Dy Jennifer, Natarajan Sriraam

Established conference name: AAAI Conference on Artificial Intelligence

Publisher: Association for the Advancement of Artificial Intelligence

Place of publication: Washington, DC

Year of publication: 2024

Journal: Proceedings of the AAAI Conference on Artificial Intelligence

Title of the edited volume: Proceedings of the 38th AAAI Conference on Artificial Intelligence

Journal name in the database: Proceedings of the AAAI Conference on Artificial Intelligence

Series name: Proceedings of the AAAI Conference on Artificial Intelligence

Number in series: 21

Volume: 38

First page: 23173

Last page: 23181

ISBN: 978-1-57735-887-9

ISSN: 2159-5399

eISSN: 2374-3468

DOI: https://doi.org/10.1609/aaai.v38i21.30363

Web address: https://doi.org/10.1609/aaai.v38i21.30363


Abstract
Automatic short answer grading (ASAG) seeks to mitigate the burden on teachers by leveraging computational methods to evaluate student-constructed text responses. Large language models (LLMs) have recently gained prominence across diverse applications, with educational contexts being no exception. The sudden rise of ChatGPT has raised expectations that LLMs can handle numerous tasks, including ASAG. This paper aims to shed some light on this expectation by evaluating two LLM-based chatbots, namely ChatGPT built on GPT-3.5 and GPT-4, on scoring short-question answers under zero-shot and one-shot settings. Our data consists of 2000 student answers in Finnish from ten undergraduate courses. Multiple perspectives are taken into account during this assessment, encompassing those of grading system developers, teachers, and students. On our dataset, GPT-4 achieves a good QWK score (0.6+) in 44% of one-shot settings, clearly outperforming GPT-3.5 at 21%. We observe a negative association between student answer length and model performance, as well as a correlation between a smaller standard deviation among a set of predictions and lower performance. We conclude that while GPT-4 exhibits signs of being a capable grader, additional research is essential before considering its deployment as a reliable autograder.
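The abstract reports grading quality as Quadratic Weighted Kappa (QWK), treating 0.6+ as a good score. As an illustration only, and not the authors' evaluation code, agreement between teacher grades and model-predicted grades can be computed with scikit-learn's cohen_kappa_score using quadratic weights; the grade scale and the example values below are assumptions.

```python
# Illustrative sketch: Quadratic Weighted Kappa (QWK) between teacher grades
# and model-predicted grades. The 0-4 grade scale and the example values are
# assumptions for demonstration, not data from the paper.
from sklearn.metrics import cohen_kappa_score

teacher_grades = [4, 3, 0, 2, 4, 1, 3, 2, 0, 4]  # hypothetical gold grades
model_grades   = [4, 2, 1, 2, 3, 1, 3, 2, 0, 4]  # hypothetical LLM predictions

qwk = cohen_kappa_score(teacher_grades, model_grades, weights="quadratic")
print(f"QWK = {qwk:.3f}")  # the paper considers QWK of 0.6 or higher a good score
```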


