A4 Refereed article in a conference publication
Large Language Model Performance in Automatic Assessment on an Introductory Programming Course
Authors: Rytilahti, Juuso; Kaila, Erkki; Ingman, Valtteri
Editors: Tatti, Nikolaj; Kasurinen, Jussi; Päivärinta, Tero
Conference name: Annual Doctoral Symposium of Computer Science
Publication year: 2026
Journal: CEUR Workshop Proceedings
Book title: Proceedings of the Annual Doctoral Symposium of Computer Science 2025 (TKTP 2025), Helsinki, Finland, June, 2025
Article number: paper02
Volume: 4181
eISSN: 1613-0073
Publication's open availability at the time of reporting: Open Access
Publication channel's open availability: Open Access publication channel
Web address: https://ceur-ws.org/Vol-4181/paper02.pdf
Self-archived copy's web address: https://research.utu.fi/converis/portal/detail/Publication/508258622
Self-archived copy's licence: CC BY
Self-archived copy's version: Publisher's PDF
Large language models (LLMs) are a potential solution to the significant assessment load in large courses with numerous assignments. However, the quality of automated assessment may not match evaluation by teachers or other experts. In this paper, we examine the automated assessment of programming-related tasks in a large-scale introductory programming course. The study is structured in two parts: first, we examine how reliably LLMs can assess the tasks compared to course teachers when provided with the same rubric. Second, we investigate whether a simple autonomous agent pipeline, mimicking a review board, can improve the assessment outcome. The study was conducted on a university-level introductory Python course with more than 500 students. We chose a total of four programming-related assignments from the four final weeks of the course. First, we provided the selected LLM models with the student answers, accompanied by an evaluation rubric and a simple prompt, and recorded the resulting scores and feedback comments. Second, we built a pipeline of autonomous agents taking on the different roles of a review board and used the student submissions as input for the pipeline, again recording the scores and comments. In the article, we discuss the feasibility and performance of the two approaches. We also provide a detailed analysis comparing the results of both approaches with the teacher-assessed results, and discuss the differences in the results and their likely reasons. Finally, we outline potential directions for future work.
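The exact prompts and pipeline are detailed in the paper itself; as a rough illustration of the two setups described in the abstract, the Python sketch below outlines both the single-prompt rubric assessment and a review-board-style agent pipeline. The `call_llm` helper, the rubric text, the agent roles, and all prompt wording are hypothetical placeholders for illustration, not the authors' actual implementation.

```python
# Minimal sketch of the two assessment setups described in the abstract.
# NOTE: call_llm is a hypothetical placeholder for any chat-completion API;
# the rubric, roles, and prompts are illustrative, not taken from the paper.

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its text response."""
    raise NotImplementedError("Wire this to an LLM provider of choice.")

RUBRIC = """Score 0-10. Award points for: correct output (4),
appropriate use of functions (3), readability and comments (3)."""

def assess_single(submission: str) -> str:
    """Setup 1: one LLM call with the rubric and a simple prompt."""
    prompt = (
        "You are grading a student's Python assignment.\n"
        f"Rubric:\n{RUBRIC}\n\n"
        f"Submission:\n{submission}\n\n"
        "Return a score and a short feedback comment."
    )
    return call_llm(prompt)

def assess_review_board(submission: str) -> str:
    """Setup 2: agents in review-board roles, then a chair's final verdict."""
    roles = ["strict examiner", "lenient examiner", "style reviewer"]
    opinions = [
        call_llm(
            f"As a {role} on a review board, grade this submission "
            f"against the rubric.\nRubric:\n{RUBRIC}\n\n{submission}"
        )
        for role in roles
    ]
    chair_prompt = (
        "As the chair of the review board, combine these opinions into "
        "a final score and feedback:\n\n" + "\n---\n".join(opinions)
    )
    return call_llm(chair_prompt)
```

In the study, the scores and comments recorded from both setups were compared against teacher-assessed results; the sketch omits score parsing and aggregation.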
Downloadable publication: This is an electronic reprint of the original article.
Funding information in the publication:
This work has been supported by FAST, the Finnish Software Engineering Doctoral Research Network, funded by the Ministry of Education and Culture, Finland.