A4 Refereed article in a conference publication

Can GPT-4 Enhance Teaching? A Pilot Study on AI-Driven Analysis of Student Course Feedback




Authors: Weerakoon, Oshani; Puhtila, Panu; Mäkilä, Tuomas; Kaila, Erkki

Editors: Tatti, Nikolaj; Kasurinen, Jussi; Päivärinta, Tero

Conference name: Annual Doctoral Symposium of Computer Science

Publication year: 2026

Journal: CEUR Workshop Proceedings

Book title: Proceedings of the Annual Doctoral Symposium of Computer Science 2025 (TKTP 2025), Helsinki, Finland, June 2025

Article number: paper01

Volume: 4181

eISSN: 1613-0073

Publication's open availability at the time of reporting: Open Access

Publication channel's open availability: Open Access publication channel

Web address: https://ceur-ws.org/Vol-4181/paper01.pdf

Self-archived copy's web address: https://research.utu.fi/converis/portal/detail/Publication/508257582

Self-archived copy's licence: CC BY

Self-archived copy's version: Publisher's PDF


Abstract

In this pilot study, we explored the use of generative AI—specifically GPT-4—to evaluate student feedback in a bilingual software engineering course offered at the University of Turku. Our aim was twofold: to examine whether ChatGPT can meaningfully evaluate student course feedback and propose suitable enhancements, and to compare its evaluations with those made by a course teacher. We collected voluntary feedback from 18 consenting students across three course instances in 2023 and 2024, resulting in a total of 390 feedback entries. These responses were first translated into English and then anonymized. Using structured questionnaires aligned with defined pedagogical goals, we then analyzed the responses through a dual evaluation process: (1) AI-based assessment using a custom JavaScript application integrating GPT-4 and GPT-4o-mini, and (2) manual evaluation by the teacher. Both followed a standardized Likert-scale format with brief textual comments, and all evaluations were consolidated into thirty-six manually maintained recording sheets. Evaluation results were visualized using heat maps across five key themes derived from the pedagogical goals. Our comparative analysis showed general alignment between the two evaluators, with key differences in the perceived content clarity and video quality of the course. We further extended our discussion to examine GPT’s applicability and limitations as a feedback evaluator. In particular, we identified its potential to quickly assess structured student feedback in courses with high participation, where manual evaluation may be time-consuming for course teachers. These findings collectively provide insights into using generative AI in course feedback analysis to enhance teaching within software engineering curricula.


Downloadable publication

This is an electronic reprint of the original article.
This reprint may differ from the original in pagination and typographic detail. Please cite the original version.




Funding information in the publication
This work has been supported by FAST, the Finnish Software Engineering Doctoral Research Network, funded by the Ministry of Education and Culture, Finland.

