A1 Peer-reviewed original article in a scientific journal
Knowledge Tracing Models in Educational Data Mining: Historical Evolution, Categorization, and Empirical Evaluation
Authors: Das Adhikary, Prince; Metsämuuronen, Jari; Laakso, Mikko-Jussi; Heikkonen, Jukka
Publication year: 2026
Journal: IEEE Access
Volume: 14
First page: 49582
Last page: 49606
eISSN: 2169-3536
DOI: https://doi.org/10.1109/ACCESS.2026.3678846
Open access status at time of registration: Openly available
Open access status of the publication channel: Fully open access channel
Web address: https://ieeexplore.ieee.org/document/11457580
Self-archived copy address: https://research.utu.fi/converis/portal/detail/Publication/523106374
Self-archived copy licence: CC BY
Self-archived publication version: Publisher's version
This article analyses computational models of Knowledge Tracing (KT), which address the complex sequence-modelling task of predicting dynamic, unobservable latent user states from historical interaction logs. First, we propose a comprehensive taxonomy identifying nine distinct and interconnected KT model families: psychometric; Bayesian; machine learning; deep learning; graph-based; temporal/sequential; multi-task; contrastive/self-supervised; and domain-adaptive. Second, we trace the historical evolution of KT architectures, from the foundational psychometric methods of the 1950s to the modern integration of attention mechanisms and graph neural networks. Third, we systematically evaluate nine lightweight representative computational models—one from each category—across two large-scale datasets: ASSISTments 09-10 and DigiArvi 2025. We measure predictive performance and calibration using accuracy, F1 score, ROC-AUC, average precision, and log loss under a strict computational time budget. Finally, our empirical analysis demonstrates that multi-task and temporal/sequential architectures yield the highest performance. Specifically, Fine-Grained Knowledge Tracing (FKT) achieved the best results on the DigiArvi dataset (accuracy: 0.77; F1 score: 0.85), while Temporal Item Response Theory (TIRT) performed best on the ASSISTments dataset (accuracy: 0.70; F1 score: 0.75). Traditional baselines, such as Logistic Regression (LR), remain highly competitive. Consequently, because we identify diminishing returns from increasing model complexity, we advocate a shift towards ‘Green AI’ and standardized benchmarking to address the field’s fragmented evaluation standards. Future research must leverage generative Artificial Intelligence (AI) and causal inference to move beyond simple prediction toward agentic AI systems capable of active pedagogical intervention.
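The abstract names five evaluation metrics for binary next-response prediction: accuracy, F1 score, ROC-AUC, average precision, and log loss. As an illustrative sketch only — not the authors' evaluation code, and the function name `evaluate` is an assumption — these standard metrics can be computed in pure Python as follows:

```python
# Illustrative sketch: the five metrics reported in the study, computed
# from true labels (0/1) and predicted probabilities of a correct response.
from math import log

def evaluate(y_true, p_pred, threshold=0.5):
    n = len(y_true)
    y_hat = [1 if p >= threshold else 0 for p in p_pred]
    # Accuracy: fraction of thresholded predictions matching the labels.
    acc = sum(yt == yh for yt, yh in zip(y_true, y_hat)) / n
    # F1 score for the positive class (a correct student response).
    tp = sum(yt == 1 and yh == 1 for yt, yh in zip(y_true, y_hat))
    fp = sum(yt == 0 and yh == 1 for yt, yh in zip(y_true, y_hat))
    fn = sum(yt == 1 and yh == 0 for yt, yh in zip(y_true, y_hat))
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
    # ROC-AUC via the rank (Mann-Whitney) formulation: probability that a
    # random positive is scored above a random negative (ties count half).
    pos = [p for yt, p in zip(y_true, p_pred) if yt == 1]
    neg = [p for yt, p in zip(y_true, p_pred) if yt == 0]
    wins = sum((pp > pn) + 0.5 * (pp == pn) for pp in pos for pn in neg)
    auc = wins / (len(pos) * len(neg))
    # Average precision: mean of precision at each rank where a positive
    # appears, scanning predictions from highest to lowest score.
    order = sorted(range(n), key=lambda i: -p_pred[i])
    hits, prec_sum = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if y_true[i] == 1:
            hits += 1
            prec_sum += hits / rank
    ap = prec_sum / len(pos)
    # Log loss (cross-entropy), with probabilities clipped away from 0/1.
    eps = 1e-15
    ll = -sum(yt * log(max(p, eps)) + (1 - yt) * log(max(1 - p, eps))
              for yt, p in zip(y_true, p_pred)) / n
    return {"accuracy": acc, "f1": f1, "roc_auc": auc,
            "average_precision": ap, "log_loss": ll}
```

Thresholded metrics (accuracy, F1) depend on the cut-off, while ROC-AUC, average precision, and log loss score the raw probabilities, which is why the study reports both kinds.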
Downloadable publication: This is an electronic reprint of the original article.