A1 Peer-reviewed original article in a scientific journal

Knowledge Tracing Models in Educational Data Mining: Historical Evolution, Categorization, and Empirical Evaluation




Authors: Das Adhikary, Prince; Metsämuuronen, Jari; Laakso, Mikko-Jussi; Heikkonen, Jukka

Publication year: 2026

Journal: IEEE Access

Volume: 14

First page: 49582

Last page: 49606

eISSN: 2169-3536

DOI: https://doi.org/10.1109/ACCESS.2026.3678846

Open access status at time of registration: Openly available

Publication channel openness: Fully open publication channel

URL: https://ieeexplore.ieee.org/document/11457580

Self-archived copy URL: https://research.utu.fi/converis/portal/detail/Publication/523106374

Self-archived copy license: CC BY

Self-archived publication version: Publisher's version


Abstract

This article analyses computational models of Knowledge Tracing (KT), which address the complex sequence-modelling task of predicting dynamic, unobservable latent user states from historical interaction logs. First, we propose a comprehensive taxonomy identifying nine distinct and interconnected KT model families: psychometric; Bayesian; machine learning; deep learning; graph-based; temporal/sequential; multi-task; contrastive/self-supervised; and domain-adaptive. Second, we trace the historical evolution of KT architectures, from the foundational psychometric methods of the 1950s to the modern integration of attention mechanisms and graph neural networks. Third, we systematically evaluate nine lightweight representative computational models—one from each category—across two large-scale datasets: ASSISTments 09-10 and DigiArvi 2025. We measure predictive performance and calibration using accuracy, F1 score, ROC-AUC, average precision, and log loss under a strict computational time budget. Finally, our rigorous empirical analysis demonstrates that multi-task and temporal/sequential architectures yield the highest performance. Specifically, Fine-Grained Knowledge Tracing (FKT) achieved the best results on the DigiArvi dataset (accuracy: 0.77; F1 score: 0.85), while Temporal Item Response Theory (TIRT) performed best on the ASSISTments dataset (accuracy: 0.70; F1 score: 0.75). Traditional baselines, such as Logistic Regression (LR), remain highly competitive. Consequently, we advocate a shift towards ‘Green AI’ and standardized benchmarking to address the field’s fragmented evaluation standards, as we identify diminishing returns from increasing model complexity. Future research must leverage generative Artificial Intelligence (AI) and causal inference to move beyond simple prediction toward agentic AI systems capable of active pedagogical intervention.
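The abstract names the evaluation metrics used in the benchmark: accuracy, F1 score, ROC-AUC, average precision, and log loss. As an illustration only, the sketch below computes most of these from first principles on toy predictions (the data and threshold are invented, not taken from the paper; average precision is omitted for brevity):

```python
# Toy illustration of the abstract's evaluation metrics; all values are
# hypothetical and unrelated to the ASSISTments or DigiArvi results.
import math

y_true = [1, 0, 1, 1, 0, 1, 0, 1]                    # observed responses
y_prob = [0.9, 0.6, 0.7, 0.4, 0.2, 0.8, 0.3, 0.55]  # predicted P(correct)
y_pred = [int(p >= 0.5) for p in y_prob]             # 0.5 decision threshold

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

# ROC-AUC: probability that a random positive outranks a random negative
pos = [p for t, p in zip(y_true, y_prob) if t == 1]
neg = [p for t, p in zip(y_true, y_prob) if t == 0]
roc_auc = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg) / (len(pos) * len(neg))

# Log loss (binary cross-entropy)
log_loss = -sum(t * math.log(p) + (1 - t) * math.log(1 - p)
                for t, p in zip(y_true, y_prob)) / len(y_true)
```

In practice these would typically be computed with a library such as scikit-learn; the hand-rolled versions here only make the definitions explicit.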


Downloadable publication

This is an electronic reprint of the original article.
This reprint may differ from the original in pagination and typographic detail. Please cite the original version.




