A1 Refereed original research article in a scientific journal
Automated detection of algorithm debt in deep learning frameworks: an empirical study
Authors: Simon, Emmanuel Iko-Ojo; Hettiarachchi, Chirath; Potanin, Alex; Suominen, Hanna; Fard, Fatemeh
Publisher: Springer Nature
Publication year: 2026
Journal: Empirical Software Engineering
Article number: 66
Volume: 31
Issue: 3
ISSN: 1382-3256
eISSN: 1573-7616
DOI: https://doi.org/10.1007/s10664-026-10807-5
Publication's open availability at the time of reporting: Open Access
Publication channel's open availability: Partially Open Access publication channel
Web address: https://doi.org/10.1007/s10664-026-10807-5
Self-archived copy’s web address: https://research.utu.fi/converis/portal/detail/Publication/515814945
Self-archived copy's licence: CC BY
Self-archived copy's version: Publisher's PDF
Expedient design choices in software development can lead to Technical Debt (TD), with development teams documenting such decisions as Self-Admitted TD (SATD). Algorithm Debt (AD) is a type of TD resulting from the suboptimal implementation of algorithms, which impacts system performance. Given its impact, the automated detection of AD is crucial in Deep Learning (DL) frameworks because of their complexity and rapid evolution. Early detection of AD in DL frameworks can help mitigate model degradation and scalability issues. Despite previous studies on the automated detection of TD from SATD using Machine Learning (ML)/DL models, research on AD detection in DL frameworks remains underexplored. In this study, we empirically investigated the performance of ML/DL models for the automated detection of AD using a dataset of 38,881 SATD comments from seven DL frameworks. We trained, evaluated, and tested ML/DL models, used embeddings from both DL and large language models, and explored an approach to enrich the dataset with handcrafted features based on AD-related keywords. Our findings reveal that AD is frequently misclassified as Design or Implementation Debt. Logistic Regression (an ML model) with Custom AD Features achieved an F1-score of 54% for AD, outperforming the other ML/DL models (42% to 52%) and highlighting the importance of tailored feature engineering. Our research advances automated AD detection in DL frameworks by providing insights into the strengths and limitations of ML/DL models, serving as a first step to guide future tool development. This could help developers using DL frameworks to identify AD issues during development, thereby enhancing system reliability by mitigating model degradation and scalability challenges.
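The keyword-based feature enrichment described in the abstract can be sketched as follows. This is an illustrative sketch only: the keyword list, function names, and the concatenation with an embedding vector are assumptions for demonstration, not the authors' actual feature set or pipeline.

```python
# Sketch of handcrafted keyword-indicator features for Algorithm Debt (AD)
# detection in SATD comments. The keyword list below is hypothetical.

AD_KEYWORDS = [
    "slow", "inefficient", "complexity", "optimi",  # "optimi" matches optimise/optimize/optimal
    "converge", "gradient", "algorithm", "performance",
]

def ad_keyword_features(comment: str) -> list[int]:
    """Binary indicator vector: 1 if the keyword (or stem) occurs in the comment."""
    text = comment.lower()
    return [1 if kw in text else 0 for kw in AD_KEYWORDS]

def enrich(embedding: list[float], comment: str) -> list[float]:
    """Concatenate a dense text embedding with the handcrafted indicators,
    producing the enriched feature vector a classifier (e.g. Logistic
    Regression) would be trained on."""
    return list(embedding) + [float(f) for f in ad_keyword_features(comment)]

# Example: a SATD comment mentioning an algorithmic concern.
comment = "TODO: this algorithm is slow for large batches"
features = ad_keyword_features(comment)
print(features)  # indicators for "slow" and "algorithm" are set
```

In a full pipeline, `enrich` would be applied to every SATD comment before training, so the classifier sees both the learned embedding and the explicit AD signals.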
Downloadable publication: This is an electronic reprint of the original article.
Funding information in the publication:
Open Access funding enabled and organized by CAUL and its Member Institutions. This work is supported by the Australian National University (ANU) through the ANU PhD scholarship within the ANU Research School of Computing.