A1 Refereed original research article in a scientific journal

Evaluation metrics and statistical tests for machine learning




Authors: Rainio, Oona; Teuho, Jarmo; Klén, Riku

Publisher: Nature Research

Publication year: 2024

Journal: Scientific Reports

Journal name in source: Scientific Reports

Article number: 6086

Volume: 14

DOI: https://doi.org/10.1038/s41598-024-56706-x

Web address: https://doi.org/10.1038/s41598-024-56706-x

Self-archived copy’s web address: https://research.utu.fi/converis/portal/detail/Publication/387398800

Additional information: Author correction to this article: https://www.nature.com/articles/s41598-024-66611-y; DOI: 10.1038/s41598-024-66611-y


Abstract
Research on different machine learning (ML) methods has become incredibly popular during the past few decades. However, for some researchers who are not familiar with statistics, it might be difficult to understand how to evaluate the performance of ML models and compare them with each other. Here, we introduce the most common evaluation metrics used for typical supervised ML tasks, including binary, multi-class, and multi-label classification, regression, image segmentation, object detection, and information retrieval. We explain how to choose a suitable statistical test for comparing models, how to obtain enough values of the metric for testing, and how to perform the test and interpret its results. We also present a few practical examples of comparing convolutional neural networks used to classify X-rays with different lung infections and to detect cancer tumors in positron emission tomography images.
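
As a rough illustration of the kind of workflow the abstract describes (this is not code from the article), the sketch below computes accuracy and F1 score for two hypothetical binary classifiers evaluated on the same test set and compares them with an exact McNemar test on the discordant predictions. The simulated data, variable names, and the specific choice of test are assumptions made here for illustration only.

# Minimal sketch (not the article's code): comparing two binary classifiers
# evaluated on the same test set with common metrics and McNemar's exact test.
# The simulated labels and the choice of test are illustrative assumptions.
import numpy as np
from scipy.stats import binomtest
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)

# Hypothetical ground-truth labels and predictions from two models, A and B.
y_true = rng.integers(0, 2, size=200)
pred_a = np.where(rng.random(200) < 0.85, y_true, 1 - y_true)  # ~85% correct
pred_b = np.where(rng.random(200) < 0.80, y_true, 1 - y_true)  # ~80% correct

print("Model A: accuracy=%.3f, F1=%.3f" % (accuracy_score(y_true, pred_a),
                                           f1_score(y_true, pred_a)))
print("Model B: accuracy=%.3f, F1=%.3f" % (accuracy_score(y_true, pred_b),
                                           f1_score(y_true, pred_b)))

# McNemar's exact test uses only the discordant pairs: test cases where
# exactly one of the two models predicts the label correctly.
a_only = int(np.sum((pred_a == y_true) & (pred_b != y_true)))
b_only = int(np.sum((pred_b == y_true) & (pred_a != y_true)))
p_value = binomtest(min(a_only, b_only), a_only + b_only, p=0.5).pvalue
print("Discordant pairs: %d vs %d, McNemar p=%.4f" % (a_only, b_only, p_value))

A paired test such as this one is appropriate because both models are evaluated on the same test cases; with independent test sets, or with metric values collected over several folds or images, an unpaired or rank-based test would be chosen instead.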

Downloadable publication

This is an electronic reprint of the original article.
This reprint may differ from the original in pagination and typographic detail. Please cite the original version.




