A comparison of AUC estimators in small-sample studies




Airola A, Pahikkala T, Waegeman W, De Baets B, Salakoski T

Dzeroski S, Geurts P, Rousu J

2010

JMLR Workshop and Conference Proceedings

Proceedings of the Third International Workshop on Machine Learning in Systems Biology

Proceedings of Machine Learning Research

Volume: 8

Pages: 3-13 (11 pages)

1938-7288

http://jmlr.csail.mit.edu/proceedings/papers/v8/airola10a/airola10a.pdf



Reliable estimation of the classification performance of learned predictive models is difficult when working in the small-sample setting. When dealing with biological data, it is often the case that separate test data cannot be afforded; cross-validation is then a typical strategy for estimating performance. Recent results, further supported by experimental evidence presented in this article, show that many standard approaches to cross-validation suffer from extensive bias or variance when the area under the ROC curve (AUC) is used as the performance measure. We advocate the use of leave-pair-out cross-validation (LPOCV) for performance estimation, as it avoids many of these problems. A method previously proposed by us can be used to efficiently calculate this estimate for regularized least-squares (RLS) based learners.
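For readers unfamiliar with the estimator named in the abstract, the following minimal Python sketch illustrates the idea behind LPOCV for AUC: every positive-negative example pair is held out in turn, a model is trained on the remaining data, and the fraction of correctly ordered held-out pairs is returned. It uses scikit-learn's Ridge regressor as an assumed stand-in for an RLS learner and is the naive retraining loop, not the efficient RLS-specific algorithm the abstract refers to; function and variable names are illustrative only.

# Minimal LPOCV AUC sketch (naive retraining loop, illustrative only).
import itertools

import numpy as np
from sklearn.linear_model import Ridge


def lpo_cv_auc(X, y, alpha=1.0):
    """Estimate AUC by leaving out every positive-negative pair in turn."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    pos = np.flatnonzero(y == 1)
    neg = np.flatnonzero(y == 0)
    wins = 0.0
    for i, j in itertools.product(pos, neg):
        # Hold out the pair (i, j) and retrain on the remaining examples.
        train = np.setdiff1d(np.arange(len(y)), [i, j])
        model = Ridge(alpha=alpha).fit(X[train], y[train])
        s_pos, s_neg = model.predict(X[[i, j]])
        # A correctly ordered pair counts as 1, a tie as 0.5.
        wins += 1.0 if s_pos > s_neg else (0.5 if s_pos == s_neg else 0.0)
    return wins / (len(pos) * len(neg))


if __name__ == "__main__":
    # Small synthetic example in the spirit of the small-sample setting.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(30, 5))
    y = (X[:, 0] + 0.5 * rng.normal(size=30) > 0).astype(int)
    print("LPOCV AUC estimate:", lpo_cv_auc(X, y))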


