
Significance tests or confidence intervals: which are preferable for the comparison of classifiers?
Null hypothesis significance tests and their p-values currently dominate the statistical evaluation of classifiers in machine learning. Here, we discuss fundamental problems of this research practice. We focus on the problem of comparing multiple fully specified classifiers on
a small-sample test set. On the basis of the method by Quesenberry and Hurst, we derive confidence intervals for the effect size, i.e. the difference in true classification performance. These confidence intervals disentangle the effect size from its uncertainty and thereby provide information
beyond the p-value. This additional information can drastically change the way in which classification results are currently interpreted, published and acted upon. We illustrate how our reasoning can change, depending on whether we focus on p-values or confidence intervals. We
argue that the conclusions from comparative classification studies should be based primarily on effect size estimation with confidence intervals, and not on significance tests and p-values.
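The method the abstract refers to, Quesenberry and Hurst's simultaneous confidence intervals for multinomial proportions, can be made concrete with a short sketch. The Python code below is illustrative rather than the authors' exact derivation: it applies the standard Quesenberry–Hurst interval formula to the four-cell outcome table of two classifiers evaluated on the same test set (both correct, only A correct, only B correct, both wrong) and then combines the cell intervals into a conservative interval for the accuracy difference. The example counts and the helper name `qh_intervals` are hypothetical.

```python
import numpy as np
from scipy.stats import chi2

def qh_intervals(counts, alpha=0.05):
    """Quesenberry-Hurst simultaneous confidence intervals for the
    cell probabilities of a multinomial distribution."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    k = len(counts)
    # Upper (1 - alpha) quantile of chi-square with k - 1 degrees of freedom
    a = chi2.ppf(1 - alpha, df=k - 1)
    root = np.sqrt(a * (a + 4 * counts * (n - counts) / n))
    lower = (a + 2 * counts - root) / (2 * (n + a))
    upper = (a + 2 * counts + root) / (2 * (n + a))
    return lower, upper

# Hypothetical outcome table for classifiers A and B on one test set:
# cells = [both correct, only A correct, only B correct, both wrong]
counts = [61, 12, 5, 22]
lo, hi = qh_intervals(counts)

# The accuracy difference acc(A) - acc(B) equals p(only A) - p(only B).
# A conservative interval follows from the simultaneous cell intervals:
delta_lo = lo[1] - hi[2]
delta_hi = hi[1] - lo[2]
print(f"CI for accuracy difference: [{delta_lo:.3f}, {delta_hi:.3f}]")
```

Note that such an interval conveys both the estimated effect size and its uncertainty directly, which is the information a lone p-value from, say, McNemar's test on the same table would not provide.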
Keywords: classification; confidence interval; null hypothesis significance testing; p-value; reasoning
Document Type: Research Article
Affiliations:
1: Interdisciplinary Graduate School of Science and Engineering, Tokyo Institute of Technology, 4259 Nagatsuta Midori-ku, Yokohama 226-8502, Japan
2: Department of Computer Science and Artificial Intelligence, Intelligent Systems Group, University of the Basque Country UPV/EHU, Manuel de Lardizabal 1, 20018 Donostia–San Sebastián, Gipuzkoa, Spain
Publication date: June 1, 2013