The widespread misinterpretation of p-values as error probabilities
The anonymous mixing of Fisherian (p-values) and Neyman–Pearsonian (α levels) ideas about testing, distilled in the customary but misleading p < α criterion of statistical significance, has led researchers in the social and management sciences (and elsewhere) to commonly misinterpret the p-value as a ‘data-adjusted’ Type I error rate. Evidence substantiating this claim is provided from a number of fronts, including comments by statisticians, articles judging the value of significance testing, textbooks, surveys of scholars, and the statistical reporting behaviours of applied researchers. That many investigators do not know the difference between p’s and α’s indicates much bewilderment over what those most ardently sought research outcomes—statistically significant results—mean. Statisticians can play a leading role in clearing up this confusion. A good starting point would be to abolish the p < α criterion of statistical significance.
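The distinction at issue can be made concrete with a simulation. The sketch below (illustrative only, not drawn from the article) contrasts α, a pre-set long-run property of the Neyman–Pearson testing procedure, with the p-value, a data-dependent statistic that is uniformly distributed under the null hypothesis; the function name `z_test_p_value` and the chosen sample sizes are assumptions for the example.

```python
import math
import random

def z_test_p_value(sample, mu0=0.0, sigma=1.0):
    """Two-sided p-value of a z-test for the mean, known sigma (illustrative helper)."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    # Two-sided tail probability via the standard normal CDF: 2 * (1 - Phi(|z|)).
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

random.seed(0)
alpha = 0.05  # fixed in advance: the long-run Type I error rate of the procedure

# Simulate many experiments in which the null hypothesis is true.
p_values = [z_test_p_value([random.gauss(0.0, 1.0) for _ in range(30)])
            for _ in range(10_000)]

# alpha describes the procedure: under H0 it rejects about 5% of the time...
rejection_rate = sum(p < alpha for p in p_values) / len(p_values)

# ...whereas the p-value varies from sample to sample (uniform under H0),
# so an observed p = 0.03 is not "a 3% chance of having made a Type I error".
print(f"rejection rate under H0: {rejection_rate:.3f}")
```

Running the simulation shows the rejection rate hovering near the chosen α, while individual p-values scatter across the whole unit interval, which is precisely why reading a single p-value as an error probability conflates the two frameworks.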
Document Type: Research Article
Affiliations: Thomas F. Sheehan Distinguished Professor of Marketing, College of Business and Public Administration, Drake University, Des Moines, IA 50311, USA
Publication date: November 1, 2011