The ASA’s Statement on p-Values: Context, Process, and Purpose

Keywords: American Statistical Association; misinterpretation; p-values; scientific reform; statistical inference; statistical methods
  • Purpose: This article presents the American Statistical Association’s (ASA) six principles for the proper use and interpretation of the p-value, aiming to correct pervasive misinterpretations in scientific research.
  • Key Principles: The ASA states that the p-value does not measure the probability that a hypothesis is true (Principle 2) or the size or importance of an effect (Principle 5); rather, it indicates how incompatible the data are with a specified statistical model (Principle 1).
  • Crucial Admonitions: The statement strongly advises against drawing dichotomous conclusions based solely on a fixed threshold (e.g., p < 0.05; Principle 3) and calls for full reporting and transparency of all analyses to combat selective reporting (Principle 4).
  • Recommendation: The ASA advocates moving beyond the p-value alone and integrating statistical results with effect sizes, confidence intervals, context, and external evidence (Principle 6).
Published: 23 January 2026
DOI: 10.1080/00031305.2016.1154108
Overview generated by: Gemini 2.5 Flash, 26/11/2025

Key Findings: Principles for the Proper Use of the p-Value

This article presents and explains the American Statistical Association’s (ASA) Statement on p-Values and Statistical Significance, the first policy statement in which the ASA has addressed a foundational issue of statistical inference. The statement was prompted by widespread misuse and misinterpretation of the p-value across the sciences, which contributes to the reproducibility crisis in research.

Six Core Principles for p-Values

The statement outlines six key principles, aiming to promote better practice and curb common errors:

  1. P-values can indicate how incompatible the data are with a specified statistical model.
    • The p-value quantifies how incompatible the data are with the assumed statistical model, typically a model embodying the null hypothesis together with all other modeling assumptions. A small p-value indicates that data as extreme as those observed would be unusual if every assumption of that model held.
  2. P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone.
    • This addresses the most common and most damaging misinterpretation: that a low p-value (e.g., p = 0.01) means the null hypothesis has only a 1% chance of being true. A p-value describes the data given the model; it is not the probability of the model or hypothesis given the data (the notation sketch after this list makes the distinction explicit).
  3. Scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold.
    • The ASA strongly discourages dichotomizing results into “statistically significant” and “not statistically significant.” Such threshold thinking can lead to flawed decisions because it ignores effect size, context, and other evidence (a small simulation at the end of this summary illustrates the pitfall).
  4. Proper inference requires full reporting and transparency.
    • Selectively reporting only “significant” results, whether through publication bias or through p-hacking (running many analyses and reporting only those that cross a threshold), renders the reported p-values essentially uninterpretable. All hypotheses explored, analyses performed, and p-values computed should be disclosed, regardless of their values.
  5. A p-value, or statistical significance, does not measure the size of an effect or the importance of a result.
    • Statistical significance is often confused with substantive (clinical, practical, or scientific) importance. A very large study can yield a statistically significant p-value for a trivial effect, while a small study may fail to reach significance for a large, important effect. The focus should be on the magnitude and practical relevance of the effect (see the worked example after this list).
  6. By itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis.
    • Sound scientific reasoning requires more than a p-value: contextual knowledge, study design quality, data reliability, and integration with external evidence all matter. Complementary approaches, such as confidence intervals (CIs) and Bayesian methods, should be used to give a fuller picture of the evidence.
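
To make the distinction behind Principles 1 and 2 concrete, the following notation sketch (generic notation, not taken from the ASA statement itself) writes the p-value as a tail probability computed under the model and contrasts it with the posterior probability of the hypothesis given the data:

    \[
    p = \Pr\left(T \ge t_{\mathrm{obs}} \mid H_0 \text{ and all other model assumptions}\right)
    \quad\neq\quad
    \Pr(H_0 \mid \text{data}) = \frac{\Pr(\text{data} \mid H_0)\,\Pr(H_0)}{\Pr(\text{data})}
    \]

Here T is the test statistic and t_obs its observed value. Treating the first quantity as if it were the second reverses the direction of conditioning, which is exactly the error Principle 2 warns against.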
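
The contrast in Principle 5 between statistical significance and practical importance can be seen in a minimal Python sketch (a hypothetical illustration, not code from the article; it assumes NumPy and SciPy are installed): with one million observations per group, a true difference of only 0.01 standard deviations produces a vanishingly small p-value, while the estimated effect and its 95% confidence interval show that the effect is trivial.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Two groups of one million observations each; the true mean difference
    # is 0.01 standard deviations, a practically negligible effect.
    n = 1_000_000
    control = rng.normal(loc=0.00, scale=1.0, size=n)
    treated = rng.normal(loc=0.01, scale=1.0, size=n)

    result = stats.ttest_ind(treated, control)

    # Estimated difference and a normal-approximation 95% confidence interval.
    diff = treated.mean() - control.mean()
    se = np.sqrt(treated.var(ddof=1) / n + control.var(ddof=1) / n)
    ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se

    print(f"p-value: {result.pvalue:.1e}")         # tiny: "highly significant"
    print(f"estimated difference: {diff:.4f} SD")  # but trivially small
    print(f"95% CI: ({ci_low:.4f}, {ci_high:.4f})")

Reporting the estimate and interval alongside the p-value, as Principle 6 recommends, makes clear that the finding, while statistically detectable, is practically negligible.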

Purpose of the Statement

The statement does not prescribe a single replacement for null hypothesis significance testing; rather, it is intended as a step toward a “post p < 0.05 era” in which statistical methods are used more thoughtfully. It calls for better scientific practice characterized by open communication, transparent methodology, and attention to effect size, precision (e.g., confidence intervals), and context.
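
To illustrate why decisions based only on crossing p < 0.05 can mislead (Principles 3 and 6), the hypothetical Python simulation below (assuming NumPy and SciPy; the study counts, effect size, and share of real effects are invented for illustration) models a literature in which true effects are rare and studies are modestly powered. Under these assumptions, roughly half of the results that clear the 0.05 threshold come from true null hypotheses:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Hypothetical literature: 10,000 studies, only 10% of which test a real
    # effect. Each study compares two groups of n = 25; real effects are 0.5 SD.
    n_studies, prop_real, n, effect = 10_000, 0.10, 25, 0.5
    real = rng.random(n_studies) < prop_real

    p_values = np.empty(n_studies)
    for i in range(n_studies):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(effect if real[i] else 0.0, 1.0, n)
        p_values[i] = stats.ttest_ind(b, a).pvalue

    significant = p_values < 0.05
    false_share = np.mean(~real[significant])  # true nulls among "significant" results
    print(f"studies crossing p < 0.05: {significant.sum()}")
    print(f"share of those that are false discoveries: {false_share:.0%}")

The false-discovery share depends heavily on the assumed base rate of real effects and on study power, which is precisely why a fixed threshold, taken alone, is a poor basis for scientific, business, or policy conclusions.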