
Practical Significance

Whether an A/B test result is large enough to matter for the business — distinct from statistical significance, which only tells you whether an effect is unlikely to be due to chance.

Practical significance answers the question that statistical significance doesn't: "Is this result big enough to care about?" A test might be statistically significant (the effect is real) but practically insignificant (the effect is too small to justify implementation).

The Distinction That Matters

  • Statistical significance: "The observed effect is unlikely to be chance" (e.g., p < 0.05)
  • Practical significance: "The effect is large enough to act on"

A 0.1% conversion lift might be statistically significant with a large enough sample, but if it doesn't move the revenue needle, it's not practically significant.
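To see why, here is a minimal sketch of that scenario: a hypothetical 0.1-point absolute lift (5.0% → 5.1% conversion) tested with two million users per arm, checked with a pooled two-proportion z-test. The sample size and rates are illustrative assumptions, not figures from the original.

```python
from statistics import NormalDist

# Hypothetical numbers: tiny lift, huge sample.
n = 2_000_000                       # users per arm (assumption)
p_control, p_variant = 0.050, 0.051  # 0.1-point absolute lift (assumption)

# Pooled two-proportion z-test (equal n, so the pooled rate is the average).
p_pool = (p_control + p_variant) / 2
se = (p_pool * (1 - p_pool) * (2 / n)) ** 0.5
z = (p_variant - p_control) / se
p_value = 2 * (1 - NormalDist().cdf(z))

print(f"z = {z:.2f}, p = {p_value:.6f}")  # comfortably below 0.05
```

With a sample this large the p-value is far below 0.05 — the test is "significant" even though the lift may be worth very little in revenue terms.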

How to Evaluate Practical Significance

  • Calculate the absolute impact: Convert the relative lift to actual revenue or conversions per month
  • Compare against implementation cost: Engineering time, code complexity, maintenance burden
  • Consider the confidence interval: If the lower bound of the 95% CI is still meaningful, the case is strong
  • Factor in opportunity cost: Could the engineering resources produce more value on another test?
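The first three steps above can be sketched as a small calculation. Every input here (monthly conversions, average order value, lift, amortized cost) is a hypothetical assumption for illustration, not a business figure from the original:

```python
def practical_impact(monthly_conversions, avg_order_value,
                     relative_lift, ci_lower_lift, monthly_cost):
    """Translate a relative lift into monthly dollars and compare to cost.

    All inputs are illustrative assumptions supplied by the caller.
    """
    baseline_revenue = monthly_conversions * avg_order_value
    expected_gain = baseline_revenue * relative_lift     # point estimate
    worst_case_gain = baseline_revenue * ci_lower_lift   # 95% CI lower bound
    return {
        "expected_monthly_gain": expected_gain,
        "worst_case_monthly_gain": worst_case_gain,
        # Strong case: even the CI's lower bound clears the cost.
        "clears_cost_at_ci_lower": worst_case_gain > monthly_cost,
    }

result = practical_impact(
    monthly_conversions=50_000, avg_order_value=40.0,
    relative_lift=0.02, ci_lower_lift=0.005, monthly_cost=5_000,
)
print(result)
```

With these made-up numbers, even the worst-case gain ($10,000/month) clears a $5,000/month amortized cost, so the case would be strong. Opportunity cost still needs a human judgment call.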

The Decision Framework I Use

I categorize every test result into four quadrants:

| | Statistically Significant | Not Significant |
|---|---|---|
| Practically Significant | Ship it | Need more data |
| Not Practically Significant | Don't ship | Kill it |

The top-left is the clear winner. The top-right needs a longer test. The bottom row gets stopped regardless.
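The four quadrants can be captured in a small helper (a hypothetical function sketching the table, not code from the original):

```python
def decide(stat_sig: bool, pract_sig: bool) -> str:
    """Map a test result onto the four quadrants above."""
    if pract_sig:
        # Top row: real-and-big ships; big-but-unproven runs longer.
        return "ship it" if stat_sig else "need more data"
    # Bottom row: the test stops either way.
    return "don't ship" if stat_sig else "kill it"
```

In practice "need more data" means extending the test's runtime or sample, while both bottom-row outcomes free the slot for the next experiment.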