
Type II Error

A false negative — concluding that a test variation has no effect when it actually does have a real, meaningful impact.

A Type II error (false negative) occurs when your test fails to detect a real improvement. The variation actually works, but your test says "no significant difference." This is the silent killer of experimentation programs.

Why Type II Errors Are More Dangerous Than Type I

Type I errors (false positives) are visible — you ship a change and see it doesn't perform. Type II errors are invisible — you reject a good idea and never know what you missed. The opportunity cost is unknowable.

The Primary Cause: Underpowered Tests

Nearly every Type II error in practice comes from insufficient sample size. Teams launch tests, get impatient after a week, see no significance, and call it "inconclusive." But the test never had enough power to detect anything less than a massive effect.
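To see how badly an impatient stop hurts, you can approximate the power of a two-proportion z-test with nothing but the standard library. This is a minimal sketch; the 5% baseline rate, 10% relative lift, and 2,000-users-per-arm figures are hypothetical numbers chosen for illustration.

```python
# Approximate power of a two-sided, two-proportion z-test (stdlib only).
from statistics import NormalDist

def power(p1: float, p2: float, n_per_arm: int, alpha: float = 0.05) -> float:
    """Probability the test detects a true shift from p1 to p2."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)  # e.g. ~1.96 for alpha = 0.05
    se = ((p1 * (1 - p1) + p2 * (1 - p2)) / n_per_arm) ** 0.5
    return z.cdf(abs(p2 - p1) / se - z_crit)

# A real 10% relative lift (5.0% -> 5.5%) with only 2,000 users per arm:
print(round(power(0.05, 0.055, 2_000), 2))  # ~0.11
```

With roughly 11% power, the test misses this real improvement about 89% of the time, which is exactly the "inconclusive after a week" scenario described above.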

How to Minimize Type II Errors

  • Calculate sample size before you start and commit to running the full duration
  • Use 80% power minimum (90% for high-stakes tests)
  • Choose realistic minimum detectable effects (MDEs) — if you need to detect a 3% lift, make sure your test is powered for that
  • Don't conflate "no significance" with "no effect" — absence of evidence is not evidence of absence
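The first three bullets can be folded into one up-front calculation. The sketch below uses the standard two-proportion sample-size formula with the stdlib only; the 5% baseline conversion rate is a hypothetical assumption, while the 80% power default and 3% lift mirror the guidance above.

```python
# Required sample size per arm for a two-sided, two-proportion z-test.
from math import ceil
from statistics import NormalDist

def sample_size(p_base: float, relative_mde: float,
                alpha: float = 0.05, power: float = 0.80) -> int:
    """Users needed per arm to detect a relative lift of `relative_mde`."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = z.inv_cdf(power)           # desired power (0.80 -> ~0.84)
    p_var = p_base * (1 + relative_mde)
    variance = p_base * (1 - p_base) + p_var * (1 - p_var)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p_var - p_base) ** 2)

# Detecting a 3% relative lift on a 5% baseline takes a large sample:
print(sample_size(0.05, 0.03))  # ~336,000 users per arm
# A 10% relative lift is far cheaper to detect:
print(sample_size(0.05, 0.10))  # ~31,000 users per arm
```

Running the numbers before launch, rather than after a week of data, is what makes the "commit to the full duration" bullet realistic: you know in advance whether your traffic can support the MDE you care about.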