
Type I Error

A false positive — concluding that a test variation is a winner when the observed difference is actually due to random chance.

A Type I error (false positive) occurs when you declare a winner that isn't actually better than the control. In A/B testing, this means shipping a change that has no real effect — or worse, one that actually hurts performance but appeared to help due to random variation.

How Common Are False Positives?

At the standard 95% confidence level (α = 0.05), Type I errors should occur in about 5% of tests where there's no real difference. In practice, the effective false-positive rate is often much higher due to:

  • Peeking: Checking results before planned completion (the #1 offender)
  • Multiple comparisons: Testing many metrics and cherry-picking the significant one
  • Segment mining: Finding significance in a subgroup that wasn't pre-specified
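The effect of peeking is easy to demonstrate with a simulation. The sketch below (parameters like the number of tests, sample size, and base conversion rate are illustrative, and the two-proportion z-test is hand-rolled rather than taken from any testing platform) runs A/A tests — both arms share the same conversion rate, so every declared "winner" is a false positive — and compares a single fixed-horizon check against checking at 20 interim points:

```python
import math
import random

def z_test_p(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value from a two-proportion z-test (pooled variance)."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (conv_b / n_b - conv_a / n_a) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def run_aa_tests(n_tests=500, n_users=1000, base_rate=0.10, peeks=20, seed=42):
    """A/A tests: both arms have base_rate, so any rejection is a Type I error.

    Returns (fixed-horizon false-positive rate, with-peeking false-positive rate).
    """
    rng = random.Random(seed)
    fixed_fp = peeking_fp = 0
    # Evenly spaced sample sizes at which the "peeker" checks significance.
    checkpoints = {n_users * (i + 1) // peeks for i in range(peeks)}
    for _ in range(n_tests):
        a = b = 0
        rejected_early = False
        for n in range(1, n_users + 1):
            a += rng.random() < base_rate  # control conversions
            b += rng.random() < base_rate  # variant conversions (same rate!)
            if n in checkpoints and z_test_p(a, n, b, n) < 0.05:
                rejected_early = True  # peeker stops and declares a winner
        if rejected_early:
            peeking_fp += 1
        if z_test_p(a, n_users, b, n_users) < 0.05:
            fixed_fp += 1  # disciplined tester checks only once, at the end
    return fixed_fp / n_tests, peeking_fp / n_tests
```

With these settings the fixed-horizon false-positive rate lands near the nominal 5%, while the peeking rate is several times higher — the peeker gets many chances for noise to cross the significance threshold, and stops the moment it does.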

The Real Cost of False Positives

A false positive doesn't just fail to improve metrics — it occupies a slot in your product that could have been used for something that actually works. It also erodes team trust in the testing program when the "winner" doesn't perform in production as expected.

Prevention Strategies

  • Pre-register your primary metric before the test starts
  • Use sequential testing methods if you must monitor continuously
  • Apply Bonferroni or Benjamini-Hochberg corrections for multiple comparisons
  • Never stop a test early just because it reached significance
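The two corrections named above can be sketched in a few lines. This is a minimal illustration operating on a plain list of p-values (the example p-values are made up); in practice you would typically reach for a library routine such as `statsmodels.stats.multitest.multipletests`:

```python
def bonferroni(p_values, alpha=0.05):
    """Reject only p-values below alpha / m, where m is the number of tests.

    Controls the chance of *any* false positive, at the cost of power.
    """
    m = len(p_values)
    return [p < alpha / m for p in p_values]

def benjamini_hochberg(p_values, alpha=0.05):
    """BH step-up procedure: controls the false discovery rate.

    Sort p-values ascending; find the largest rank k with
    p_(k) <= (k / m) * alpha, then reject every hypothesis whose
    p-value ranks at or below k.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * alpha:
            k_max = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject

# Hypothetical p-values from five metrics in one test:
pvals = [0.001, 0.008, 0.025, 0.041, 0.20]
print(bonferroni(pvals))          # rejects only p < 0.05/5 = 0.01
print(benjamini_hochberg(pvals))  # less conservative: also rejects 0.025
```

Bonferroni is the stricter of the two; Benjamini-Hochberg tolerates a controlled fraction of false discoveries in exchange for catching more real effects, which is usually the better trade-off when screening many secondary metrics.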