
Minimum Detectable Effect (MDE)

The smallest true effect an experiment can reliably detect given its sample size, significance level, and desired statistical power.

What Is Minimum Detectable Effect (MDE)?

Minimum Detectable Effect is the smallest lift your test can credibly detect at your chosen alpha and power. It is not an aspiration — it is a mathematical floor. If your MDE is 5% and the true effect is 2%, your test will usually return a flat or inconclusive result even when the variant is genuinely better. MDE is the gatekeeper between experiments that matter and experiments that waste a quarter.

Also Known As

  • Data science: MDE, minimum detectable lift, detectable delta
  • Growth: smallest "win" we can see, sensitivity floor
  • Marketing: "what size change will this test actually catch?"
  • Engineering: detection threshold, effect sensitivity

How It Works

Suppose your baseline checkout conversion rate is 4%, you run 50,000 visitors per variant, alpha is 0.05, and power is 0.80. Plugging into a standard two-proportion z-test formula, your MDE is roughly a 0.35 percentage-point absolute lift — about 8.75% relative lift. That means a variant that truly lifts conversions by 5% relative will likely fail to reach significance. To detect a 5% relative lift on this baseline you would need roughly 160,000 visitors per variant.
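The arithmetic above can be reproduced with the standard two-proportion formula. A minimal sketch using only the Python standard library (the pooled-variance approximation `2·p·(1−p)/n` is an assumption; exact calculators may differ slightly in the third decimal):

```python
from statistics import NormalDist

def mde_abs(p, n, alpha=0.05, power=0.80):
    """Approximate absolute MDE for a two-sided two-proportion z-test,
    using the pooled-variance approximation 2*p*(1-p)/n."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided
    z_beta = NormalDist().inv_cdf(power)           # power requirement
    return (z_alpha + z_beta) * (2 * p * (1 - p) / n) ** 0.5

mde = mde_abs(0.04, 50_000)
print(f"absolute MDE: {mde:.4f}")         # ~0.0035, i.e. 0.35 percentage points
print(f"relative MDE: {mde / 0.04:.1%}")  # ~8.7%
```

Running this confirms the numbers in the example: at 50,000 visitors per variant on a 4% baseline, nothing smaller than roughly a 0.35 point absolute lift is reliably detectable.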

MDE scales inversely with the square root of sample size: doubling traffic does not halve MDE, it divides it by about 1.41.
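Inverting the same formula shows the square-root relationship directly: halving the MDE requires four times the sample. A quick check (same pooled-variance assumption as above):

```python
from statistics import NormalDist

def required_n(p, mde_abs, alpha=0.05, power=0.80):
    """Visitors per variant needed to detect an absolute lift of mde_abs."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return 2 * p * (1 - p) * (z / mde_abs) ** 2

# Halving the detectable lift (0.4pp -> 0.2pp) quadruples the sample:
n_coarse = required_n(0.04, 0.004)
n_fine = required_n(0.04, 0.002)
print(round(n_fine / n_coarse))  # 4
```

The 0.2 point target here is a 5% relative lift on the 4% baseline, and `n_fine` lands near the ~150,000–160,000 per-variant figure quoted above.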

Best Practices

  • Calculate MDE before launch, not after. Use a power calculator at the design stage and publish it in the test doc.
  • Compare MDE to realistic business expectations. If historical winners average 3% relative lift and your MDE is 10%, you are designed to fail.
  • Budget by traffic, not calendar time. A "two-week test" is meaningless without the sample count.
  • Report MDE with every inconclusive result. Flat is not failure — it is information about what you ruled out.
  • Use one-sided MDE only when directional hypotheses are truly defensible.
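The "budget by traffic, not calendar time" practice can be made concrete with a back-of-the-envelope duration check. The traffic figures below are hypothetical planning inputs, not benchmarks:

```python
# Hypothetical inputs: required sample per variant (from a power calculation)
# and weekly site traffic, split 50/50 across two variants.
n_per_variant = 160_000
weekly_visitors = 25_000

weeks = n_per_variant / (weekly_visitors / 2)
print(f"{weeks:.0f} weeks")  # 13 weeks -- a "two-week test" cannot get there
```

If the resulting duration is longer than the business can wait, the honest options are a larger MDE, a higher-traffic surface, or a different metric, not a shorter test.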

Common Mistakes

  • Confusing observed effect with detectable effect. A 6% observed lift in a test with an MDE of 8% is most likely noise, not a reliable win.
  • Running low-traffic pages with CRO goals. Pages with 2,000 weekly visitors cannot detect anything below ~20% lift in reasonable timeframes.
  • Cutting power to 0.5 or raising alpha to 0.2 to "get answers." Raising alpha inflates false positives and cutting power means missing most true effects — you have not detected anything, you have fooled yourself.
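The low-traffic mistake above is easy to sanity-check with the relative-MDE formula. Assuming a 4% baseline (an illustrative figure, not from the original) and a two-month run:

```python
from statistics import NormalDist

def relative_mde(p, n, alpha=0.05, power=0.80):
    """Relative MDE for a two-sided two-proportion z-test
    (pooled-variance approximation)."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return z * (2 * p * (1 - p) / n) ** 0.5 / p

# 2,000 weekly visitors split across two variants, run for 8 weeks:
n = 2_000 // 2 * 8  # 8,000 visitors per variant
print(f"{relative_mde(0.04, n):.0%}")  # ~22% relative lift
```

Even after two months, such a page can only detect lifts in the ~20% range — an order of magnitude above typical CRO wins.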

Industry Context

In SaaS/B2B, low-volume funnels make MDE the central design constraint; most B2B experiments are underpowered and the honest answer is often "measure leading indicators instead." In ecommerce, traffic is plentiful but baseline conversion is low (1–3%), so MDE calculations dominate seasonal test calendars. In lead gen, MDE interacts with offline conversion delays — you need to size for the slowest downstream event you care about, not the click.

The Behavioral Science Connection

Stakeholders suffer from the illusion of validity — a belief that any number from a dashboard is meaningful. MDE is the antidote: it forces an honest conversation about what the data can and cannot say. It also fights confirmation bias. When a team "sees" a 4% lift in a test sized to detect 10%, they are pattern-matching on noise. Publishing MDE alongside every readout reframes the conversation from "did it win?" to "what did we learn we could rule out?"

Key Takeaway

MDE is the honesty check on every experiment. Calculate it before you launch, report it with every result, and kill tests that are mathematically incapable of answering the question you care about.