Minimum Detectable Effect
The smallest difference between control and variation that your test is designed to reliably detect, determined by your sample size, statistical power, significance level, and baseline conversion rate.
Minimum Detectable Effect (MDE) is perhaps the most important pre-test decision in experimentation. It answers the question: "What's the smallest improvement that would make this change worth shipping?"
How MDE Shapes Your Testing Program
Your MDE determines how many tests you can run and what kinds of changes you can validate. Required sample size scales roughly with the inverse square of the MDE, so a team that sets its MDE at a 2% relative lift needs on the order of 25x the sample (and thus runs far fewer tests per year) than a team comfortable with a 10% MDE.
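To make that trade-off concrete, here is a minimal sketch using the standard normal-approximation sample-size formula for a two-proportion test. The function name and the 5% baseline are illustrative, not from the original text:

```python
from statistics import NormalDist

def sample_size_per_arm(baseline, relative_mde, alpha=0.05, power=0.80):
    """Approximate per-arm sample size for a two-proportion z-test
    (normal approximation; baseline and MDE are proportions)."""
    p1 = baseline
    p2 = baseline * (1 + relative_mde)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ((z_alpha + z_beta) ** 2 * variance) / (p2 - p1) ** 2

# Hypothetical 5% baseline conversion rate:
n_tight = sample_size_per_arm(0.05, 0.02)  # 2% relative MDE: roughly 753k per arm
n_loose = sample_size_per_arm(0.05, 0.10)  # 10% relative MDE: roughly 31k per arm
```

The roughly 24x gap between the two results is why the choice of MDE effectively sets your experiment budget.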
The MDE-Revenue Connection
Here's how I frame MDE for business stakeholders: "If this change improves conversion by X%, is that worth the engineering effort to build and maintain it?" A 1% relative lift on a high-traffic page might be worth millions. A 1% lift on a low-traffic feature page might not be worth the code complexity.
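The stakeholder framing above is simple arithmetic, and it can help to show it explicitly. The traffic, conversion, and order-value numbers below are hypothetical, chosen only to contrast a high-traffic and a low-traffic page:

```python
def annual_lift_value(sessions_per_year, conv_rate, avg_order_value, relative_lift):
    """Expected incremental annual revenue from a relative conversion lift."""
    return sessions_per_year * conv_rate * relative_lift * avg_order_value

# Same 1% relative lift, very different business value:
high_traffic = annual_lift_value(50_000_000, 0.03, 80, 0.01)  # ~$1.2M/year
low_traffic = annual_lift_value(200_000, 0.03, 80, 0.01)      # ~$4,800/year
```

Running this calculation before the test tells you whether the MDE you can afford to detect is even worth detecting.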
Setting Realistic MDEs
- Macro conversions (purchase, signup): 5-15% relative MDE is realistic for most sites
- Micro conversions (clicks, form starts): 3-5% MDE is achievable with moderate traffic
- Revenue per session: 2-5% MDE, but requires careful metric definition
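You can also work the power calculation in reverse: given the traffic you actually have, what relative MDE is detectable? A sketch under the same normal approximation as a standard two-proportion power analysis (the function name and the 50,000-per-arm figure are illustrative):

```python
import math
from statistics import NormalDist

def detectable_relative_mde(baseline, n_per_arm, alpha=0.05, power=0.80):
    """Approximate smallest relative lift detectable with n_per_arm
    samples per arm, using the pooled-variance normal approximation."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    abs_delta = z * math.sqrt(2 * baseline * (1 - baseline) / n_per_arm)
    return abs_delta / baseline  # convert absolute lift to relative

# With a 5% baseline and 50,000 visitors per arm, you can detect
# roughly an 8% relative lift, squarely in the macro-conversion range above.
mde = detectable_relative_mde(0.05, 50_000)
```

If the number this returns is larger than any lift you plausibly expect, the test is not worth running as designed.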
The Trap of Chasing Small Effects
Some teams set tiny MDEs to "catch everything." This sounds rigorous but is counterproductive in practice: tests run for months, blocking other experiments from using that traffic, and even when a test does find a tiny lift, the confidence interval around it is often so wide that the true effect could be negligible.
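The "months" claim is easy to check against your own traffic. A sketch with illustrative numbers: roughly 750k visitors per arm is what a 2% relative MDE on a 5% baseline requires under a standard power calculation, and the daily traffic figure is hypothetical:

```python
def days_to_run(n_per_arm, daily_sessions_per_arm):
    """Calendar days needed to reach the required per-arm sample size."""
    return n_per_arm / daily_sessions_per_arm

# ~750k per arm at 5,000 sessions/day/arm:
duration = days_to_run(750_000, 5_000)  # 150 days, roughly five months
```

Five months of a blocked traffic slot is the real price of a 2% MDE on this page; a 10% MDE on the same numbers would finish in about a week.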