Free Tool

A/B Test Sample Size Calculator

Calculate exactly how many visitors you need for a statistically valid A/B test. Uses the standard two-proportion z-test — the same math behind Optimizely, VWO, and every serious experimentation platform.

Calculator

Configure Your Test

- Baseline conversion rate: your current conversion rate
- Minimum detectable effect (MDE): the relative lift you want to detect (e.g. 10% means 5.0% → 5.5%)
- Confidence level: 1 − alpha
- Statistical power: the probability of detecting a real effect (1 − beta)
- Number of variants: including control (e.g. 2 = control + 1 variant)
- Daily visitors: used to estimate test duration

The calculator reports the visitors needed per variant and the total sample across all variants.
Methodology

How This Calculator Works

Two-Proportion Z-Test

The standard method for comparing two conversion rates. It calculates the sample size needed so that, if a real difference at least as large as your specified MDE exists, the test will detect it with the chosen power and significance level.
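As a sketch, the per-variant calculation can be reproduced with Python's standard library (the function name and defaults here are illustrative, not this calculator's actual code):

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(baseline, relative_mde, alpha=0.05, power=0.80):
    """Per-variant sample size for a two-sided two-proportion z-test."""
    p1 = baseline
    p2 = baseline * (1 + relative_mde)             # MDE is relative to baseline
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)           # e.g. 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return ceil(n)                                 # round up: conservative

# 5% baseline, 10% relative MDE (5.0% -> 5.5%), 95% confidence, 80% power
print(sample_size_per_variant(0.05, 0.10))  # roughly 31,000 per variant
```

Exact results vary slightly between tools depending on whether a pooled or unpooled variance estimate is used, but they agree to within a few dozen visitors at typical inputs.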

Bonferroni Correction

When testing more than one variant against control, the significance level is adjusted using the Bonferroni correction to maintain the overall false positive rate. More variants means more comparisons, which means more data needed.
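The adjustment itself is a simple division. A minimal sketch, assuming one comparison per non-control variant as described above (the function name is illustrative):

```python
def bonferroni_alpha(alpha, num_variants):
    """Per-comparison significance level under the Bonferroni correction."""
    comparisons = num_variants - 1  # each non-control variant vs. control
    return alpha / comparisons

# 3 variants (control + 2): each comparison is tested at alpha = 0.025,
# so the family-wise false positive rate stays at or below 0.05
print(bonferroni_alpha(0.05, 3))
```

Because a smaller per-comparison alpha means a larger z-score, each added variant increases the required sample size per variant as well as the total.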

Conservative Estimates

Results are rounded up to the nearest whole number. Real-world factors — traffic fluctuations, bot traffic, day-of-week effects — mean you should treat these as minimums, not targets.

FAQ

Common Questions

How do I calculate sample size for an A/B test?

To calculate sample size, you need four inputs: your baseline conversion rate, the minimum detectable effect (the smallest improvement worth detecting), your desired statistical significance level (typically 95%), and statistical power (typically 80%). This calculator uses the standard two-proportion z-test formula to compute the required sample size per variant, then multiplies by the number of variants for the total.

What is minimum detectable effect (MDE) in A/B testing?

Minimum detectable effect is the smallest relative change in your conversion rate that your test is designed to reliably detect. For example, if your baseline conversion rate is 5% and your MDE is 10% relative, the test is powered to detect a change from 5.0% to 5.5% (a 0.5 percentage point absolute change). Choosing a smaller MDE requires a larger sample size but lets you detect subtler improvements.
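The relative-to-absolute conversion is a one-liner; a quick sketch (the function name is illustrative):

```python
def target_rate(baseline, relative_mde):
    """Conversion rate a test with this relative MDE is powered to detect."""
    return baseline * (1 + relative_mde)

baseline, mde = 0.05, 0.10
p2 = target_rate(baseline, mde)
print(f"{baseline:.1%} -> {p2:.1%}")          # 5.0% -> 5.5%
print(f"absolute lift: {p2 - baseline:.2%}")  # absolute lift: 0.50%
```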

How long should I run my A/B test?

Test duration depends on your required sample size and daily traffic. Divide total sample size by your daily visitor count to estimate the number of days needed. As a rule of thumb, always run tests for at least 7 days to account for day-of-week effects, and ideally 2-4 weeks. Never stop a test early just because you see a significant result — that inflates your false positive rate.

Why does a lower MDE require more sample size?

Detecting a smaller effect size is like listening for a quieter signal in the same amount of noise. The statistical test needs more data points to distinguish a real small effect from random variation. This is why it is critical to decide your MDE based on business impact — what is the smallest lift that would justify the cost of implementing the change?
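The scaling is roughly quadratic: sample size grows with 1 over the square of the absolute effect, so halving the MDE roughly quadruples the data needed. A self-contained sketch of the same z-test formula makes this concrete (names and defaults are illustrative):

```python
from math import ceil
from statistics import NormalDist

def n_per_variant(p1, relative_mde, alpha=0.05, power=0.80):
    """Per-variant sample size for a two-sided two-proportion z-test."""
    p2 = p1 * (1 + relative_mde)
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return ceil(z ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p2 - p1) ** 2)

# n scales with 1 / (absolute effect)^2, so halving the MDE
# roughly quadruples the required sample size
print(n_per_variant(0.05, 0.10))  # roughly 31,000 per variant
print(n_per_variant(0.05, 0.05))  # roughly 122,000 per variant (about 4x)
```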

Need help designing experiments?

Sample size is just the starting point. I help growth teams build experimentation programs that connect every test to revenue — with statistical rigor to prove it worked.

Explore Services · Subscribe to Newsletter
Lean Experiments Newsletter

Revenue Frameworks for Growth Leaders

Every week: one experiment, one framework, one insight to make your marketing more evidence-based and your revenue more predictable.