
Variance Reduction Techniques

Methods that shrink the noise in experiment metrics — including CUPED, stratification, post-stratification, and control variates — to improve sensitivity.

What Is Variance Reduction?

Variance reduction is any technique that makes experiment metrics less noisy without changing what they measure. Lower variance means tighter confidence intervals, smaller minimum detectable effects (MDEs), and shorter tests — often the single highest-ROI investment an experimentation team can make after basic methodology is in place. Common techniques: CUPED, stratification, post-stratification, control variates, and ratio-metric delta methods.

Also Known As

  • Data science: noise reduction, sensitivity enhancement, adjustment methods
  • Growth: "making tests faster without more traffic"
  • Marketing: signal improvement
  • Engineering: estimator efficiency improvements

How It Works

Baseline checkout conversion is 6%, so variance per user is roughly 0.06 * 0.94 = 0.056. With 20,000 users per arm, the standard error is ~0.24pp and the MDE is ~0.66pp (80% power, 5% significance). Apply stratification on device type (desktop converts at 9%, mobile at 4%): the within-stratum variances are smaller, and the pooled estimator is roughly 20% more efficient. Now stack CUPED using 30-day pre-period sessions: another 25% variance reduction on top. Combined, variance falls 40% (0.80 * 0.75 = 0.60 of the original), so the MDE shrinks by a factor of √0.60 ≈ 0.77, to ~0.51pp — a ~23% improvement on the same traffic.
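The arithmetic above can be checked with a short sketch. It assumes the standard two-arm sizing rule MDE ≈ 2.8 × standard error (z-values for 80% power at α = 0.05) and treats the quoted efficiency gains as multiplicative variance reductions:

```python
import math

def mde(variance: float, n_per_arm: int, z_sum: float = 2.8) -> float:
    """MDE for a two-arm test: z_sum ~= 1.96 + 0.84 (alpha=0.05, 80% power).

    Standard error of the difference in means is sqrt(2 * variance / n).
    """
    se = math.sqrt(2 * variance / n_per_arm)
    return z_sum * se

base_var = 0.06 * 0.94               # Bernoulli variance at a 6% conversion rate
mde_base = mde(base_var, 20_000)     # ~0.0066, i.e. ~0.66pp

# Stratification cuts variance ~20%; CUPED removes ~25% of what remains.
combined_var = base_var * 0.80 * 0.75    # 40% total variance reduction
mde_final = mde(combined_var, 20_000)    # ~0.0051, i.e. ~0.51pp
```

Note that variance reductions compound on the variance scale, so the MDE only improves by the square root of the combined factor.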

Best Practices

  • Start with post-stratification on a single high-signal dimension — easy to implement, low risk.
  • Layer CUPED on continuous metrics. The largest gains come from user-level behavioral covariates.
  • Validate every variance reduction estimator against A/A tests. Bias here silently breaks every readout.
  • Document which metrics use which adjustments so readouts are interpretable across the org.
  • Do not apply variance reduction to metrics you will only read once — the engineering cost isn't worth it.
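A minimal CUPED sketch, using simulated data (the covariate strength and distributions are illustrative assumptions, not the article's checkout numbers). The adjustment subtracts the covariate-explained component of the metric, leaving the mean unchanged while shrinking variance:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20_000

# Hypothetical data: pre-period sessions correlated with the in-experiment metric.
pre = rng.gamma(shape=2.0, scale=3.0, size=n)
metric = 0.5 * pre + rng.normal(0.0, 2.0, size=n)

# CUPED adjustment: Y' = Y - theta * (X - mean(X)), with theta the OLS slope
# of the metric on the pre-period covariate. Because the covariate is centered,
# E[Y'] = E[Y]: the estimator stays unbiased while its variance shrinks.
theta = np.cov(metric, pre)[0, 1] / np.var(pre, ddof=1)
adjusted = metric - theta * (pre - pre.mean())

reduction = 1.0 - np.var(adjusted, ddof=1) / np.var(metric, ddof=1)
```

The achievable reduction equals the squared correlation between covariate and metric, which is why user-level pre-period behavior (highly autocorrelated with in-experiment behavior) is the best covariate.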

Common Mistakes

  • Using adjustment variables correlated with treatment. Post-treatment bias destroys the estimator.
  • Applying post-stratification without the right weights. Unweighted strata collapse back to naive averages.
  • Chasing techniques before fixing basic methodology. CUPED on a program that peeks daily at p-values is putting a spoiler on a car with flat tires.
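The weighting mistake above can be made concrete with a toy sketch (stratum shares and conversion rates are hypothetical): the post-stratified estimate weights each stratum mean by its population share, while an unweighted average of stratum means over-represents the smaller, higher-converting stratum.

```python
# Hypothetical strata: desktop is 40% of the population and converts at 9%;
# mobile is 60% and converts at 4%. Illustrative numbers only.
strata = [
    {"name": "desktop", "pop_share": 0.40, "mean": 0.09},
    {"name": "mobile",  "pop_share": 0.60, "mean": 0.04},
]

# Correct: weight each stratum mean by its POPULATION share,
# not by how many users happened to land in each stratum in this sample.
post_stratified = sum(s["pop_share"] * s["mean"] for s in strata)  # 0.060

# Mistake: an unweighted average of stratum means.
naive = sum(s["mean"] for s in strata) / len(strata)               # 0.065
```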

Industry Context

In SaaS/B2B, variance reduction is essential because traffic is scarce. A 40% variance reduction can be the difference between a tractable roadmap and an intractable one. In ecommerce, variance reduction on revenue metrics unlocks experiments the team otherwise would not even attempt. In lead gen, ratio-metric deltas and stratification by lead source are the meat and potatoes of realistic test sizing.
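The ratio-metric delta method mentioned above can be sketched as follows. This is a generic textbook formula, not a specific vendor's implementation: for a ratio of per-user totals (e.g. clicks per session when the randomization unit is the user), a naive per-session standard error understates uncertainty, and the delta method approximates the variance of the ratio of means instead:

```python
import numpy as np

def delta_ratio_se(x: np.ndarray, y: np.ndarray) -> float:
    """Delta-method standard error of mean(x) / mean(y),
    where x and y are per-user totals (e.g. clicks and sessions).
    """
    n = len(x)
    r = np.mean(x) / np.mean(y)
    cov = np.cov(x, y)  # 2x2 sample covariance of the per-user pairs
    var_ratio = (cov[0, 0] - 2 * r * cov[0, 1] + r**2 * cov[1, 1]) / (np.mean(y) ** 2 * n)
    return float(np.sqrt(var_ratio))
```

Sanity check: when every user contributes exactly one unit to the denominator, the ratio is an ordinary mean and the formula collapses to the usual standard error of the mean.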

The Behavioral Science Connection

Attention is a finite resource and variance is an attentional tax. Every point of ambient noise forces stakeholders to squint at results, pattern-match on trends, and argue about interpretation. Variance reduction is less a statistical trick than a cognitive-load reduction — clearer signals produce better decisions and less political debate at the readout meeting.

Key Takeaway

If you have basic experimentation hygiene in place, variance reduction is the next-highest-ROI investment. It quietly accelerates your entire program without buying more traffic.