
Sample Ratio Mismatch (SRM)

A diagnostic check that detects when the actual traffic split in an A/B test differs significantly from the intended split — a red flag that invalidates experiment results.

What Is Sample Ratio Mismatch?

Sample Ratio Mismatch (SRM) is a diagnostic check that compares the actual traffic split of an experiment to the intended split, typically with a chi-squared test. If you set up a 50/50 test, small deviations such as 53/47 are expected on modest traffic. But the same deviation with adequate traffic, or a larger one like 55/45, means something is systematically broken, and no amount of statistical analysis can correct the resulting selection bias. SRM is the single most important quality check in experimentation, and it is one that many teams skip.
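Whether a given split is "small" depends entirely on sample size, which is why the check must be a significance test rather than an eyeball judgment. A minimal sketch: for a two-arm test the chi-squared statistic has one degree of freedom, so the p-value can be computed with the standard library's `erfc` (in production you would typically reach for `scipy.stats.chisquare`, which gives the same answer):

```python
import math

def srm_pvalue(control: int, variant: int) -> float:
    """Chi-squared goodness-of-fit p-value against an intended 50/50 split.

    With two arms there is one degree of freedom, so the survival
    function simplifies to erfc(sqrt(chi2 / 2)).
    """
    total = control + variant
    expected = total / 2
    chi2 = ((control - expected) ** 2 / expected
            + (variant - expected) ** 2 / expected)
    return math.erfc(math.sqrt(chi2 / 2))

# The same 53/47 split tells two different stories at different scales:
print(srm_pvalue(530, 470))        # ~0.058 on 1,000 users: plausibly noise
print(srm_pvalue(53_000, 47_000))  # vanishingly small on 100,000 users: broken
```

The same percentage imbalance that is plain noise at 1,000 users is overwhelming evidence of a bug at 100,000, which is exactly why "the imbalance is small" is never a valid reason to dismiss a significant SRM result.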

Also Known As

  • Marketing teams rarely use the term — they call it "traffic split issue" or "test imbalance."
  • Growth teams say SRM or traffic mismatch.
  • Product teams use SRM or assignment imbalance.
  • Engineering teams refer to bucket mismatch or randomization bug.
  • Data science teams call it SRM, traffic imbalance, or assignment bias.

How It Works

Your platform is configured for 50/50 assignment. After 100,000 users you observe 52,100 in control and 47,900 in variant. A chi-squared test gives a p-value on the order of 10⁻⁴⁰: unambiguous SRM. The cause turns out to be a redirect on iOS Safari that was silently dropping variant users due to Intelligent Tracking Prevention. Those dropped users skew toward iOS purchasers (higher intent, higher AOV), which means the remaining variant group is systematically different from control. Any lift you measure is an artifact of the assignment bug, not of the change you tested.
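The check itself is mechanical: compare observed arm counts to the counts expected under the configured split. A sketch with the numbers from this scenario, written to handle any intended ratio, not just 50/50 (pure standard library; `scipy.stats.chisquare` produces the same statistic):

```python
import math

observed = {"control": 52_100, "variant": 47_900}  # counts from the example
intended = {"control": 0.5, "variant": 0.5}        # configured split

total = sum(observed.values())
chi2 = sum(
    (observed[arm] - intended[arm] * total) ** 2 / (intended[arm] * total)
    for arm in observed
)
p = math.erfc(math.sqrt(chi2 / 2))  # df = 1 for a two-arm test

print(f"chi2 = {chi2:.1f}, p = {p:.1e}")  # chi2 = 176.4, p on the order of 1e-40
```

A statistic this extreme cannot be produced by fair 50/50 randomization at any realistic rate, which is what licenses the conclusion that the assignment mechanism, not chance, is responsible.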

Best Practices

  • Run an SRM chi-squared check as the first step of every analysis, before looking at conversion rates.
  • Alert on SRM daily during experiments so you catch issues early, not after the test concludes.
  • Set your SRM alert threshold at p < 0.001 to avoid false alarms from normal variance.
  • Investigate the root cause before restarting — don't just rerun and hope it's gone.
  • Track SRM frequency across your program as a platform health metric.

Common Mistakes

  • Skipping the SRM check entirely and shipping variants based on contaminated data.
  • Dismissing SRM because "the imbalance is small" — statistical significance already accounts for that.
  • Fixing the symptom (reweighting traffic) without diagnosing the underlying bug.

Industry Context

  • SaaS/B2B: Common SRM causes include consent banners, SSO redirects, and account-based routing.
  • Ecommerce/DTC: Bot filtering, ITP cookie dropping, and CDN caching are the usual suspects.
  • Lead gen: Form submission tracking failures and ad platform redirect loss.

The Behavioral Science Connection

SRM is the epistemic humility check — the acknowledgement that our infrastructure can silently lie to us, and that we must actively look for evidence of our own ignorance. Dunning-Kruger tells us we're confident when we shouldn't be; SRM detection is the antidote at the experiment level.

Key Takeaway

If SRM is detected, the experiment is invalid — no amount of "but the results look great" justifies shipping a test with a broken randomization.