If your A/B test says "+6% conversion," the first question I ask isn't "Is it significant?" It's "Did you actually randomize users the way you think you did?"

Sample ratio mismatch (SRM) is the quiet failure mode that turns clean-looking results into expensive mistakes. It shows up when variants don't receive the expected share of eligible users: a 50/50 test that lands 53/47, say. That sounds small. In practice, it usually means assignment broke, filtering changed after assignment, or tracking dropped unevenly. All of those are forms of selection bias, and each one can skew the measured lift in either direction.
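The standard way to detect SRM is a chi-square goodness-of-fit test on the observed counts against the planned allocation. Here is a minimal stdlib-only sketch for the two-variant case; `srm_check` and the 0.001 threshold are my own choices (a deliberately strict alpha, since SRM tests run against very large samples), not anything from a particular experimentation platform.

```python
import math

def srm_check(observed_counts, expected_ratios, alpha=0.001):
    """Flag sample ratio mismatch via a chi-square goodness-of-fit test.

    Hypothetical helper: compares observed variant counts to the planned
    allocation and returns (statistic, p_value, srm_detected).
    """
    if len(observed_counts) != 2:
        raise ValueError("this sketch handles the two-variant case only")
    total = sum(observed_counts)
    expected = [r * total for r in expected_ratios]
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed_counts, expected))
    # With 2 variants the statistic has 1 degree of freedom, and the
    # chi-square(1) survival function reduces to erfc(sqrt(x / 2)).
    p_value = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p_value, p_value < alpha

# The "50/50" test from above that landed 53/47 over 100k users:
chi2, p, flagged = srm_check([53_000, 47_000], [0.5, 0.5])

# A healthy split with ordinary sampling noise is not flagged:
chi2_ok, p_ok, ok_flagged = srm_check([50_210, 49_790], [0.5, 0.5])
```

At 100k users, a 53/47 split yields a chi-square statistic of 360 and a vanishingly small p-value: that imbalance is essentially impossible under correct randomization, which is exactly why it should stop the analysis rather than shade it.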

When I'm on the hook for revenue, SRM is a stop sign. I'd rather throw away a week of data than ship a pricing or onboarding change based on corrupted randomization.

Why sample ratio mismatch is a business problem, not a stats detail
