Network Effects in Experiments
Interference between experiment groups that occurs when one user's treatment affects another user's behavior, violating the independence assumption required for valid A/B tests.
What Are Network Effects in Experiments?
Network effects in experiments — also called interference or spillover — occur when a user's treatment assignment affects the outcomes of other users. This violates the Stable Unit Treatment Value Assumption (SUTVA), a foundational requirement for causal inference. When SUTVA is violated, standard A/B test estimators are biased — often in ways that are impossible to quantify without specialized designs.
Also Known As
- Marketing teams call it spillover, contagion, or cross-contamination.
- Growth teams say network effects, spillover, or interference.
- Product teams use interference, network effects, or SUTVA violation.
- Engineering teams refer to spillover, leakage, or cross-user interference.
- Data science teams call it SUTVA violation, interference, or spillover.
How It Works
Suppose you're testing a new sharing feature on a social product, with 50% of users getting the new share UI. A treated user shares a post to 100 control users — but those control users are now seeing content they wouldn't have seen without the treatment. Their engagement metrics rise even though they never saw the variant. Your control group is "contaminated" upward, which makes the measured treatment effect appear smaller than it truly is. Standard A/B test math gives a biased estimate: the lift you'd see shipping to everyone is larger than your test reports.
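The bias is easy to see in a toy simulation. This is a minimal sketch with made-up numbers: `true_lift` and `spillover` are assumed effect sizes, not estimates from any real experiment, and the spillover is modeled crudely as a uniform boost to every control user.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_lift = 1.0   # assumed per-user effect of the new share UI
spillover = 0.4   # assumed boost control users get from treated friends' shares

treated = rng.random(n) < 0.5
baseline = rng.normal(10.0, 2.0, n)

# Treated users get the full lift; control users are "contaminated"
# upward by content shared from the treatment group, shrinking the contrast.
outcome = baseline + true_lift * treated + spillover * (~treated)

# Naive difference-in-means recovers roughly true_lift - spillover,
# well below the true effect of 1.0.
naive_estimate = outcome[treated].mean() - outcome[~treated].mean()
print(round(naive_estimate, 2))
```

Under these assumptions the naive estimator lands near 0.6 instead of 1.0 — the experiment understates the lift by the full amount of the spillover.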
Best Practices
- Identify whether your product has network effects before designing experiments.
- Use cluster randomization — assign whole connected groups (friend clusters, geographies) to the same variant.
- Consider switchback designs that alternate treatment across time rather than users.
- For two-sided marketplaces, randomize on one side only and measure effects on both.
- Model the network structure explicitly when cluster randomization isn't feasible.
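Cluster randomization from the list above can be sketched as a deterministic hash on the cluster id rather than the user id, so every member of a connected group lands in the same arm. The function and experiment names here are illustrative, not a specific experimentation platform's API.

```python
import hashlib

def variant_for_cluster(cluster_id: str, experiment: str = "share_ui_v2") -> str:
    """Assign an entire cluster (friend group, geo, account) to one arm.

    Hashing the cluster id instead of the user id keeps connected users
    together; salting with the experiment name gives each experiment an
    independent assignment. Illustrative sketch, not a library API.
    """
    digest = hashlib.sha256(f"{experiment}:{cluster_id}".encode()).hexdigest()
    return "treatment" if int(digest, 16) % 2 == 0 else "control"

# Every lookup for the same cluster returns the same arm:
assert variant_for_cluster("friend_cluster_17") == variant_for_cluster("friend_cluster_17")
```

Note that analysis must then also happen at the cluster level (or use cluster-robust standard errors), since outcomes within a cluster are correlated and user-level variance estimates would be too optimistic.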
Common Mistakes
- Running standard user-level A/B tests on social or marketplace products and trusting the results.
- Assuming spillover is "small enough to ignore" without measuring it.
- Using geographic cluster randomization on a product where geography is unrelated to the network.
Industry Context
- SaaS/B2B: Team-based products have account-level network effects; randomize at account level.
- Ecommerce/DTC: Marketplaces (buyer-seller) have two-sided network effects; single-side randomization often needed.
- Lead gen: Usually minimal — users rarely interact with each other pre-conversion.
The Behavioral Science Connection
Network effects capture a fundamental truth about human behavior: we're influenced by what others around us do. Social proof, information cascades, and peer effects are all forms of network influence. Standard A/B tests assume we're measuring individual behavior in isolation; network effects remind us that individual behavior is embedded in social context.
Key Takeaway
If one user's treatment can affect another user's outcome, standard A/B tests are biased — use cluster randomization or switchback designs to recover causal inference.