
Difference-in-Differences

A quasi-experimental method that estimates causal effects by comparing the change in outcomes over time between a treatment group and a control group.

What Is Difference-in-Differences?

Difference-in-differences (DiD or diff-in-diff) estimates a causal effect by comparing the before-and-after change in a treated group to the before-and-after change in a control group. By "differencing the differences," it removes both time-invariant group-level differences and common time trends, isolating the effect attributable to the treatment itself.

Also Known As

  • Marketing team: "pre-post test with control"
  • Sales team: "before-and-after regional comparison"
  • Growth team: "DiD analysis," "diff-in-diff"
  • Data team: "difference-in-differences," "two-way fixed effects"
  • Finance team: "controlled before-after analysis"
  • Product team: "staggered rollout analysis"

How It Works

You launch a new onboarding flow for US users in March but keep the old flow for Canadian users. Pre-March conversion rates: US 5.0%, Canada 3.0%. Post-March: US 7.0%, Canada 3.5%. A naive before-after comparison credits the new flow with the full 2.0 pp US improvement. But Canada also rose (by 0.5 pp) due to seasonality. DiD calculates: (US change) - (Canada change) = 2.0 - 0.5 = 1.5 pp. The causal effect of the new onboarding flow is 1.5 percentage points, not 2.0.
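The arithmetic above can be sketched in a few lines (the numbers come from the example; the function name is illustrative):

```python
def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    # Subtract the control group's change from the treated group's change,
    # removing the common time trend (here, seasonality).
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Conversion rates in percentage points from the onboarding example
effect = diff_in_diff(treat_pre=5.0, treat_post=7.0, ctrl_pre=3.0, ctrl_post=3.5)
print(effect)  # 1.5
```

The same differencing generalizes to the regression form (outcome on group, period, and their interaction), where the interaction coefficient equals this quantity in the 2×2 case.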

Best Practices

  • Verify parallel pre-trends using at least 4-6 time periods before treatment.
  • Use event-study plots to visualize whether effects are consistent or drift over time.
  • Cluster standard errors at the level of treatment assignment (state, region, cohort).
  • Check for anticipation effects — the treated group changing behavior before the official rollout.
  • Combine DiD with synthetic control when the parallel-trends assumption is shaky.
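The pre-trend check in the first bullet can be roughed out by comparing least-squares slopes over the pre-treatment periods. A minimal sketch (the data and the 0.05 threshold are illustrative; this is a sanity check, not a substitute for an event-study plot):

```python
def slope(ys):
    """Least-squares slope of ys against period index 0..n-1."""
    n = len(ys)
    x_mean = (n - 1) / 2
    y_mean = sum(ys) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(ys))
    den = sum((x - x_mean) ** 2 for x in range(n))
    return num / den

# Six pre-treatment periods of conversion rates (%) for each group
treat_pre = [4.6, 4.7, 4.8, 4.9, 4.9, 5.0]
ctrl_pre = [2.6, 2.7, 2.8, 2.9, 2.9, 3.0]

gap = abs(slope(treat_pre) - slope(ctrl_pre))
print(gap < 0.05)  # True: pre-trends are roughly parallel
```

A large slope gap is a warning that the control group is a poor time-trend benchmark and that the DiD estimate may be biased.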

Common Mistakes

  • Assuming parallel trends without checking them visually.
  • Ignoring compositional changes — who's in the treatment group may change over time.
  • Under-clustering standard errors, which overstates precision.

Industry Context

SaaS and B2B teams use DiD for staged feature rollouts and regional pricing tests. Ecommerce and DTC use DiD for policy changes (free shipping rollouts, return-policy updates) and store-level interventions. Lead gen operations apply DiD when evaluating territory-level sales enablement or regional advertising.

The Behavioral Science Connection

DiD captures a behavioral reality: people's behavior shifts with seasons, economic conditions, and cultural events. A naive before-after comparison confuses environmental shifts with treatment effects. This is a form of the attribution bias behavioral economists warn about — mistaking environmental context for agent-driven change. DiD controls for the environment by using the control group as a time-trend benchmark.

Key Takeaway

DiD is the workhorse of quasi-experimental causal inference — use it whenever you have a staggered rollout and a credible parallel-trends argument.