Experiment Review Board

A cross-functional group that reviews experiment proposals for methodological rigor, ethical concerns, and strategic alignment before tests launch.

What Is an Experiment Review Board?

An experiment review board (ERB) is the experimentation program's quality gate — a small group of experienced practitioners who review test proposals before they launch. Done well, it catches methodological errors, prevents dark patterns, and raises the quality of every test. Done poorly, it becomes a bottleneck that kills test velocity.

The ERB is advisory, not authoritative. Its job is to make tests better, not to decide whether teams can test.

Also Known As

  • Marketing: Campaign review board, marketing experiment council
  • Sales: Sales experiment committee
  • Growth: Growth council, experimentation council
  • Product: Experiment council, test review committee
  • Engineering: Rollout review board, change advisory board (adapted)
  • Data: Analysis review board, methodology review

How It Works

A team submits a test proposal to the ERB: a new pricing page variant that reframes the free tier as "limited" rather than "free." The ERB reviews the proposal within 48 hours and flags three issues: (1) the sample size calculation assumes a 5% effect, but historical pricing tests have detected effects closer to 2%, so the test as planned is underpowered; (2) the "limited" framing might degrade brand trust (the ethical review flags a potential dark pattern); and (3) another test on the pricing page concludes next week and should finish first to avoid interaction effects between overlapping tests.

The team revises: increases runtime for adequate power, softens the "limited" framing to avoid dark pattern concerns, and sequences behind the concluding test. The test launches a week later — methodologically stronger and coordinated with concurrent work.
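
To see why the first flag matters, here is a minimal power calculation in Python using statsmodels. The 4% baseline conversion rate and the alpha/power settings are assumptions for illustration, not figures from this entry.

```python
# Per-arm sample size needed to detect a 5% vs. a 2% relative lift.
# Baseline rate, alpha, and power are illustrative assumptions.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.04                       # assumed baseline conversion rate
analysis = NormalIndPower()

for lift in (0.05, 0.02):             # assumed 5% lift vs. historical 2%
    # Cohen's h for the gap between variant and baseline conversion rates
    effect = proportion_effectsize(baseline * (1 + lift), baseline)
    n = analysis.solve_power(effect_size=effect, alpha=0.05, power=0.8,
                             alternative="two-sided")
    print(f"{lift:.0%} lift -> ~{n:,.0f} users per arm")
```

Required sample size scales roughly with the inverse square of the effect, so sizing for a 2% lift takes about six times the traffic of sizing for a 5% lift (the ratio is (0.05/0.02)² = 6.25), which is why the ERB asked for a longer runtime.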

Best Practices

  • Rotating membership — a statistician, a UX practitioner, and a senior product leader, with seats that rotate periodically.
  • Short SLAs — 24–48 hour turnaround for standard tests.
  • Actionable feedback, not just critique — "here's how to fix this," not "this is wrong."
  • Tiered review — not every test needs a full ERB review; tier by risk (see the first sketch after this list).
  • Measure ERB impact — is review actually improving test quality? (see the second sketch after this list)
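
One way to implement tiered review is a simple routing rule over a proposal's risk signals. The sketch below is illustrative: the Proposal fields, thresholds, and tier names are hypothetical, not a standard scheme.

```python
# A sketch of risk-based review routing; fields and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Proposal:
    touches_pricing: bool        # pricing/billing changes carry high risk
    alters_messaging_tone: bool  # potential dark-pattern surface
    traffic_share: float         # fraction of users exposed to the test

def review_tier(p: Proposal) -> str:
    if p.touches_pricing or p.alters_messaging_tone:
        return "full ERB review"         # high risk: methodology + ethics
    if p.traffic_share > 0.25:
        return "single senior reviewer"  # medium risk: methodology spot-check
    return "self-serve checklist"        # low risk: launch after a checklist

# The pricing-page example above would route to the full board.
print(review_tier(Proposal(touches_pricing=True,
                           alters_messaging_tone=True,
                           traffic_share=0.5)))  # -> full ERB review
```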

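Measuring ERB impact can start from the review log itself. This is a minimal sketch assuming a hypothetical log with per-review turnaround and whether review caught a power problem; the records and field names are invented for illustration.

```python
# Two illustrative ERB health metrics from a hypothetical review log:
# how often review fixes underpowered designs, and SLA adherence.
from statistics import median

review_log = [
    {"hours": 30, "underpowered_at_submit": True,  "fixed_in_review": True},
    {"hours": 44, "underpowered_at_submit": False, "fixed_in_review": False},
    {"hours": 26, "underpowered_at_submit": True,  "fixed_in_review": True},
]

underpowered = [r for r in review_log if r["underpowered_at_submit"]]
fix_rate = sum(r["fixed_in_review"] for r in underpowered) / len(underpowered)
sla_rate = sum(r["hours"] <= 48 for r in review_log) / len(review_log)

print(f"power issues fixed in review: {fix_rate:.0%}")
print(f"reviews within the 48h SLA: {sla_rate:.0%}")
print(f"median turnaround: {median(r['hours'] for r in review_log)}h")
```
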
Common Mistakes

  • ERB as bottleneck — slow reviews kill test velocity and discourage teams from submitting proposals.
  • Nitpicking without offering alternatives — teams learn to view the ERB as an obstacle.
  • Opaque decisions — reviews should be visible to the whole team, not private judgments.

Industry Context

SaaS/B2B: ERBs are most valuable where ethical stakes are high (enterprise customers, regulated industries).

Ecommerce/DTC: ERBs coordinate high-volume testing programs and catch the methodological shortcuts that emerge under velocity pressure.

Lead gen: Most small teams don't need formal ERBs; a single senior reviewer is sufficient.

The Behavioral Science Connection

ERBs counter overconfidence bias — the systematic tendency to overestimate the quality of one's own design. Every team believes their test is well-designed; ERB review reveals that most tests have improvable weaknesses. The review process converts internal overconfidence into external feedback loops.

Key Takeaway

An effective ERB frames its role as "helping you run a better test," not "deciding whether you can test" — and its success is measured by test quality improvement, not veto counts.