
Hypothesis-Driven Development

A product development approach where every feature or change starts with a falsifiable hypothesis about user behavior, and shipping is contingent on experimental validation.

What Is Hypothesis-Driven Development?

Hypothesis-driven development (HDD) inverts the traditional product development flow. Instead of "build it, ship it, hope it works," HDD requires teams to articulate what they expect to happen and why before writing any code. The hypothesis becomes the acceptance criterion — the feature isn't "done" when it's built, but when the hypothesis has been tested.

HDD transforms product debates from opinion battles into testable disagreements. When two teams disagree about what users want, the answer isn't "who's more senior" — it's "let's test both hypotheses."

Also Known As

  • Marketing: Hypothesis-driven marketing, evidence-based campaigns
  • Sales: Hypothesis-based selling, discovery-driven selling
  • Growth: HDD, experiment-driven growth
  • Product: Discovery-driven planning, lean product development
  • Engineering: Hypothesis-driven engineering, build-measure-learn engineering
  • Data: Evidence-based development, validated learning

How It Works

A product team proposes a new notifications feature. Under HDD, the feature brief must include a hypothesis: "We believe that sending an activity digest email to inactive users 7 days after signup will increase 14-day activation by 8–12% because the digest creates a reason to return before habitual non-use solidifies."

Engineering builds the minimum version needed to test the hypothesis. The team ships to 10% of users, measures 14-day activation, and compares to the control. If the hypothesis is confirmed, the feature graduates to full rollout. If refuted, the team learns something specific and pivots — rather than shipping the feature anyway because "we already built it."
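The comparison in the rollout above can be sketched as a two-proportion z-test, a standard way to check whether the treatment arm's 14-day activation differs from control. The sample sizes and conversion counts below are hypothetical, chosen only to illustrate the calculation:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates.

    conv_a/n_a: activations and users in the control arm.
    conv_b/n_b: activations and users in the treatment arm.
    Returns (absolute lift, z-statistic, two-sided p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pool the rates under the null hypothesis of no difference.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_b - p_a, z, p_value

# Hypothetical result: control activates 300 of 3000 users (10%),
# the digest arm activates 370 of 3000 (~12.3%).
lift, z, p = two_proportion_ztest(300, 3000, 370, 3000)
print(f"absolute lift: {lift:.3f}, z = {z:.2f}, p = {p:.4f}")
```

If the p-value clears the team's pre-registered threshold and the lift lands inside the predicted 8–12% range, the hypothesis is confirmed and the feature graduates; otherwise the specific miss (no lift, or a lift outside the predicted range) tells the team what to revisit.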

Best Practices

  • Make the hypothesis a required field in feature briefs — no hypothesis, no approval.
  • Connect hypotheses to input metrics that feed the North Star.
  • Close the loop after launch — was the hypothesis confirmed, refuted, or inconclusive?
  • Reject features that can't articulate a testable prediction — they're probably not worth building.
  • Treat hypothesis quality as a leading indicator of team maturity.
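The "required field" and "close the loop" practices above can be enforced in tooling. A minimal sketch, where the class and field names are illustrative rather than any standard schema:

```python
from dataclasses import dataclass

@dataclass
class HypothesisBrief:
    """Hypothetical feature-brief record enforcing 'no hypothesis, no approval'."""
    feature: str
    hypothesis: str                 # "We believe that X will change Y by Z because..."
    input_metric: str               # the input metric that feeds the North Star
    predicted_lift: tuple           # predicted range, e.g. (0.08, 0.12)
    outcome: str = "pending"        # closed later: "confirmed" | "refuted" | "inconclusive"

    def __post_init__(self):
        # Reject briefs that cannot articulate a testable prediction.
        if not self.hypothesis.strip():
            raise ValueError("feature brief rejected: hypothesis is required")

brief = HypothesisBrief(
    feature="activity digest email",
    hypothesis="Digest to inactive users at day 7 lifts 14-day activation 8-12%",
    input_metric="14-day activation",
    predicted_lift=(0.08, 0.12),
)
print(brief.outcome)
```

Keeping `outcome` as an explicit field makes the post-launch loop-closing step visible: a brief left at `"pending"` is a hypothesis that was never tested.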

Common Mistakes

  • Writing hypotheses after the fact to satisfy process — teams learn nothing from retroactive predictions.
  • Shipping features anyway after the experiment refutes their hypothesis, because of sunk cost.
  • Vague hypotheses that can't be falsified — "users will like this" is not testable.

Industry Context

SaaS/B2B: HDD is especially powerful in B2B, where features are expensive to build and wrong decisions are slow to unwind. Every feature should have a testable theory of value.

Ecommerce/DTC: HDD pairs naturally with high-velocity testing — the hypothesis is the test brief.

Lead gen: HDD works at landing page scale — every new page concept is a hypothesis about what will convert the targeted traffic.

The Behavioral Science Connection

HDD depoliticizes product decisions by removing authority bias — where ideas from senior people get adopted simply because of their seniority, not their merit. When every proposal requires a testable hypothesis, seniority doesn't substitute for evidence. This is the same mechanism that gives peer review its power in science.

Key Takeaway

HDD discipline compounds over time — building an organizational understanding of customer behavior that no competitor can replicate without the same investment.