Ever had an A/B test that "won" on Friday and "lost" by Tuesday? That swing is often just variance, but it can also be a sign the team is touching the dials mid-flight. When goals are aggressive and dashboards update in real time, it's easy to chase a green number.
An experiment pre-registration doc fixes that by doing one simple thing: it forces you to write down your intent before you see the outcome. Think of it like sealing your analysis plan in an envelope before you open the results.
What p-hacking looks like in growth teams (and why it happens)
P-hacking in growth work rarely looks like fraud. It looks like "being agile." Common patterns:
- Metric switching: You planned to judge on activation rate, but retention moved, so retention becomes the headline.
- Optional stopping: The test is called early when it looks good, or extended when it doesn't.
- Repeated peeks: You check results daily and stop the moment p < 0.05 (the simulation after this list shows how much this inflates false positives).
- Post-hoc segments: "It didn't work overall, but it worked for mobile users in Canada."
- Removing "bad" data: Excluding outliers, refunds, or "weird days" after seeing they hurt the result.
These behaviors are so common that many teams barely notice them anymore. But they quietly turn noise into fake wins.
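To see how much damage repeated peeking alone can do, here's a minimal simulation: a batch of A/A tests, where both arms share the same true conversion rate, peeked at daily with a two-proportion z-test and "called" the first time p < 0.05. The 5% conversion rate, traffic numbers, and function names are illustrative assumptions, not anyone's real setup.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def peeking_aa_test(n_days=14, users_per_day=1000, alpha=0.05):
    """One A/A test: both arms share the same 5% conversion rate,
    so any 'win' is a false positive. Peek once per day on the
    cumulative data and stop the moment p < alpha."""
    a_conv = b_conv = a_n = b_n = 0
    for _ in range(n_days):
        a_conv += rng.binomial(users_per_day, 0.05)
        b_conv += rng.binomial(users_per_day, 0.05)
        a_n += users_per_day
        b_n += users_per_day
        # Two-proportion z-test on everything collected so far.
        p_pool = (a_conv + b_conv) / (a_n + b_n)
        se = np.sqrt(p_pool * (1 - p_pool) * (1 / a_n + 1 / b_n))
        z = (a_conv / a_n - b_conv / b_n) / se
        p_value = 2 * stats.norm.sf(abs(z))
        if p_value < alpha:
            return True  # shipped a "winner" that is pure noise
    return False

trials = 2000
false_wins = sum(peeking_aa_test() for _ in range(trials))
print(f"False positive rate with daily peeking: {false_wins / trials:.1%}")
# A single pre-registered look at day 14 would hold near 5%;
# peeking every day typically multiplies that several times over.
```

If you genuinely need early looks, that's a legitimate design choice, but then the pre-registration doc should name a sequential method (group-sequential boundaries, always-valid p-values) instead of pretending it's a fixed-horizon test.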
What an experiment pre-registration doc is (for A/B tests)
Pre-registration is popular in academic research, but it maps cleanly to product, marketing, and lifecycle tests. You write down, before launch:
- what you're changing
- what "success" means
- how long you'll run
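The "how long" answer shouldn't be a gut feel; you can derive a fixed run length from a power calculation before launch. Here's a minimal sketch using the standard normal-approximation formula for a two-sided, two-proportion test; the 5% baseline, the 10% relative lift, and the function name `sample_size_per_arm` are illustrative assumptions.

```python
import math
from scipy import stats

def sample_size_per_arm(baseline, rel_lift, alpha=0.05, power=0.8):
    """Users needed per arm to detect a relative lift over a baseline
    conversion rate, using the normal-approximation formula for a
    two-sided two-proportion test."""
    p1 = baseline
    p2 = baseline * (1 + rel_lift)
    z_alpha = stats.norm.ppf(1 - alpha / 2)  # two-sided critical value
    z_power = stats.norm.ppf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_power) ** 2 * variance / (p2 - p1) ** 2)

# Example: 5% baseline activation, smallest lift you care about is +10% relative.
n = sample_size_per_arm(0.05, 0.10)
print(f"~{n:,} users per arm; divide by daily traffic to get the run length.")
```

Writing that number into the doc turns "run until it looks done" into "run for N users per arm, then look once."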