Atticus Li leads Applied Experimentation at NRG Energy (Fortune 150), where he runs 100+ experiments per year and generated $30M in verified revenue impact in 2025. He writes about the operational reality of building experimentation programs that survive contact with organizational politics.
I sat in a budget meeting last year where three teams were competing for the same pool of growth funding. Brand marketing presented a study showing their campaigns drove "brand awareness" up 12 points. The performance marketing team showed ROAS of 4.2x on their paid channels. And I presented the experimentation program's $30M in verified revenue impact.
All three teams were asking for more money. Only one of us could prove causation.
The Attribution Problem Nobody Talks About Honestly
Every marketing function has an attribution problem. The question is how big the asterisk is.
Brand marketing generates what it calls "brand value." Awareness goes up. Favorability improves. Net Promoter Score ticks higher. These are measured through surveys, brand tracking studies, and sentiment analysis. The implicit claim is that this awareness eventually converts to revenue.
The asterisk is massive. Brand awareness studies measure correlation, not causation. When your brand awareness goes up at the same time as a product launch, a PR push, and a seasonal demand spike, which one actually drove the business result? Brand can't tell you. It can show you a number went up. It cannot prove that number caused anyone to buy anything.
I'm not saying brand doesn't matter. It clearly does. But the measurement methodology is so indirect that any revenue claim requires a chain of assumptions long enough to stretch across the conference room. The CFO knows this. They nod politely at the brand lift numbers and mentally discount them by 50-80%.
Performance marketing is better positioned. They have attribution models. They can show that a user clicked an ad and then purchased. The ROAS calculation is relatively clean for last-click attribution.
But performance marketing has its own asterisk. Attribution models are flawed. Last-click attribution gives all credit to the final touchpoint, even if the user was already going to buy. Multi-touch attribution distributes credit using models that are essentially educated guesses. The "4.2x ROAS" number includes users who would have converted organically. Incrementality is the real question, and most performance teams can't answer it rigorously.
There's also the platform self-reporting problem. Google tells you how well Google ads performed. Meta tells you how well Meta ads performed. Both platforms have incentives to overcount their own contribution. Cross-platform deduplication is a nightmare that most teams solve with heuristics and hope.
Experimentation operates differently. An A/B test randomly assigns users to groups. One group sees the treatment. The other doesn't. The difference in outcomes between the groups is the causal effect of the treatment. This is the same methodology used in clinical trials. It's the gold standard for establishing cause and effect.
When I say an experiment generated $2M in revenue impact, that means the treatment group produced $2M more revenue than the control group under random assignment, and the difference cleared our statistical significance threshold. There's no attribution model. There's no chain of assumptions. There's a controlled experiment that isolated the variable.
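To make the mechanics concrete, here's a minimal sketch of that measurement on simulated data. The conversion rates, audience size, and the 0.05 significance threshold are illustrative assumptions, not our actual program parameters; a real pipeline pulls assignments and revenue from the experimentation platform's warehouse.

```python
# Sketch: quantifying verified revenue impact from a single A/B test.
# All numbers are hypothetical and the data is simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated revenue per user: most users spend $0, a small share convert.
control   = rng.exponential(scale=1.0, size=200_000) * (rng.random(200_000) < 0.030)
treatment = rng.exponential(scale=1.0, size=200_000) * (rng.random(200_000) < 0.033)

lift_per_user = treatment.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

# Only count the result if it clears the significance threshold.
if p_value < 0.05:
    shipped_audience = 5_000_000  # users who will see the winning variant post-launch
    incremental_revenue = lift_per_user * shipped_audience
    print(f"Lift per user: ${lift_per_user:.4f} (p = {p_value:.4f})")
    print(f"Projected incremental revenue: ${incremental_revenue:,.0f}")
else:
    print(f"Not significant (p = {p_value:.4f}); no revenue impact claimed.")
```

The point of the sketch is the discipline, not the numbers: no lift gets counted unless the randomized comparison itself produces it and the result survives the significance check.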
Why This Matters in Budget Conversations
CFOs are trained to be skeptical of marketing claims. They've seen enough inflated ROAS numbers and brand lift studies to know that marketing attribution is more art than science. This skepticism is reasonable. Most marketing measurement genuinely cannot establish causation.
Experimentation can. And this is an enormous strategic advantage that most experimentation leaders don't leverage aggressively enough.
When you walk into a budget meeting and say, "My program produced $30M in verified revenue impact through controlled experiments," you're speaking a language the CFO understands. You're presenting evidence, not projections. You're showing causation, not correlation. You're describing a methodology that would hold up to scrutiny from a statistician, not just a marketer.
Every other function in the room struggles with attribution. You have controlled experiments. Use that advantage.
How to Present the Advantage Without Making Enemies
Now, there's a political reality to navigate here. If you walk into the budget meeting and explicitly position experimentation as more rigorous than brand marketing and performance marketing, you'll win the argument and lose the war. Those teams will stop collaborating with you, and collaboration is essential — many of your best experiments optimize the traffic those teams generate.
Here's how I frame it instead.
Lead with the methodology, not the comparison. I don't say "our measurement is better than yours." I say, "One of the unique strengths of experimentation is that our methodology allows us to isolate causal impact through controlled experiments. This gives leadership a high level of confidence in the revenue numbers we report."
Acknowledge the ecosystem. "These experiments don't happen in a vacuum. The traffic that powers our tests comes from the acquisition channels our performance marketing team manages. The brand awareness our brand team builds influences how users respond to the experiences we optimize. Our $30M impact is built on the foundation the entire marketing organization creates."
Frame it as portfolio diversification. "A strong marketing investment portfolio includes brand building for long-term awareness, performance marketing for scalable acquisition, and experimentation for conversion optimization and causal measurement. Each plays a different role, and together they compound."
This framing gets you the budget while preserving the relationships. The CFO understands the implication — you have the strongest evidence — without you needing to say it explicitly. And the other teams don't feel attacked.
The Specific ROI Calculation
Here's the formula I use to present experimentation ROI to finance.
Total verified revenue impact from winning experiments in the fiscal year. This is the sum of incremental revenue generated by each winning test, calculated as the difference in revenue per user between treatment and control, multiplied by the total user population that saw the winning variant after it was shipped.
Minus the total cost of the experimentation program. This includes headcount, tools, platform costs, and the opportunity cost of development resources used for test implementation.
The ratio is the program ROI. At NRG, our program consistently delivers 10x+ return on investment. For every dollar spent on the experimentation team, we generate at least ten dollars in verified incremental revenue.
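Here's a minimal sketch of that arithmetic, with hypothetical lifts, audiences, and cost figures standing in for the real inputs.

```python
# Sketch of the program ROI calculation described above. The per-test lifts,
# shipped audiences, and program cost are hypothetical placeholders, not NRG figures.

# Verified impact per winning test:
# (revenue per user in treatment - revenue per user in control) x shipped audience.
winning_experiments = [
    {"lift_per_user": 0.40, "shipped_audience": 5_000_000},   # hypothetical test A
    {"lift_per_user": 0.15, "shipped_audience": 20_000_000},  # hypothetical test B
    {"lift_per_user": 1.10, "shipped_audience": 4_000_000},   # hypothetical test C
]

verified_impact = sum(e["lift_per_user"] * e["shipped_audience"] for e in winning_experiments)

# Fully loaded program cost: headcount, tools, platform costs, and the
# opportunity cost of development resources used for test implementation.
program_cost = 900_000

roi_multiple = verified_impact / program_cost
print(f"Verified revenue impact: ${verified_impact:,.0f}")
print(f"Program cost:            ${program_cost:,.0f}")
print(f"Program ROI:             {roi_multiple:.1f}x")
```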
Try getting that number from a brand awareness study. You can't, because the measurement doesn't support it. Not because brand doesn't work — because brand measurement can't isolate causation.
The CFO Talk Track
When I present to finance, I use this structure.
Start with the number. "$30M in verified revenue impact." Don't bury the lead. Finance people respect directness.
Explain the methodology. "This is measured through controlled A/B experiments where we randomly assign users to groups, measure the difference in revenue between groups, and only count results that meet our statistical significance threshold. It's the same methodology used in clinical trials."
Show the ROI. "The fully-loaded cost of the program is approximately $X. That's an ROI of [Y]x. For every dollar invested, the program returns [Y] dollars in verified incremental revenue."
Compare to alternatives. "Most marketing functions measure correlation or attribution-model-adjusted estimates. Experimentation is one of the few functions that can demonstrate causal revenue impact. This gives the organization high-confidence data for decision-making."
Make the ask. "With additional investment of $X, we can expand the program to [additional channels/products/markets], with a projected return of [Y] based on our historical conversion rate of experiments to revenue."
The CFO will ask tough questions. That's their job. But the rigor of the methodology gives you answers that other functions can't provide. Lean into that.
The Compounding Effect Nobody Mentions
There's one more dimension to experimentation ROI that doesn't show up in any other marketing function's math: compounding.
When you ship a winning experiment, that improvement is permanent. It doesn't stop working when you stop spending money. A 5% conversion rate improvement shipped in January is still generating incremental revenue in December, and in the following January. And because each new win builds on the improved baseline the last one left behind, the gains compound.
Brand awareness decays when you stop advertising. Performance marketing stops generating leads when you stop paying for clicks. Experimentation improvements persist for as long as the experience they shipped in stays live.
The total lifetime value of a shipped experiment is the annualized impact multiplied by the expected lifespan of the change. For most digital experiences, that's measured in years, not months. The $30M in 2025 impact will continue generating revenue in 2026 and beyond.
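A minimal sketch of that lifetime-value math, with hypothetical figures; the annual decay haircut is an extra conservative assumption on my part, not part of the formula above.

```python
# Sketch: lifetime value of one shipped winning experiment, with hypothetical inputs.
annualized_impact = 2_000_000    # verified incremental revenue per year from the shipped change
expected_lifespan_years = 3      # years before the experience is redesigned or rebuilt
annual_decay = 0.10              # optional conservative haircut: assume the effect erodes 10% per year

lifetime_value = sum(
    annualized_impact * (1 - annual_decay) ** year
    for year in range(expected_lifespan_years)
)
print(f"Lifetime value of the shipped change: ${lifetime_value:,.0f}")
```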
That's the real argument. Not just ROI in a single year. Permanent, compounding improvements that other functions can't match.
---
_Ready to quantify the ROI of your experimentation program? GrowthLayer's experimentation calculators help you project revenue impact, calculate statistical significance, and build the business case for testing — all for free._