Atticus Li has presented experimentation results to C-suite executives at NRG Energy and Silicon Valley Bank, learning that the biggest challenge isn't the analysis — it's translating data into decisions for people seeing it for the first time. This post covers the specific techniques that bridge the gap.

The Textbook Analogy

Here's the mistake I made early in my career, and the mistake I see analytics professionals make constantly: you spend three weeks analyzing data, build a deck with 35 slides of charts and findings, walk into the executive meeting, and present it all.

You just handed someone a textbook and asked them to pass a test.

Think about what you're actually doing. You have spent weeks immersed in this data. You understand every chart because you built it. You know why that one segment behaves differently because you investigated it for two days. You understand the methodology because you designed it.

The executive in the room has none of that context. They walked in from a meeting about supply chain costs. They have 30 minutes before their next one-on-one. They're looking at your chart and trying to figure out what the y-axis means while you're already explaining the third insight on slide 12.

You've lost them. Not because they're not smart — they're usually smarter than you about the business as a whole. You've lost them because you presented information in the order you discovered it, not in the order they need to receive it.

This realization changed how I present data, how I build reports, and ultimately how I got buy-in for growing NRG's experimentation program from 20 tests per year to 100+ per year.

Rule 1: Surface the Most Important Things

Your analysis probably uncovered 15 findings. Maybe 20. You're proud of all of them because each one took effort to uncover. But here's the uncomfortable truth: only 3-4 of those findings matter for the decision the executive needs to make.

Your job is to figure out which 3-4, and lead with those.

This is an editing problem, not an analysis problem. And it requires you to understand what decision is being made. Are they deciding whether to invest more in experimentation? Lead with the revenue impact of recent tests. Are they deciding which brand to prioritize? Lead with the conversion rate and revenue-per-customer comparison across brands. Are they deciding whether to approve a specific test? Lead with the projected impact and the risk of not testing.

At NRG, I produce detailed executive reports that communicate testing insights, highlight performance, and project the impact of ongoing optimization. But those reports aren't data dumps. Every chart, every number, every bullet point is there because it serves the decision being made in that meeting.

What I cut is just as important as what I include. I cut methodology sections (unless someone asks). I cut "interesting but not actionable" findings. I cut segment analyses that don't change the recommendation. I cut anything that makes me look thorough but doesn't help the audience do their job.

The test I apply to every slide: If I remove this slide, does the recommendation change? If the answer is no, the slide comes out.

Rule 2: Lead with Recommendations, Not Data

Most analysts present like this:

  1. Here's the data
  2. Here's what we found
  3. Here's what we recommend

Flip it. Lead with the recommendation:

  1. Here's what I recommend we do
  2. Here's why (the key finding that supports it)
  3. Here's the data if you want to go deeper

This isn't about being presumptuous. It's about respecting the audience's time and cognitive load. Executives are decision-makers. They want to know what you think they should do. Then they want to understand why. Then, if they disagree or want to probe, they want the supporting data.

When I present test results at NRG, the first slide after the title is always: "Recommendation: Ship variant B. Projected annual impact: $299K. Here's why." The next 3-4 slides support that recommendation. The appendix has the full data for anyone who wants to dig in.

Most of the time, the conversation happens on that first slide. The executive asks questions, I answer them, and we move to a decision. The supporting slides exist as backup, not as the presentation.

Your perspective matters. This is the part that takes confidence to internalize. You're not just delivering data — you're the person who understands this data better than anyone in the room. Your recommendation carries weight. If you present data without a recommendation, you're implicitly saying "I don't know what to do with this." That doesn't build confidence in your program.

Rule 3: Be a Consultant, Not a Reporter

This is the distinction that changed my career. A reporter delivers information. A consultant guides decisions.

When I was at SVB, I initially operated as a reporter. I built dashboards, generated weekly reports, and presented findings. The reports were accurate. The dashboards were well-designed. But the impact was limited because I was waiting for stakeholders to tell me what to do with the data.

The shift happened when I started coming to meetings with a point of view. Not just "here's the data" but "here's what I think we should do, and here's why the data supports it." That's a fundamentally different relationship with stakeholders.

But here's what makes this nuanced: Being a consultant doesn't mean being a dictator. The executives have context you don't have. They know about budget constraints you're not aware of. They know about strategic priorities that haven't been communicated to your level yet. They have relationships with customers and partners that inform their perspective.

So the consultant mindset is: "Here's my recommendation based on the data. What context am I missing?" That question — what context am I missing — transforms the conversation from a presentation into a collaboration. The executive feels respected because you're acknowledging their expertise. And you get information that makes your next analysis better.

At NRG, this approach built the trust that let me grow the program. When I recommended scaling experimentation to a new brand, the finance team didn't just look at my numbers. They trusted that I'd thought about the implications and that I was open to hearing what I'd missed.

The NRG Approach: Executive Reports That Drive Action

Let me get specific about what my executive reporting looks like at NRG.

I produce detailed executive reports using data storytelling techniques to communicate testing insights. These reports are not slide decks with charts — they're narrative documents that tell a story with data as supporting evidence.

The structure:

Executive summary (half a page): What we tested, what happened, what I recommend. Anyone who reads only this section should have enough to make a decision.

Key findings (1-2 pages): The 3-4 most important insights, each with a supporting data point and a clear "so what." Not "Variant B had a 12% higher conversion rate" but "Variant B's simplified enrollment flow added approximately 100 enrollments over 46 days, projecting to $299K in annual revenue." (The short sketch after this outline shows roughly how that annualization works.)

Performance highlights: Which tests won, which lost, what we learned from the losses. Losses are not failures — they're information that prevents bad decisions. I frame them that way explicitly.

Impact projection: Using Atticus Li's PRISM Method, every test is tied to projected annual revenue impact. This section connects individual test results to the program's cumulative value, reinforcing that experimentation isn't a cost center — it's a revenue driver.

Next steps: What's in the pipeline, what we need from stakeholders, what decisions are pending.
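
To make the impact-projection math concrete, here is a minimal sketch of the annualization behind a figure like the $299K key finding above, written in Python purely for illustration. The revenue-per-enrollment value is a hypothetical placeholder, not a number from any actual report.

    # Back-of-the-envelope annualization of a winning test's impact.
    # revenue_per_enrollment is a hypothetical placeholder, not a figure
    # from the actual report.
    incremental_enrollments = 100    # extra enrollments attributed to the winning variant
    test_duration_days = 46          # length of the test window
    revenue_per_enrollment = 375.0   # assumed average annual revenue per enrollment

    # Scale the lift observed during the test window to a full year.
    annualized_enrollments = incremental_enrollments / test_duration_days * 365

    # Express the enrollment lift in dollars so the finding reads as revenue, not percentages.
    projected_annual_revenue = annualized_enrollments * revenue_per_enrollment

    print(f"Projected annual enrollment lift: {annualized_enrollments:,.0f}")
    print(f"Projected annual revenue impact: ${projected_annual_revenue:,.0f}")

The point of the arithmetic isn't precision; it's translating a lift observed during a test window into the annual dollar terms executives budget in.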

The language matters. I avoid jargon unless the audience uses it. "Bayesian probability" becomes "confidence level." "Statistical significance" becomes "we're confident this result is real, not random." "MDE" becomes "the smallest improvement worth detecting." Translation isn't dumbing down — it's respecting the audience's domain expertise, which is business strategy, not statistics.

The SVB Approach: Automating Self-Service

At Silicon Valley Bank, the challenge was different. The marketing team was generating 6 separate manual reports every week. Each report took hours to compile, and by the time it was distributed, the data was already stale. Fifteen non-analytics team members needed access to marketing performance data but had to wait for the analytics team to pull it for them.

I consolidated those 6 manual reports into 1 automated Looker dashboard. The dashboard updated daily. It was designed for self-service — anyone with access could filter by campaign, channel, time period, or segment without asking the analytics team for a custom pull.

The result was an 85% improvement in reporting efficiency. But the bigger win was cultural. When 15 team members can access their own data, they start asking better questions. Instead of "what happened last week?" they ask "why did email performance drop on Tuesday?" They're already past the data and into the analysis because the data is always available.

Design principles for self-service dashboards:

  • Start with the question, not the data: Each section of the dashboard answered a specific business question. "How are our campaigns performing?" not "Here's a table of metrics."
  • Progressive disclosure: The top level showed KPIs. Clicking into a KPI showed the trend. Clicking into the trend showed the breakdown by segment. This layered approach prevented information overload while giving power users the depth they wanted.
  • Consistent visual language: Every chart used the same color coding. Green meant good, red meant investigate. No one had to re-learn the visual system when moving between sections.
  • Built-in context: Annotations on charts explained anomalies. A spike in traffic on July 4th had a note: "Holiday traffic pattern — not comparable to normal periods." This prevented 15 people from independently asking the analytics team the same question.

Tools and Techniques for Data Storytelling

Beyond the structural principles, here are specific techniques that make the difference:

Chart selection: This matters more than most people think. A bar chart compares categories. A line chart shows trends over time. A scatter plot shows relationships between variables. Using the wrong chart type forces the audience to decode the visualization before they can understand the insight. I've watched executives stare at a stacked area chart for 30 seconds trying to figure out what it means. That's 30 seconds of cognitive budget wasted on decoding instead of deciding.

The "so what" test: Before every chart goes into a presentation, I write the "so what" statement. If I can't complete the sentence "This chart shows that __, which means we should __," the chart isn't ready. Sometimes the chart is fine but the insight isn't clear. Sometimes the insight is clear but the chart doesn't effectively visualize it. Either way, the chart doesn't go in until both parts are solid.

Narrative structure: Data presentations should follow a story arc. Situation (where we are), complication (the problem or opportunity), resolution (the recommendation). This is basic storytelling structure, and it works because humans are wired to follow stories, not data tables.

Annotations over legends: Instead of making people cross-reference a legend, I annotate directly on the chart. An arrow pointing to the inflection point with "Variant B launched here" communicates faster than a legend entry with a date that the audience has to match to the x-axis.
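
As a concrete example of annotating directly on the chart, here is a minimal matplotlib sketch; the dates and conversion numbers are invented for illustration.

    import matplotlib.pyplot as plt

    # Illustrative data: daily conversion rate with a step up after a variant launch.
    days = list(range(1, 31))
    launch_day = 15
    conversion_rate = [2.1 + (0.4 if d >= launch_day else 0.0) + 0.05 * (d % 3) for d in days]

    fig, ax = plt.subplots(figsize=(8, 4))
    ax.plot(days, conversion_rate, color="tab:blue")

    # Point an arrow at the inflection point instead of relying on a legend entry.
    ax.annotate(
        "Variant B launched here",
        xy=(launch_day, conversion_rate[launch_day - 1]),  # arrow tip on the data point
        xytext=(4, 2.55),                                  # label placed in open space
        arrowprops=dict(arrowstyle="->", color="black"),
    )

    ax.set_xlabel("Day of test")
    ax.set_ylabel("Conversion rate (%)")
    ax.set_title("Enrollment conversion rate during the test")
    plt.tight_layout()
    plt.show()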

Getting Buy-In for Experimentation Budget

This is where data storytelling has its highest-stakes application. Experimentation programs need budget — tools, headcount, agency support. Getting that budget requires convincing finance that experimentation is a revenue investment, not a marketing expense.

Here's how financial storytelling helped grow NRG's experimentation program, including its tools and team:

Cumulative impact framing: Instead of presenting individual test results, I present cumulative projected annual revenue from all winning tests. "This quarter's winning tests project $1.2M in annual revenue" is a fundamentally different conversation than "Test #47 got a 12% lift." The first number gets attention from the CFO's team. The second number gets a polite nod.

Cost-per-test ROI: I calculate the fully loaded cost of running the experimentation program (tools, time, agency costs) and divide the projected revenue impact by that cost. When the ROI is 5x or 10x, the budget conversation becomes straightforward. "For every $1 we spend on experimentation, we project $8 in annual revenue." That's language finance speaks.
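
Here is a minimal sketch of that framing; every dollar figure is a made-up placeholder, and the real inputs are the program's fully loaded cost and the cumulative projected revenue from winning tests.

    # Illustrative cost-per-test ROI framing. All figures are hypothetical placeholders.
    tooling_cost = 120_000      # annual testing and analytics tools
    headcount_cost = 200_000    # fully loaded cost of team time spent on the program
    agency_cost = 80_000        # external agency and contractor support

    fully_loaded_cost = tooling_cost + headcount_cost + agency_cost

    projected_annual_revenue = 3_200_000  # cumulative projection from winning tests

    roi_multiple = projected_annual_revenue / fully_loaded_cost

    print(f"Fully loaded program cost: ${fully_loaded_cost:,.0f}")
    print(f"Projected annual revenue:  ${projected_annual_revenue:,.0f}")
    print(f"Every $1 spent projects to ${roi_multiple:.2f} in annual revenue")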

The EBITDA connection: Revenue projections are good. EBITDA impact projections are better. When I can show that experimentation directly improves the metrics the CFO tracks — not just top-line revenue but contribution to operating profit — the program moves from "nice to have" to "strategic priority."

Opportunity cost of not testing: This is the argument that closes the deal. If we don't test, we're making decisions based on opinions. Those opinions have a cost — the revenue we leave on the table by shipping the wrong variant, the wrong copy, the wrong layout. I frame this not as a scare tactic but as a business reality: the alternative to testing isn't free. It's just invisible.

The Uncomfortable Truth About Data Storytelling

I'll end with something that most data storytelling content won't say: the best data storytelling sometimes means presenting findings that make people uncomfortable.

When a test championed by a senior VP loses, I have to present that. When the data shows that a beloved brand initiative isn't working, I have to surface that. When the numbers suggest we should invest less in a channel that someone's team depends on, I have to say it.

Data storytelling isn't just about making data beautiful or accessible. It's about being honest in a way that's constructive. That means framing a losing test as "we now know this approach doesn't work, which saves us from scaling a bad decision." It means presenting uncomfortable findings with empathy for the people who made the original decision. It means being right without being righteous.

That's the consultant mindset. Not just "here's what the data says" but "here's what the data says, and here's how we move forward together."

If you're struggling to get executive buy-in for your analytics or experimentation program, I'd be happy to talk through your approach. Reach me at [email protected]. The answer is almost never "better data." It's almost always "better storytelling."

Atticus Li

Leads applied experimentation at NRG Energy. $30M+ in verified revenue impact through behavioral economics and CRO.