The data analysts who deliver the most value in a company are not the ones producing the most reports. They are the ones whose work most often ends in a better decision being made. That shift, from reporting to decision-influence, is the single most important evolution a data analyst can make, and most analysts never make it.
I came up through data analytics before moving into experimentation, and I can tell you the two disciplines are closer than most teams realize. Good experimentation is applied data analysis. Good data analysis — the kind that moves the business — has the same properties as good experimentation: a clear question, clean measurement, honest reporting, and an explicit translation to a decision.
The Two Modes of Data Analysis
There are two modes a data analyst can operate in, and they produce very different outcomes.
Reporting mode. You build dashboards, run queries, answer questions. Someone asks "what happened last week?" and you tell them. Someone asks "can you pull this metric?" and you pull it. The deliverable is the number. Success is measured in report volume and response time. This mode is necessary but limited — it keeps the business running but rarely changes it.
Decision mode. You look at the business, find the place where a better decision could be made, and work backward from there. You decide what data would actually inform that decision, pull it, analyze it, interpret it, and present it to the decision-maker in their language. The deliverable is the decision, not the number. Success is measured in how often your work actually changes what the business does.
Most analysts are stuck in reporting mode because the organization treats them as a request queue. The escape is not to refuse requests — that damages the relationship — but to do the decision-mode work alongside the reporting work, proving its value until the organization starts asking for it specifically.
Storytelling with Data: The Analyst's Real Job
"When stakeholders look at the data, it's the first time they're seeing it. If you dump a bunch of information on them, it's like opening a textbook for the first time and expecting them to take a test. Surface the most important things, guide them toward the decision." — Atticus Li
"Data-driven marketing isn't just about the data — it's about giving stakeholders data-backed insight to help them make really good decisions. You're a stakeholder too. You're using data explanations, charts, and storytelling to help convince people and guide them with your recommendations." — Atticus Li
Here is a metaphor that has stuck with me. When an analyst finishes a piece of work, they have spent hours or days building context. They understand the data's weirdness, the edge cases, the definitions, the caveats. By the time they present the work to a stakeholder, the analyst has this rich mental model of what is going on.
The stakeholder has none of that context. When they open the deck, it is the first time they have seen any of it. If the analyst dumps everything they know into the presentation — every chart, every caveat, every nuance — the stakeholder feels like they are being asked to take an exam they did not study for. They get overwhelmed and either push back defensively or nod along without really understanding.
The analyst's job is to do the synthesis work for the stakeholder, not to leave it to them. Surface the most important findings. Present the decision that the data supports. Explain the reasoning in plain language. Put the caveats in the appendix for anyone who wants to verify. This is not dumbing things down. It is respecting the stakeholder's cognitive budget.
The Data Dictionary Rule
"Always understand the data dictionary. There's always going to be weirdness with how things are catalogued or framed. The definitions might not be exactly what the name suggests. Every company has differences — you have to understand each one's." — Atticus Li
One of the first things I do when I start working with a new company's data is read the data dictionary. If they do not have one, I build one. This is the most unglamorous work in analytics, and it is also the most important, because almost every serious analytical mistake I have seen traces back to someone using a metric without understanding how it was defined.
Every company defines its metrics differently. A "unique user" in Google Analytics 4 is not the same as a "unique user" in Adobe Analytics. A "conversion" at one company might mean a completed purchase; at another, it might mean starting a checkout. A "session" might have a 30-minute inactivity cutoff or a 15-minute one. These definitional differences are invisible until they bite you, and then they bite hard.
The data dictionary rule is simple: before you analyze any metric, know exactly how it is defined and how it is instrumented. If you cannot answer "what counts as X and what does not count as X" without guessing, stop analyzing and go find the answer. This is not pedantry. It is the only thing standing between you and a confidently wrong conclusion.
This is especially true when you are comparing data across systems. If you are building a dashboard that combines Adobe Analytics, a CRM system, and a product database, each one has its own definitions and its own idiosyncrasies. The combined view is only as reliable as your understanding of how the pieces fit together.
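To make the definitional point concrete, here is a minimal sketch of how something as basic as the session inactivity cutoff changes the numbers. The events, column names, and cutoffs are hypothetical, and real analytics tools apply more rules than this, but the same raw clickstream yields different session counts depending on which definition you assume.

```python
import pandas as pd

# Hypothetical clickstream: one row per event, per user.
# Column names and timestamps are illustrative, not from any specific tool.
events = pd.DataFrame({
    "user_id": ["a", "a", "a", "b", "b"],
    "timestamp": pd.to_datetime([
        "2024-03-01 09:00", "2024-03-01 09:20", "2024-03-01 09:55",
        "2024-03-01 10:00", "2024-03-01 10:40",
    ]),
})

def count_sessions(df: pd.DataFrame, inactivity_minutes: int) -> int:
    """Count sessions, starting a new one after a gap longer than the cutoff."""
    df = df.sort_values(["user_id", "timestamp"])
    gaps = df.groupby("user_id")["timestamp"].diff()
    # A session starts at a user's first event or after a long enough gap.
    session_starts = gaps.isna() | (gaps > pd.Timedelta(minutes=inactivity_minutes))
    return int(session_starts.sum())

print(count_sessions(events, 30))  # 4 sessions with a 30-minute cutoff
print(count_sessions(events, 15))  # 5 sessions with a 15-minute cutoff
```

Same events, different answer. Multiply that by every metric in a combined dashboard and the data dictionary stops looking optional.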
Working With Experimentation Teams
Data analysts and experimentation teams are natural partners, and when the relationship works, both sides get dramatically better at their jobs. The analyst brings depth on the data infrastructure, segmentation, and historical patterns. The experimentation team brings discipline on hypothesis testing, causal inference, and statistical design.
Here are the ways the partnership adds value:
Pre-test work. Before a test launches, the analyst can help validate the metrics and tracking. Sample ratio mismatch, instrumentation gaps, and data pipeline issues are the kinds of problems an analyst will catch faster than an experimentation lead; a quick sample ratio mismatch check is sketched after this list. Catching them before the test starts saves weeks of wasted traffic.
Segmentation analysis. The most interesting findings from an experiment often come from segmentation. Did the variant win for mobile users but lose for desktop? Did it work for new users but fail for returning ones? A good analyst can slice the test data in ways that transform a generic result into a specific strategy.
Historical context. An experimentation lead usually focuses on the test window. An analyst can provide context: how did this metric behave before the test, during the test, and after? Were there seasonal patterns that affected the result? Has a similar test been run before? The analyst's historical view keeps the experimentation team from mistaking noise for signal.
Post-test validation. After a winning test ships, the analyst can monitor whether the lift persists in the long run. Novelty effects fade. Seasonal patterns distort. A 21% lift measured during a test window might be a 9% lift over a full year. The analyst's ongoing view is the only way to know which one you actually have.
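As a concrete example of the pre-test work, here is a minimal sample ratio mismatch check. It is a sketch with hypothetical visitor counts, using a chi-square goodness-of-fit test; the 0.001 threshold is a common convention for SRM alerts, not a rule.

```python
from scipy import stats

def srm_check(visitors_a: int, visitors_b: int,
              expected_split: float = 0.5, alpha: float = 0.001) -> bool:
    """Flag a sample ratio mismatch with a chi-square goodness-of-fit test.

    Returns True if the observed split deviates from the intended split
    more than chance alone would plausibly explain.
    """
    total = visitors_a + visitors_b
    expected = [total * expected_split, total * (1 - expected_split)]
    _, p_value = stats.chisquare([visitors_a, visitors_b], f_exp=expected)
    return p_value < alpha

# Hypothetical counts: an intended 50/50 split that came out 50,000 vs 48,600.
if srm_check(50_000, 48_600):
    print("Possible SRM: investigate targeting, redirects, or bot filtering.")
else:
    print("Split looks consistent with the intended allocation.")
```

When this fires, the fix is rarely statistical; it is usually a redirect, a targeting rule, or bot filtering treating the two arms differently.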
Behavioral Economics and Analytics
One of the ways I think analysts can differentiate themselves is by combining analytical depth with behavioral economics fluency. The reason is simple: behavioral economics gives you hypotheses about why users behave the way they do. Analytics gives you the data to test those hypotheses. Either one alone is less useful than both together.
When I look at a funnel drop-off, I am not just asking "where are users falling off?" I am asking "what behavioral mechanism could explain this drop-off?" Is this a choice overload problem? A loss aversion problem? A trust gap? A cognitive load issue? Each of those hypotheses suggests different interventions to test, and the analyst who can articulate them earns a seat at the decision table that a purely numbers-focused analyst does not.
This is not about being a behavioral economist. It is about reading the foundational books — Kahneman, Thaler, Cialdini — well enough to have working hypotheses about why users do what they do. The analytics career ceiling is much higher for analysts who can connect numbers to mechanisms than for analysts who only report numbers.
Presenting to Executives
The skill that separates senior analysts from junior ones, in my experience, is presenting to executives. Not the technical side. The communication side.
Executives have specific cognitive habits. They read the first slide and the last slide. They skim the middle. They want a clear recommendation, a clear risk, and a clear expected value. They are allergic to jargon and impatient with nuance that does not affect the decision.
Here is the structure that has worked for me:
Slide 1: The decision. "We should ship the variant" or "We should kill this initiative" or "We should reallocate budget from X to Y." No build-up. The recommendation is the first thing they see.
Slide 2: The expected value. "If we ship the variant, we project $X in annualized revenue with Y% confidence. Here is the range of outcomes." Dollars first, confidence second. The arithmetic behind a projection like this is sketched after the slide outline.
Slide 3: The risk. What could go wrong? What is the downside if the projection is too optimistic? How are we mitigating?
Slide 4: The evidence. This is where the chart lives. One clear chart that shows the pattern, with a one-sentence caption explaining what they are looking at. Not four charts. One.
Appendix: The methodology. For anyone who wants to verify the numbers, the full breakdown is here. Most executives will never open it. That is fine. Having it there earns trust.
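The arithmetic behind slide 2 does not need to be fancy. Here is a minimal sketch with entirely hypothetical inputs: it converts a measured conversion-rate lift and its interval into an annualized revenue range, and it assumes the lift persists for the full year, which is exactly the assumption the post-test validation work is meant to check.

```python
def annualized_revenue_range(weekly_visitors: int,
                             baseline_conversion: float,
                             avg_order_value: float,
                             lift_low: float,
                             lift_point: float,
                             lift_high: float) -> dict:
    """Translate a conversion-rate lift (with its interval) into annual dollars."""
    baseline_annual = weekly_visitors * 52 * baseline_conversion * avg_order_value
    return {
        "low": baseline_annual * lift_low,
        "point": baseline_annual * lift_point,
        "high": baseline_annual * lift_high,
    }

# Hypothetical inputs: 100k weekly visitors, 3% baseline conversion,
# $80 average order value, and a measured lift of 4% (interval: 1% to 7%).
projection = annualized_revenue_range(100_000, 0.03, 80.0, 0.01, 0.04, 0.07)
print({k: f"${v:,.0f}" for k, v in projection.items()})
```

Presenting the low, point, and high figures together is what makes "here is the range of outcomes" a real statement instead of a single optimistic number.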
This format is almost the opposite of how analysts are trained to think. Analysts want to show the work. Executives want the answer. The translation layer is the analyst's real deliverable.
FAQ
How do I push back on requests that feel like busy work?
Ask "what decision will this inform?" If the requester cannot answer, help them reframe. Sometimes the request is genuinely busy work and saying no is appropriate. Sometimes the request is a symptom of a more important underlying question the requester has not articulated. Your job is to find out which.
What skills should I prioritize learning as a data analyst?
SQL and analytical thinking are table stakes. Beyond those, prioritize: the business domain you work in, stakeholder communication, experimentation fundamentals, and at least enough behavioral economics to generate hypotheses. These skills compound.
How do I get taken seriously by senior leadership?
Start with the decisions, not the dashboards. Every time you bring a piece of analysis to leadership, lead with the recommendation and the expected value. Over time, they will start asking for your opinion before they ask for the numbers, which is the shift you want.
How do I handle cases where the data says one thing and leadership believes another?
Carefully. Do not frame it as the data being right and leadership being wrong. Frame it as "here is what the data shows. I am interpreting it as X. If I am missing context that would change the interpretation, I want to know." That opens a conversation instead of starting a fight. Sometimes leadership has context you are missing. Sometimes they do not, and the data changes their mind.
Become a Decision-Making Partner
If your work as a data analyst feels like an endless queue of report requests, the problem is not the queue. It is the positioning. The analysts who earn influence are the ones who bring decisions, not just numbers, to the table.
I built GrowthLayer specifically to bridge the gap between analytics and experimentation — a shared workspace where analysts and experimentation leads can collaborate on hypothesis testing without having to reconcile different tools and conventions.
If you are hiring analysts who can move from reporting to decision-influence, or building those skills yourself, explore open roles on Jobsolv.
Or book a consultation and I will help you develop the storytelling and decision-influence skills that turn an analyst into a strategic partner.