You open a dashboard. Top of funnel: around 10,000 users. Mid funnel: around 4,000. Final step: around 3,500.

Everything looks reasonable. Then you break it down by experience variant — and one row shows 300,000 unique visitors.

Nothing crashed. No error message. Your analytics tool insists the numbers are valid. You assume it's a data issue. It isn't. This is one of the most common and most expensive measurement mistakes in product analytics, and the reason it persists is that nothing about it feels wrong until you try to make a decision from it.

I've watched this exact failure kill launches, justify the wrong redesigns, and send teams chasing segments that don't actually exist. The tooling never flags it. The numbers look clean. And the conclusions are completely, confidently wrong.

The Assumption That Quietly Destroys Your Funnel

When analysts see a "Unique Visitors" metric, they assume three things:

  • Each person is counted once
  • Numbers should never exceed the total user pool
  • Breakdowns simply split the same population into smaller groups

If the totals don't reconcile, the assumption is that something is broken. It isn't. The numbers are mathematically correct — they just aren't answering the question you think they're answering.

What's Actually Happening

You're not looking at one population. You're looking at multiple overlapping populations, each counted independently.

When you break down by an event — like "experience shown" — each row counts unique visitors within that row. The rows are not mutually exclusive. The same user can appear in multiple rows, because the same user can experience multiple variants over their session history.

So you might see:

  • Row A: 8,000 users
  • Row B: 7,500 users
  • Row C: 6,000 users

Total users: 10,000. Sum of rows: 21,500. All of those numbers are technically correct. None of them represent a clean funnel. The analytics tool is telling the truth — but only to the narrow question of "how many unique users met this condition?"

The problem is that nobody asks that question. They ask "what fraction of my users saw this variant?" — which is a completely different question, and the one the tool can't answer without user-level deduplication you have to do yourself.
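The mechanism is easy to see in a few lines. This is a minimal sketch with a hypothetical event log (the user IDs and variant names are invented for illustration): each breakdown row deduplicates within itself, so the row sums can exceed the true population.

```python
# Hypothetical event log: (user_id, variant_shown).
# The same user can see many variants across their session history.
events = [
    ("u1", "A"), ("u1", "B"), ("u2", "A"),
    ("u2", "A"), ("u2", "C"), ("u3", "B"),
]

# What the breakdown table computes: unique users *per row*.
per_row = {}
for user, variant in events:
    per_row.setdefault(variant, set()).add(user)

row_counts = {v: len(users) for v, users in per_row.items()}
total_users = len({user for user, _ in events})

print(row_counts)                # {'A': 2, 'B': 2, 'C': 1}
print(sum(row_counts.values()))  # 5 -- sum of rows
print(total_users)               # 3 -- the actual population
```

Every number above is "correct," and the rows still sum to nearly twice the real user count — which is exactly the pattern from the dashboard.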

Why The Failure Is Structural, Not Statistical

Event scope versus user scope

Your dimension is tied to events — things that happened. Your metric is tied to users — the people who did them. When you combine them in a breakdown table, the tool evaluates each row independently and doesn't deduplicate across rows.

What you get is "unique users per condition," not "unique users overall." Those sound almost identical. They aren't. And until you retrain your eye to notice which one a table is actually showing, you'll keep making decisions on the wrong one.

Repeated exposure is invisible in a breakdown

A single user can trigger the same event many times by reloading the page, navigating back, changing inputs, or hitting multiple states. Each of those re-fires the event — and can requalify the user for a different row in your breakdown.

The table doesn't surface this. It shows the aggregate as if each row were a clean, untouched population. In reality, some percentage of your users have been counted in every single row, and you have no way of knowing that from the numbers themselves.

Breakdowns are not funnels

This is the conceptual error under all the others. A breakdown answers "who ever experienced this condition?" A funnel answers "who progressed step-by-step?" These are fundamentally different questions, and using one to answer the other creates conclusions that feel data-driven but are actually unrelated to reality.

Breakdowns size exposure. Funnels measure movement. You need both. You should never confuse them.

The Non-Obvious Insight

The real problem isn't inflated counts. It's false exclusivity.

Smart analysts assume rows represent distinct groups. In reality, rows represent overlapping exposure histories. This is why clean-looking tables keep leading to wrong decisions — and why "let me just double-check the numbers" almost never surfaces the mistake. The numbers are fine. The unit of analysis is broken.

Once you see this, you can't unsee it. And you start noticing how many decisions across your company have been made on top of tables with the same structural flaw.

How To Actually Measure This

Step 1: Define the funnel at the user level, not the event level

Don't start with events. Start with users:

  • Users who saw the experience
  • Users who completed step 1
  • Users who completed step 2

Each step must be deduplicated and sequential. The moment you leave either of those constraints behind, you're back in breakdown territory.
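A user-level, sequential funnel can be sketched with plain set operations — here with invented user IDs, assuming you can pull the set of users who fired each step's event:

```python
# Hypothetical per-step event logs (user_id lists, with repeat fires).
saw_experience  = ["u1", "u1", "u2", "u3", "u4"]
completed_step1 = ["u1", "u2", "u2", "u5"]   # u5 never saw the experience
completed_step2 = ["u2", "u2"]

# Deduplicate each step, then intersect with the prior step so the
# funnel is sequential: a user counts at step N only if they also
# passed every earlier step.
step0 = set(saw_experience)
step1 = set(completed_step1) & step0          # drops u5
step2 = set(completed_step2) & step1

print(len(step0), len(step1), len(step2))  # 4 2 1
```

The intersections are what make this a funnel rather than a breakdown: u5 completed step 1 but never entered, so they're excluded instead of inflating the middle of the funnel.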

Step 2: Enforce order

Define progression explicitly: step A must happen before step B; step B must happen before step C. Without this, you're counting presence, not movement. A user who saw step 3 but never saw step 1 should not show up in your funnel as "step 3 reached." In a well-designed funnel, they show up as "didn't enter funnel."

Most tools support this. Most teams don't use it.
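If your tool doesn't enforce order, you can do it yourself with first-seen timestamps. A minimal sketch, assuming hypothetical users and a per-user map of when each step was first observed:

```python
# Hypothetical timestamped data: user_id -> {step_name: first_seen_ts}.
first_seen = {
    "u1": {"A": 10, "B": 20, "C": 30},   # clean A -> B -> C
    "u2": {"A": 50, "B": 40},            # saw B *before* A: not real progression
    "u3": {"C": 5},                      # landed on step C without entering
}

def reached(steps, user_times):
    """True only if the user hit every step, in order."""
    ts = -1
    for step in steps:
        if step not in user_times or user_times[step] <= ts:
            return False
        ts = user_times[step]
    return True

funnel = ["A", "B", "C"]
print([u for u, t in first_seen.items() if reached(funnel, t)])  # ['u1']
```

Note that u2 and u3 both "touched" later steps, but neither progressed — presence without order is exactly what this check filters out.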

Step 3: Force mutual exclusivity when analyzing variants

If you're comparing variants, do not segment like this:

  • Users who saw Deposit
  • Users who saw Prepaid

Those two groups overlap. Instead, segment like this:

  • Users who saw Deposit only
  • Users who saw Prepaid only
  • Users who saw both

You'll often find that the "both" segment is the biggest one — and that the real question isn't which variant converts better, but what happens when users cycle through them.
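Mutually exclusive segments fall out of basic set algebra. A sketch with invented exposure sets:

```python
# Hypothetical exposure sets: which users ever saw each variant.
saw_deposit = {"u1", "u2", "u3", "u4"}
saw_prepaid = {"u3", "u4", "u5"}

# Three mutually exclusive rows instead of two overlapping ones.
segments = {
    "deposit_only": saw_deposit - saw_prepaid,
    "prepaid_only": saw_prepaid - saw_deposit,
    "both":         saw_deposit & saw_prepaid,
}
print({name: len(users) for name, users in segments.items()})
# {'deposit_only': 2, 'prepaid_only': 1, 'both': 2}
```

These three rows now partition the exposed population: every user appears in exactly one segment, so the counts are safely additive.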

Step 4: Treat display events as exposure, not volume

Don't use raw event counts to size impact. Convert to "users exposed at least once" and ignore repeat fires. If an event can fire on component render instead of true first exposure, your entire volume baseline is wrong — and every dashboard derived from it will be wrong too. Audit the firing logic before you trust any exposure metric.
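The gap between volume and exposure is one deduplication away. A sketch with a hypothetical log where one user's UI re-renders dozens of times:

```python
# Hypothetical raw fires: the same event re-fires on every re-render.
fires = [("u1", "deposit_shown")] * 40 + [("u2", "deposit_shown")] * 3

event_count   = len(fires)                   # what the raw dashboard shows
users_exposed = len({u for u, _ in fires})   # what you should size impact with

print(event_count, users_exposed)  # 43 2
```

One noisy user accounts for 40 of the 43 fires. Sizing impact by event count would make this look like a large surface; by exposure, it's two people.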

A Realistic Worked Example

A flow shows credit options. The raw data says the event fired 410,000 times across 12,000 users.

The breakdown looks like this: Deposit fires 320,000 times. Prepaid fires 90,000 times.

Most teams read this and conclude "Deposit dominates user exposure." They then optimize the Deposit path and deprioritize Prepaid.

Reality, once you switch to a user-level analysis: 9,000 users saw Deposit at some point. 7,500 saw Prepaid. 6,500 saw both.

The correct interpretation: most users actually saw both options. Deposit isn't dominant at all — it just fires more often per user because of how the UI cycles through states. Users are evaluating both options before deciding.

The decision flips completely. You're not optimizing Deposit anymore. You're optimizing the choice moment where users are comparing both. That's a completely different product problem, and you would never have seen it from the breakdown table.
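The arithmetic behind the flip is simple inclusion-exclusion. Using the article's user-level numbers (the "neither" count is derived here, not stated above):

```python
total_users = 12_000
saw_deposit = 9_000    # saw Deposit at some point
saw_prepaid = 7_500    # saw Prepaid at some point
saw_both    = 6_500    # saw both

deposit_only = saw_deposit - saw_both                              # 2,500
prepaid_only = saw_prepaid - saw_both                              # 1,000
neither      = total_users - (deposit_only + prepaid_only + saw_both)  # 2,000

print(deposit_only, prepaid_only, saw_both, neither)  # 2500 1000 6500 2000
```

Only 2,500 of 12,000 users had a Deposit-only experience. The largest exposed segment by far is "both" — which is why the optimization target moves to the choice moment.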

Failure Modes to Watch For

  • Treating breakdown rows as distinct user groups when they're actually overlapping exposure histories.
  • Assuming a "Unique Visitors" metric guarantees global deduplication — it only guarantees deduplication within a row.
  • Using event counts to estimate user exposure when your event fires multiple times per session.
  • Ignoring repeat event firing from UI state changes, reloads, and back-navigation.
  • Analyzing funnel steps without enforcing order, so users show up at step 3 without ever completing step 1.
  • Optimizing UI based on exposure volume instead of the actual conversion path.
  • Trusting any table where "unspecified" dominates the results. If half your data is unspecified, the model is broken, not the dashboard.

Decision Rules

Use these as the operating rules when anyone hands you a funnel analysis.

If a breakdown sum exceeds total users, treat the rows as overlapping, not additive. This is the single fastest sanity check, and it catches most of these errors within ten seconds.
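The check itself is one comparison — a sketch of a helper you could run against any breakdown table before a meeting:

```python
def rows_overlap(row_user_counts, total_users):
    """Fast sanity check: if the row sums exceed the total population,
    the rows are overlapping exposure histories, not additive segments."""
    return sum(row_user_counts) > total_users

# The dashboard from earlier: rows of 8,000 / 7,500 / 6,000 against 10,000 users.
print(rows_overlap([8_000, 7_500, 6_000], 10_000))  # True -> overlapping rows
```

Note the converse doesn't hold: rows that sum to less than the total can still overlap. This check only catches the blatant cases — which is most of them.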

If your dimension is event-based, assume duplication until proven otherwise. The default is not deduplication. The default is overlap.

If users can experience multiple variants, enforce mutually exclusive segments. Saw A only, saw B only, saw both. Three rows, not two.

If an event can fire more than once per session, do not use it for sizing impact. Either convert to "users exposed at least once" or use a different event.

If you are analyzing progression, do not use breakdown tables. Use a proper funnel with enforced step order.

If "unspecified" dominates a breakdown, the data model is invalid for analysis. Stop, fix the instrumentation, come back later. Building on top of unspecified data is a trap.

When Not To Apply These Rules

When the dimension is user-level by design — one value per user — breakdowns can be safely additive and mutually exclusive. And when you're explicitly analyzing overlap rather than funnel progression, breakdown tables are the right tool. Just label them clearly so nobody misreads them as a funnel later.

The Tradeoffs You Don't See Until They Break You

Flexibility versus interpretability

Flexible tools let you slice data any way you want — but they don't protect you from invalid logic. You gain speed. You lose guarantees of correctness. For early exploration, flexibility wins. For final decisions, interpretability has to win.

Event-level detail versus user-level truth

Event data captures everything, which feels like a superpower, until you realize it exaggerates exposure and hides repetition. You gain granularity, you lose clarity on what users actually experienced. Most dashboards default to event-level because it's easier to instrument. That's a choice, not a law of nature.

Simplicity versus accuracy

Simple tables look clean — and clean tables get trusted. But simple tables often hide duplication behind a single row total. You gain readability, you lose decision accuracy. Clean-looking data is the most dangerous kind when the unit of analysis is wrong.

Three Hidden Assumptions That Break Everything

"One event equals one exposure." False. Events often fire multiple times per user due to re-renders, reloads, and navigation. When this breaks, your exposure is inflated and your funnel sizing is wrong.

"Rows represent distinct users." False. Rows represent conditions, not people. When this breaks, you double-count users and segment comparisons become meaningless.

"Funnel steps are inherently sequential." False. Without enforcement, users can appear at step 3 without ever hitting step 1. When this breaks, your conversion rates become invalid in both directions — some users get counted as progressing who never entered the funnel, and some real completers get missed because the order check fails.

The Thing That Actually Matters

Most dashboards fail not because the data is wrong, but because the unit of analysis is inconsistent. You're mixing events (what happened) with users (who did it). Until everything aligns to a single unit — and that unit is almost always the user — your funnel will look coherent and still be wrong.

This is the part that takes years to internalize. You don't debug a dashboard by checking the numbers. You debug it by checking the unit of analysis. Once the unit is right, the numbers take care of themselves.

What You're Probably Not Seeing

The root cause is usually instrumentation, not analysis. If your events fire on component render instead of on true exposure, everything downstream is distorted — and no amount of dashboard engineering will fix that. The fix lives in the tracking code, not the report.

You're probably also optimizing visible UI changes while the real constraint is backend decisioning or eligibility logic. When users cycle through multiple variants in a session, the actual lever is whatever is choosing which variant to show them, not the variant itself. That lever usually lives two or three layers deeper than the page you're A/B testing.

And your organization probably standardizes reporting formats that encourage these mistakes. Which means bad decisions aren't individual failures — they're systemic. Fixing dashboards without fixing event design just makes incorrect data easier to trust.

The 60-Second Move

Take whatever funnel you were about to act on. Rewrite it using only deduplicated users at each step, with explicit step order. Ignore every breakdown table. If the numbers change, your original funnel was lying to you — and the new one is the decision you should actually be making.

Do this every single time before a meeting where the funnel is going to drive a decision. It takes a minute. It will save you weeks of building the wrong thing.

FAQ

Why can "unique users" exceed totals in breakdowns? Because each row counts users independently, and users can qualify for multiple rows simultaneously. The tool isn't broken — it's answering a different question than the one you meant to ask.

When are breakdown tables safe to use? When you're analyzing overlap or distribution, not sequential funnels. And when the dimension is user-level by design (one value per user), so the rows really are mutually exclusive.

What's the fastest sanity check? Add up the row-level users. If the total exceeds overall users, you're looking at overlapping populations, not a funnel. This catches about 80% of the bad tables in ten seconds flat.

Atticus Li

Leads applied experimentation at NRG Energy. $30M+ in verified revenue impact through behavioral economics and CRO.