Why Your "Unique Visitors" Funnel Is Lying To You (And What To Do Instead)
When breakdown rows exceed total users, you're seeing overlapping populations, not a funnel. Here's why dashboards fail and how to fix it.
Metrics, measurement, and data frameworks for growth teams. Funnel analysis, cohort tracking, attribution models, and the analytics that prove experiment ROI to leadership.
48 articles
Visitor-based vs session-based conversion counting, the exact math showing how it changes your reported rate, unique vs all conversions, how to audit your setup, and common bugs that inflate conversions.
The three Optimizely metric types explained for practitioners — when revenue per visitor beats revenue per purchase, the variance problem with revenue metrics, ratio metric gotchas, and practical setup examples.
Why you can only have one primary metric, how to choose it correctly, why revenue per visitor usually beats CVR alone, and how metric selection affects test duration and statistical validity.
"Conversion rate" means completely different things for an ecommerce site vs. SaaS vs. media company. Here's the right metric hierarchy for each revenue model, with Optimizely setup instructions and worked examples.
Optimizely and GA4 will never show identical numbers — and that's expected. This guide explains the 5 root causes of discrepancies, how to audit each one, and when a discrepancy signals a real problem vs. normal variance.
The top-line result is often a lie. This guide shows you how to segment Optimizely results correctly, which segments actually matter, and how to avoid the statistical traps that turn exploratory data into false conclusions.
A practitioner's guide to every element on the Optimizely results page — what it means, what to check first, and how to avoid the most common misreads that lead to bad decisions.
Learn how CUPED (Controlled Experiment Using Pre-Experiment Data) and other variance reduction techniques can cut your A/B test duration by 20-50% without sacrificing statistical rigor.
Learn why aggregate A/B test results hide the truth. Master segmentation analysis, understand heterogeneous treatment effects, and avoid the segment fishing trap.
Time-to-value is the hidden variable that determines whether users activate or abandon. Learn how to measure, optimize, and compress the gap between signup and first meaningful outcome.
Not all A/B tests use the same statistics. Learn which test to use for conversion rates, revenue, count data, and small samples — with a practical decision tree.
Traditional health scores track usage metrics. Behavioral health scores track the psychological patterns that actually predict whether a customer will stay or leave. Learn to build scoring models based on commitment signals, not vanity metrics.
By the time a user cancels, the decision was made weeks ago. This article explores how to build churn prediction models that read behavioral signals early enough to intervene, the difference between voluntary and involuntary churn indicators, and why intervention timing matters more than intervention content.
Explore how decision fatigue and ego depletion affect digital conversion rates throughout the day, and learn simplification strategies that design for cognitively depleted users.
How large language models solve the qualitative research bottleneck: thematic analysis, nuanced sentiment detection, and synthesis of user interviews and surveys at previously impossible speeds, without sacrificing interpretive depth.
Compare Bayesian and Frequentist approaches to A/B testing. Understand the practical differences, when each excels, and why the debate matters less than your fundamentals.
Learn how to properly analyze A/B test results beyond the dashboard green light. Master segmentation, effect size interpretation, and honest reporting that builds credibility.
The engagement metrics that actually predict conversion: scroll depth, interaction rate, and qualified sessions. Why bounce rate tells you almost nothing useful and what to replace it with.
How grouping users by acquisition date reveals retention, engagement, and revenue patterns invisible in aggregate data. A behavioral lens on why cohorts expose the truth that averages conceal.
Data discrepancies between platforms, the observer effect in measurement, and how to build a single source of truth when every tool tells a different story.
Mean reversion in marketing channels, the diminishing returns curve, and when to trust your model vs. your gut. Why the past is an increasingly unreliable guide to the future in growth.
Why tracking plans fail, how naming conventions compound, and the hidden cost of retrofitting analytics. The architectural decisions that determine whether your data is an asset or a liability.
You study users who entered the funnel but ignore those who never started, creating systematically wrong conclusions about where to invest optimization effort.
The fundamental measurement problem in digital marketing and why all models are wrong but some are useful. An exploration of attribution through the lens of behavioral economics and epistemology.
Why pageviews, followers, and time-on-page are seductive but misleading without context. A behavioral science framework for distinguishing metrics that inform decisions from metrics that merely comfort.
Demystify A/B testing statistics — p-values, confidence intervals, Type I and Type II errors, and one-tail vs two-tail tests explained in plain English with business context.
Learn why dashboard metrics alone can mislead your A/B test analysis. Discover how to verify results across multiple data sources, interpret inconclusive outcomes, and avoid premature winner declarations.
Discover how to uncover segment-level insights hidden within overall A/B test results. Learn which segments to analyze, minimum sample size requirements, and how to avoid the data dredging trap.
Understand what p-values really mean in A/B testing, why common interpretations are wrong, and how to use statistical significance correctly for business decisions.
Understand the difference between one-tailed and two-tailed hypothesis tests in A/B testing, when each is appropriate, and the simple conversion rule between them.
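The conversion rule this article refers to is the standard one: for a symmetric test statistic, halve the two-tailed p-value when the observed effect points in the hypothesized direction. A minimal sketch (function name and defaults are illustrative, not from the article):

```python
def one_tailed_p(two_tailed_p, effect_in_hypothesized_direction=True):
    """Convert a two-tailed p-value to its one-tailed equivalent.

    Assumes a symmetric test statistic (e.g. z or t). When the observed
    effect goes the hypothesized way, the one-tailed p is half the
    two-tailed p; when it goes the other way, it is 1 minus that half.
    """
    half = two_tailed_p / 2
    return half if effect_in_hypothesized_direction else 1 - half


# A two-tailed p of 0.08 becomes 0.04 one-tailed -- which is why a
# "non-significant" two-tailed result can look significant one-tailed.
print(one_tailed_p(0.08))
```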
Learn how to interpret confidence intervals and margin of error in A/B test results, why your conversion rate is always an estimate with uncertainty, and how this affects decisions.
Heat maps and session replays are seductive but easy to misinterpret. Learn how to use click maps, scroll maps, and form analytics to generate real insights instead of pretty pictures.
Technical bugs and performance issues silently destroy conversion rates. Learn why cross-browser testing, site speed analysis, and QA are the most profitable and most overlooked areas of optimization.
Quantitative data tells you what is happening on your website. Qualitative research tells you why. Learn how surveys, interviews, and customer feedback generate the insights that drive winning experiments.
Dashboard design inadvertently reinforces confirmation bias by making favorable metrics prominent and burying contradictory signals. Understanding this cognitive trap is essential for teams that want data to drive decisions rather than validate them.
Much of what companies celebrate as customer loyalty is actually the sunk cost fallacy in action. Understanding the difference between genuine loyalty and escalation of commitment changes how we measure retention.
Why stated preferences diverge from revealed preferences. Explore how the Dunning-Kruger Effect distorts self-reported user research and what methods produce more reliable behavioral insights.
Stakeholder pressure doesn't break your metric tree because people are unreasonable. It breaks because the tree isn't tied to a decision anyone is willing to defend. I've been in the room when revenue misses, the board wants answers, and every exec grabs the nearest metric to justify their plan.
Your best sales reps are already on your side. They are your happiest customers, chatting in Slack communities and WhatsApp groups about tools they like. A simple, low-friction startup referral program can turn that goodwill into a repeatable growth channel, even if you have zero growth hires.
You know users are signing up, but only a slice sticks around. Somewhere between “Create account” and “Never churn again” sits your product aha moment. It is not a slogan in a deck. It is a specific action or set of actions in your product that sharply raises the odds of long-term retention and revenue.
A successful customer acquisition engine is a system built on evidence, not guesswork. It starts with a sharp understanding of your ideal customer and a value proposition you can test. This approach connects every marketing action to business outcomes—like user activation and revenue—from day one.
Measuring product market fit requires more than tracking vanity metrics. It's about moving past gut feelings and using data to prove you've built a must-have solution for a specific market.
To calculate customer lifetime value (CLV), multiply a customer's average purchase value by their purchase frequency, then by their average customer lifespan.
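The formula in that summary can be sketched in a few lines (function and parameter names are illustrative, not from the article):

```python
def customer_lifetime_value(avg_purchase_value, purchases_per_year, avg_lifespan_years):
    """CLV = average purchase value x purchases per year x years as a customer."""
    return avg_purchase_value * purchases_per_year * avg_lifespan_years


# e.g. a $50 average order, 4 orders per year, 3-year average lifespan
print(customer_lifetime_value(50, 4, 3))  # 600
```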
A practical framework for designing three-level metric trees tied to real decisions, financial outcomes, and guardrails that withstand stakeholder pressure.
How to define a smallest effect worth shipping (SEWS) using financial impact, a tight measurement window, and disciplined experiment design.
Why revenue per session beats conversion rate for experiment prioritization, and how to size bets before you run a test.
What holdout tests actually prove about incremental revenue, when to use them, and how to defend results under stakeholder pressure.