A CTA's click rate is not its conversion contribution. Most CTA test reports show one and call it the other.
TL;DR
- The standard CTA test report shows aggregate clicks and aggregate conversions. That is enough to call a test "won" on the topline. It is not enough to know whether the win is real.
- The missing column is click-to-conversion ratio — the percentage of CTA clicks that complete the immediate next funnel step, computed per CTA placement.
- This single metric catches three failure modes that the aggregate hides: wrong-intent clicks, friction injection at the destination, and cannibalization of higher-converting CTAs.
- Adding it to your test report takes one analytics query. Skipping it is how programs ship cosmetic wins for years.
The pattern
The two reports below describe the same test. They imply different decisions.
| What you'd see | Aggregate report (what most teams see) | Per-CTA breakdown (what actually happened) |
| ------------------------- | -------------------------------------- | -------------------------------------------------------------------- |
| Topline metric | +1.07% page-entry, +0.96% downstream | New CTA: 7,000 clicks → 426 page-entry (6%) |
| Existing CTAs | (not shown) | 5,000 clicks → 1,250 page-entry (25%) |
| What the data implies | Variant directionally positive — ship | Most "lift" is clicks redirected from positions converting 4× better |
| Decision | Ship the new CTA | Do not ship — funnel composition has degraded |
The aggregate is technically positive. The breakdown shows the lift is mostly cannibalization — clicks that were going to convert anyway, just routed through a less-efficient destination.
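The arithmetic of that trade-off is worth making explicit. A minimal sketch, below: the 6% and 25% rates come from the table, but the split of the new CTA's clicks into redirected versus genuinely incremental is a hypothetical knob, because that split is exactly what the aggregate report doesn't tell you.

```python
# Sketch: net effect of a new CTA whose clicks are a mix of redirected
# and incremental traffic. Rates are from the table above; the
# redirected share is a hypothetical parameter, not measured data.

NEW_CTA_RATE = 0.06    # new CTA click-to-page-entry rate
EXISTING_RATE = 0.25   # existing CTAs' click-to-page-entry rate

def net_page_entries(new_cta_clicks: int, redirected_share: float) -> float:
    """Net change in page entries vs. a world without the new CTA."""
    redirected = new_cta_clicks * redirected_share
    incremental = new_cta_clicks - redirected
    # Redirected clicks trade a 25% path for a 6% path (a loss);
    # incremental clicks add entries at 6% (a gain).
    return redirected * (NEW_CTA_RATE - EXISTING_RATE) + incremental * NEW_CTA_RATE

for share in (0.0, 0.3, 0.5, 0.8):
    print(f"redirected share {share:.0%}: {net_page_entries(7000, share):+,.0f} entries")
```

The topline stays positive only while most of the new CTA's clicks are incremental; the per-CTA breakdown is what tells you which regime you are in.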
What the diagnostic actually catches
Three failure modes show up in CTA tests, and aggregate metrics hide all of them. Each maps to a specific signature in the per-placement breakdown.
| Failure mode | What aggregate shows | What per-CTA breakdown shows |
| ----------------------- | ---------------------------------- | ------------------------------------------------------------------------------------------------------ |
| Wrong-intent clicks | Headline lift on click volume | New CTA captures clicks from source pages where audience intent doesn't match destination |
| Friction injection | Click rate up, conversion flat | New CTA has sub-baseline click-to-conversion ratio because of modal/redirect/extra step at destination |
| Cannibalization | Topline lift smaller than expected | Existing CTAs lose click volume to the new CTA; total conversions roughly unchanged |
In every case, the aggregate report is technically correct. The headline number moved on the right side of zero. The team gets a "directional win." The funnel underneath is worse than before the test.
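The three signatures are mechanical enough to check in code. A minimal sketch, assuming per-placement records with hypothetical field names; the 25% wrong-intent cutoff is illustrative, not a standard.

```python
from dataclasses import dataclass

@dataclass
class PlacementStats:
    """Per-placement stats for the new CTA; field names are hypothetical."""
    placement: str
    clicks: int
    completions: int        # immediate-next-step completions
    intent_matches: bool    # from a source-page intent audit

def flag_failure_modes(new_cta: list[PlacementStats],
                       baseline_rate: float,
                       existing_cta_click_delta: int) -> list[str]:
    """Return which of the three failure-mode signatures are present."""
    flags = []
    clicks = sum(p.clicks for p in new_cta)
    rate = sum(p.completions for p in new_cta) / clicks
    wrong_share = sum(p.clicks for p in new_cta if not p.intent_matches) / clicks
    if wrong_share > 0.25:                 # illustrative cutoff
        flags.append("wrong-intent clicks")
    if rate < baseline_rate:               # sub-baseline click-to-conversion
        flags.append("friction injection")
    if existing_cta_click_delta < 0:       # existing CTAs lost click volume
        flags.append("cannibalization")
    return flags
```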
How to compute it
For any CTA test, pull three numbers per arm (control and variant); a minimal computation sketch follows the list:
- CTA clicks — tagged via your analytics, with a parameter for placement (page, position, copy variant). Most teams already track this; what they often don't have is the per-placement breakdown.
- Completions of the immediate next step — page entry events with a referrer matching the CTA, modal completion events, or step-1 completion of a multi-step flow.
- Click-to-conversion rate — completions divided by clicks, computed per arm and per placement.
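In code, the whole computation is a single grouped aggregation. A minimal sketch, assuming a flat event export where each row is one CTA click with a flag for whether the next step completed; the file and column names are illustrative, not a real schema.

```python
import pandas as pd

# One row per CTA click: arm ("control"/"variant"), placement, and a
# 0/1 flag for whether the immediate next step completed.
events = pd.read_csv("cta_clicks.csv")  # columns: arm, placement, completed

report = (
    events.groupby(["arm", "placement"])
          .agg(clicks=("completed", "size"),
               completions=("completed", "sum"))
          .assign(click_to_conversion=lambda d: d["completions"] / d["clicks"])
          .reset_index()
)
print(report.sort_values("click_to_conversion"))
```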
The healthy ratio depends on what comes next. A direct-routing CTA should convert at 50–90%. A modal-mediated CTA: 20–50%. A redirect-through-third-party CTA: 10–30%. The numbers don't matter as much as the comparison between control and variant — and between the new CTA and the existing CTAs on the same page.
A worked example
Earlier this year a sitewide navigation CTA test came across my desk. The pattern: add a "Sign Up" button to the global navigation, opening a modal that prompted for a ZIP code before routing to plan selection. Modeled on a sister-brand pattern that had been live for years.
The aggregate result was directionally positive:
- Plan-selection page entry: +1.07%
- Downstream confirmation: +0.96%
- Sample size: 300k+ sessions per arm
Both within the noise floor of the test. Both on the right side of zero. Pre-test math had projected a 3–5% lift; the actual result came in below the lower bound, but still positive on the topline.
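The "noise floor" claim is easy to sanity-check. A rough sketch, reading the +1.07% as a relative lift and assuming a hypothetical 2% baseline conversion rate (the real baseline isn't shown here):

```python
from math import sqrt

def lift_z(base_rate: float, rel_lift: float, n_per_arm: int) -> float:
    """z-score for an observed relative lift in a two-proportion test."""
    p1, p2 = base_rate, base_rate * (1 + rel_lift)
    se = sqrt(p1 * (1 - p1) / n_per_arm + p2 * (1 - p2) / n_per_arm)
    return (p2 - p1) / se

# Roughly 0.59 under these assumptions, well inside the |z| < 1.96 noise floor.
print(round(lift_z(0.02, 0.0107, 300_000), 2))
```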
The room wanted to ship it. The pattern matched a known winner. The numbers were directionally right.
I pulled the click-to-conversion ratio before signing off.
| Source page | Share of clicks | Click-to-page-entry rate | Audience intent |
| ----------------- | ----------------- | ------------------------ | ------------------------- |
| Support pages | 34.7% | ~5% | log in / pay bill (wrong) |
| Customer Homepage | 15.4% | ~5% | manage account (wrong) |
| Account portal | 12.4% | ~5% | account access (wrong) |
| Prospect Homepage | 17.5% | ~22% | shop for plan (right) |
| Total | ~7,000 clicks | ~6% aggregate | mixed |
62.5% of clicks came from wrong-intent pages — support and account-management surfaces where users were reading "Sign Up" as "log in to my account." The new CTA was roughly four times less efficient than the existing CTAs at converting clicks. The +1.07% topline was hiding cannibalization plus wrong-intent click injection. Without the breakdown, this test would have been called a winner and shipped.
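The arithmetic behind those two claims, reproduced from the tables as a sketch (the remaining ~20% of clicks came from pages not broken out above):

```python
# Share of new-CTA clicks by source page, from the breakdown table.
shares = {
    "Support pages": 0.347,      # wrong intent
    "Customer Homepage": 0.154,  # wrong intent
    "Account portal": 0.124,     # wrong intent
    "Prospect Homepage": 0.175,  # right intent
}
wrong = ("Support pages", "Customer Homepage", "Account portal")
print(f"wrong-intent share: {sum(shares[p] for p in wrong):.1%}")  # 62.5%

# Efficiency gap between the existing CTAs (25%) and the new CTA (~6%),
# using the numbers from the first table in this post:
print(f"existing vs. new CTA: {0.25 / 0.06:.1f}x")  # ~4.2x
```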
The decision was do not ship. Iteration backlog: copy change ("Sign Up" → "View Plans"), modal removal, source-page suppression on customer-account surfaces.
What to add to every CTA test report
Three new columns. They take one analytics query each. They are the difference between "we shipped a directional win" and "we shipped a real win we can defend."
| Column | Question it answers |
| ------------------------------------- | --------------------------------------------------------------------------- |
| Click-to-next-step rate (control) | Was the existing path efficient? |
| Click-to-next-step rate (variant) | Did the new CTA convert clicks at a healthy rate? |
| Per-placement breakdown | Are clicks coming from pages where audience intent matches the destination? |
If your test report doesn't have these three, the report is incomplete. If the answers reveal a sub-control click-to-conversion rate, a sub-5% placement, or topline lift larger than the per-CTA data supports, the win is cosmetic. Do not ship.
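That gating rule is concrete enough to encode. A sketch with hypothetical inputs; in particular, "the lift the per-CTA data supports" would come from summing expected conversions across placements, and is left as an input here.

```python
def win_is_real(control_rate: float, variant_rate: float,
                placement_rates: dict[str, float],
                topline_lift: float, supported_lift: float) -> bool:
    """The gating rule above. `supported_lift` is the lift the per-CTA
    breakdown can actually account for (computed upstream)."""
    if variant_rate < control_rate:
        return False   # sub-control click-to-conversion rate
    if any(rate < 0.05 for rate in placement_rates.values()):
        return False   # at least one sub-5% placement
    if topline_lift > supported_lift:
        return False   # topline lift the per-CTA data doesn't support
    return True
```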
Healthy ratios by funnel position (rough benchmarks)
These are starting points, not targets. The right ratio depends on what the destination is.
| CTA type | Click-to-next-step rate |
| ------------------------------------ | ----------------------- |
| Hero CTA → content page (direct) | 70–90% |
| Hero CTA → form (single field) | 40–70% |
| Nav CTA → plan/pricing page (direct) | 60–80% |
| Nav CTA → modal with form | 20–50% |
| Inline CTA → checkout / purchase | 50–80% |
| Sticky mobile CTA → next funnel step | 80–95% |
| Promo banner → category page | 30–60% |
If your CTA converts clicks at less than half the typical rate for its position, the test is failing — even if the topline says won. Investigate the destination friction or the source-page intent mismatch first.
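Encoded as data, the table plus the half-rate rule looks like this; the keys are shorthand labels made up for the rows above, and the "typical floor" is taken as the low end of each range.

```python
# Benchmark ranges from the table above, keyed by shorthand labels.
BENCHMARKS = {
    "hero->content_direct": (0.70, 0.90),
    "hero->form_single":    (0.40, 0.70),
    "nav->pricing_direct":  (0.60, 0.80),
    "nav->modal_form":      (0.20, 0.50),
    "inline->checkout":     (0.50, 0.80),
    "sticky_mobile->next":  (0.80, 0.95),
    "promo->category":      (0.30, 0.60),
}

def below_half_typical(cta_type: str, observed_rate: float) -> bool:
    """The 'less than half the typical rate' rule, using the low end
    of the benchmark range as the typical floor."""
    low, _high = BENCHMARKS[cta_type]
    return observed_rate < low / 2

# Example: a nav-to-modal CTA converting clicks at 8% is failing.
print(below_half_typical("nav->modal_form", 0.08))  # True
```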
Why this happens (the behavioral mechanism)
A click and a conversion answer different questions. A click is a curiosity event — _what is this_. A conversion is a commitment event — _yes, I want this_. The two are correlated when the path between them is well-designed. The correlation breaks when one of three things is true:
- The destination friction is higher than the click promise implied (modal, redirect, extra form field).
- The audience clicking does not have the intent the destination requires (wrong source page, wrong copy framing).
- The new CTA captures clicks that would have come from a higher-converting path (cannibalization).
Most CTA tests focus on visibility — _is the CTA noticeable, is it tappable, is it clicked_. That is the easy half of the problem. The hard half is whether visibility translates to commitment. Click-to-conversion ratio is the metric that catches the hard half.
When to use this diagnostic
Every CTA test, before you sign off. The query takes ten minutes. Skipping it is how programs ship cosmetic wins until someone six months later asks "why did we add this CTA" and nobody can answer.
It matters most for:
- Sitewide CTAs (where source-page intent varies wildly across the site)
- CTAs with intermediate steps (modals, redirects, qualifying forms)
- New CTAs added to surfaces that already have CTAs (cannibalization risk)
- Tests with small topline lifts on large traffic (where aggregate noise easily covers underlying composition shifts)
For tests on isolated landing pages with a single CTA and direct routing, the diagnostic adds less value — there's only one path, one click, one destination. For everything else, it should be standard.
Bottom line
A CTA test report without click-to-conversion ratio per placement is incomplete. The cost of adding it is one analytics query. The cost of skipping it is years of cosmetic wins disguised as funnel growth. The teams that consistently grow conversion are the ones that demand the breakdown before signing off. The teams that keep shipping CTAs that "kind of worked," with nobody able to say what worked, are the ones that skipped the diagnostic.
Add the three columns to your test report template this week. Every CTA test from here on out gets the breakdown before it gets shipped.