I've seen experiments where two analysts look at the same Optimizely results and get different conversion rates. Not because they made a math error — because they had different assumptions about what "conversion rate" means in the context of that experiment.
How Optimizely counts conversions is not obvious, and getting it wrong can make a neutral test look like a winner or make a genuine improvement look flat. Here's exactly how it works.
Visitor-Based vs Session-Based Counting: The Exact Difference
In Optimizely Web Experimentation, conversion rate is calculated at the visitor level by default.
Visitor-based counting:
- Each unique visitor is counted once in the denominator
- If a visitor converts multiple times (e.g., makes 3 purchases in different sessions), they count once as a converting visitor; the CVR uses unique converter status, not purchase count
- CVR = unique converting visitors / unique visitors exposed to variation
Session-based counting (less common in Optimizely Web, more common in Feature Experimentation):
- Each session is counted separately
- The same visitor in 3 sessions counts as 3 sessions in the denominator
- A visitor who converts in 2 of 3 sessions has a session CVR of 67%
The difference matters because returning visitors behave differently than new visitors. If your test runs over multiple weeks, the same visitor might return several times. Visitor-based counting normalizes this: each person counts once. Session-based counting gives frequent returners more weight.
**Pro Tip:** Optimizely Web Experimentation uses visitor-based counting as the default and standard. If you're comparing Optimizely results to Google Analytics (which is session-based by default), expect different numbers even with identical setup. A "3.2% conversion rate" in Optimizely and a "2.8% conversion rate" in GA4 for the same experiment is normal — not a tracking discrepancy.
The Math: How Counting Method Changes Your Reported Rate
Let's make this concrete with a worked example.
Scenario: 1,000 unique visitors see a test over 2 weeks. 200 visitors make at least one purchase. Among those 200, 40 visit again and make a second purchase. Total sessions across all visitors: 1,400 (some visitors return).
Visitor-based CVR:
- Denominator: 1,000 unique visitors
- Numerator: 200 visitors who purchased at least once
- CVR: 200/1,000 = 20.0%
Session-based CVR:
- Denominator: 1,400 sessions
- Numerator: 200 first-purchase sessions + 40 second-purchase sessions = 240 converting sessions
- CVR: 240/1,400 = 17.1%
Same experiment, same underlying behavior, different reported conversion rate. 20.0% vs 17.1% — nearly 3 percentage points apart.
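The two calculations above reduce to a few lines of arithmetic. Here is the same scenario in plain JavaScript (variable names are my own):

```javascript
// Numbers from the worked scenario: 1,000 unique visitors, 1,400 total
// sessions, 200 visitors with a first purchase, 40 of whom purchase again.
const uniqueVisitors = 1000;
const totalSessions = 1400;
const convertingVisitors = 200;        // purchased at least once
const convertingSessions = 200 + 40;   // every purchasing session counts

const visitorCVR = convertingVisitors / uniqueVisitors;  // 0.200
const sessionCVR = convertingSessions / totalSessions;   // ~0.171
```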
Now imagine you're comparing a variant against control, and you don't realize that returning visitors are distributed unevenly between the two groups (which can happen by chance in smaller experiments). The counting methodology interacts with that imbalance and distorts your results.
**Pro Tip:** The visitor is the right unit of analysis for most experiments because you're making decisions about what experience to show a person, not a session. A person who visits 10 times and never buys is a non-converter — they shouldn't count as 10 non-conversions. Use visitor-based counting as your default.
Unique Conversions vs All Conversions: Which to Use When
Within visitor-based counting, there's still a choice about how to handle multiple conversions from the same visitor.
Unique conversions: a visitor counts as converted once, no matter how many times they trigger the conversion event.
Use this for: yes/no conversion questions. "Did this visitor ever add to cart?" "Did this visitor ever complete checkout?" CVR measured this way is a proportion of people.
All conversions (total events): every conversion event counts, even from the same visitor.
Use this for: volume or frequency questions. "How many add-to-cart events happened per visitor?" "How many form submissions?" This is closer to a numeric metric than a pure conversion metric.
Optimizely's default behavior: for conversion metrics (binary events), the rate is calculated based on unique converters. For numeric/revenue metrics, all events contribute to the total value.
The mix-up that causes problems: using an "all events" metric when you want a "unique converter" rate. If a visitor hits your order confirmation page 5 times (due to refreshes, back-button navigation, or a bug), and you're using an all-events metric, they count as 5 conversions instead of 1. Your reported rate can even exceed 100% in degenerate cases.
**Pro Tip:** For page-reach metrics tied to confirmation pages, implement deduplication at the event level. Either use server-side event firing (which you can control) or check for a session flag before calling the Optimizely event push. Accidental double-fires on confirmation pages are one of the most common sources of inflated conversion rates.
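A minimal sketch of that session-flag guard. The helper name and flag-key scheme are my own; `storage` and `push` are injected so the logic is testable, but in the browser you would pass `window.sessionStorage` and a wrapper around `window.optimizely.push`:

```javascript
// Sketch: fire a conversion event at most once per session.
// In the browser, `storage` is window.sessionStorage and `push` wraps
// window.optimizely.push({ type: "event", eventName: ... }).
function fireConversionOnce(eventName, storage, push) {
  const flag = "conv_fired_" + eventName; // arbitrary flag naming scheme
  if (storage.getItem(flag)) {
    return false; // refresh / back-button: already fired this session
  }
  push({ type: "event", eventName: eventName });
  storage.setItem(flag, "1");
  return true;
}
```

Browser usage would look like `fireConversionOnce("purchase", window.sessionStorage, (e) => { window.optimizely = window.optimizely || []; window.optimizely.push(e); })`.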
How Optimizely Handles Users Who See Multiple Variations
Optimizely is designed to show a visitor the same variation every time they encounter the experiment, within the experiment's active window. This is handled by a cookie (or localStorage) that stores the visitor's variation assignment.
The mechanics:
- First visit: visitor is bucketed and assigned to a variation. Assignment stored in the Optimizely bucketing cookie.
- Return visit on same device/browser: cookie is read, same variation served. No re-bucketing.
- Different device: new cookie, potentially different variation assignment.
The cross-device contamination problem: a visitor who interacts with your experiment on mobile and desktop might be counted as two separate visitors. In Optimizely's standard Web Experimentation, there's no cross-device identity resolution by default. You'd need to pass a user ID and enable user-level bucketing to handle this.
For most experiments, cross-device contamination is a minor issue (maybe 5-10% of your users). For experiments where you're specifically testing behavior across a session that involves device switching (e.g., "research on mobile, purchase on desktop"), it's a significant validity concern.
**Pro Tip:** Check your analytics data before a major experiment to understand what percentage of your converting visitors are cross-device shoppers. If it's above 15%, consider whether standard cookie-based bucketing is sufficient for your analysis, or whether you need to build user ID-based bucketing.
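To illustrate why user ID-based bucketing fixes the cross-device problem, here is a toy deterministic assignment: the same stable user ID always hashes to the same variation, regardless of which device's cookies are present. This is illustrative only; Optimizely's actual bucketing differs (its SDKs use a MurmurHash-based scheme with traffic-allocation ranges), and the djb2-style hash here is just a stand-in:

```javascript
// Toy deterministic bucketing: hash (userId, experimentId) to a variation.
// Same user ID -> same variation on every device. Not Optimizely's algorithm.
function bucketUser(userId, experimentId, variations) {
  const key = userId + ":" + experimentId;
  let h = 5381; // djb2-style string hash
  for (let i = 0; i < key.length; i++) {
    h = ((h * 33) ^ key.charCodeAt(i)) >>> 0;
  }
  return variations[h % variations.length];
}
```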
The Recency Bias in Conversion Windows
Optimizely assigns visitors to experiments on first exposure and then tracks their conversions for the duration of the experiment window. This creates a recency bias:
- A visitor bucketed on Day 1 of a 30-day experiment has 30 days to convert
- A visitor bucketed on Day 28 has only 2 days to convert
This means late-entering visitors are systematically underrepresented in conversions. Your reported CVR at the end of the experiment is not the "true" CVR for all visitors — it's an underestimate for visitors who entered near the end.
Practical implication: don't stop experiments too early. The visitors bucketed in the last week haven't had time to complete their purchase cycle. If your typical purchase consideration cycle is 3-5 days, you need at least that much buffer after you stop seeing new traffic enter before making a final call.
The novelty effect: a related bias in the other direction. Visitors who entered early might be converting at higher rates simply because the new experience was novel. As the experiment matures and the novelty wears off, the conversion lift may decrease. Running experiments for at least 2 business cycle periods (usually 2 weeks minimum) helps average this out.
The "Once Per Session" vs "Every Time" Event Firing Options
When you implement custom events in Optimizely, you control how often those events fire. The two common patterns:
Fire every time: every occurrence of the action triggers the event push. Useful for things like "add to cart" (a user might add multiple items), page views, or video plays.
Fire once per session: only the first occurrence in a session triggers the event push; a session flag suppresses repeats. Useful for purchases and other yes/no conversions.
For purchase events, almost always fire once per session. Confirmation pages can be refreshed, users can navigate back. Without a session guard, a single purchase can generate multiple conversion events.
Use sessionStorage (not localStorage) for purchase deduplication flags. sessionStorage clears when the browser tab is closed, so a genuine new purchase in a new session will still fire correctly. localStorage persists indefinitely and could prevent legitimate second purchases from being recorded in future sessions.
**Pro Tip:** The confirmation page is the most dangerous place for double-firing conversion events. Before going live, test what happens when a user: (1) refreshes the confirmation page, (2) uses the back button and then navigates forward again, (3) bookmarks and revisits the confirmation URL. Each of these is a common double-fire scenario.
How to Audit Your Conversion Setup
If you suspect your conversions are being counted incorrectly, here's the diagnostic process:
Step 1: Check the Events page in Optimizely
Go to Settings > Events and verify each event has the correct event type, is attached to the right metric, and has appropriate deduplication settings.
Step 2: Verify event firing in browser DevTools
- Open DevTools > Network
- Filter for `logx.optimizely.com` or `p13nlog.dz.optimizely.com` requests
- Trigger your conversion event
- You should see exactly one network request per conversion (not two or three)
Step 3: Cross-reference with your source of truth
Pull purchase counts from your order management system for the experiment window. Compare to the conversion count in Optimizely. Expect a slight discrepancy (5-10% is normal due to ad blockers, JavaScript errors, and timing). More than 15% discrepancy warrants investigation.
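That comparison is trivial to script. A sketch using the thresholds from this section (the function name and verdict labels are my own):

```javascript
// Relative discrepancy between Optimizely's conversion count and your order
// system's count for the same window. Thresholds from the text: up to ~10%
// is normal tracking loss; above 15% warrants investigation.
function auditDiscrepancy(optimizelyCount, sourceOfTruthCount) {
  const rel = Math.abs(optimizelyCount - sourceOfTruthCount) / sourceOfTruthCount;
  if (rel > 0.15) return { rel, verdict: "investigate" };
  if (rel > 0.10) return { rel, verdict: "borderline" };
  return { rel, verdict: "normal" };
}
```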
Step 4: Check for double-fire scenarios
If your conversion rate looks implausibly high (e.g., a 40% CVR on a site with historical 3% CVR), look for confirmation page refreshes, back-button navigation, and AJAX-based submission handlers that might be firing multiple times.
**Pro Tip:** Test your conversion tracking setup by making a real test purchase (or triggering your conversion event) in a browser with Optimizely's Chrome extension open. The extension will show you in real time whether the event was captured. Do this before every experiment launch — don't assume tracking that worked last month is still working correctly.
Common Bugs That Inflate Conversions
Confirmation page refresh double-fires. User refreshes the order confirmation page, event fires again. Fix: session storage deduplication flag.
Back-button navigation re-fires. User clicks "back" from a thank-you page to the checkout form, then submits again. If your form submission event fires on the POST and the confirmation page, you may get two events. Fix: fire only on confirmation page, deduplicated.
AJAX form submission plus page navigation both trigger. A form submission handler fires an event, then the page navigates to a confirmation URL that also triggers an event. Fix: choose one trigger point, not both.
The SPA re-render bug. In single-page applications, navigating between "pages" (URL changes without full page loads) can re-initialize Optimizely and re-fire pageview-based conversion events. Fix: use route change detection to check whether the conversion should fire or has already fired.
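A sketch of that route-change guard for an SPA. The route path, helper names, and event name are all hypothetical; the point is that conversion firing is keyed off the route and remembered, so a client-side re-render or revisit cannot fire it twice:

```javascript
// Guard conversion firing across SPA route changes: remember which
// conversion-bearing routes have already fired this page lifetime.
const firedConversions = new Set();

function handleRouteChange(path, fireEvent) {
  if (path !== "/order/confirmation") return false; // hypothetical route
  if (firedConversions.has(path)) return false;     // already fired once
  firedConversions.add(path);
  fireEvent("purchase"); // hypothetical event name
  return true;
}
```

Wire `handleRouteChange` into whatever route-change hook your framework exposes (e.g., a router subscription), rather than into component mount, which re-runs on re-renders.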
Testing environment events bleeding into production. Optimizely events fired during QA testing can appear in results if the QA was done with real experiment IDs on production. Fix: use a separate Optimizely project for staging/QA, or use URL whitelisting to prevent production experiment activation on staging URLs.
The Google Tag Manager "All Pages" trigger. If you deploy your Optimizely conversion event via GTM and accidentally attach it to an "All Pages" trigger instead of a specific page or interaction trigger, it fires on every page load. Fix: audit your GTM triggers.
What to Do Next
- Run a conversion accuracy audit on your top 3 experiments right now — compare Optimizely conversion counts against your source-of-truth (order system, CRM, analytics) for the same time window. If the discrepancy is above 15%, investigate before trusting any results from those experiments.
- Add session storage deduplication to any purchase confirmation event that's currently firing based on page load. This is a 5-line code change that eliminates the most common inflation bug.
- Verify your counting methodology matches your stakeholder expectations — make sure when you report "3.2% conversion rate," your audience understands that's visitor-based (each person counted once), not session-based (each visit counted separately).
- Create a pre-launch tracking checklist that includes: event fires exactly once per conversion, event passes correct revenue value, event is not firing in staging, DevTools confirms one network request per conversion event. Run this check before every experiment launch.