Setting up Optimizely correctly in the first two weeks will save you months of bad data. Here's the setup sequence most teams skip.
Most Optimizely onboarding stories follow the same arc: someone installs the snippet, launches a test within 48 hours, and spends the next three months wondering why the results don't match Google Analytics, why the same experiment shows different numbers on different days, or why a "winning" test didn't move the needle after shipping. The setup was wrong from the start.
I've done this setup at multiple organizations, including a full deployment at NRG Energy where we eventually ran 100+ experiments and attributed $30M+ in revenue impact. Getting the foundation right is what makes the rest of the program trustworthy. Here's the sequence.
The Setup Sequence
Step 1: Install the Snippet Correctly
The Optimizely snippet needs to load synchronously in the <head> of every page you plan to test. This is non-negotiable.
Why synchronous matters: Optimizely activates experiments by modifying the DOM before it's painted. If the snippet loads asynchronously — or worse, at the bottom of the page — visitors will see the original version of the page for a fraction of a second before the variation loads. This is called flicker. It's jarring for users, it introduces measurement error, and it means your control group is contaminated with users who saw the variation briefly before it was applied.
Placement: Paste the snippet as the first script in <head>, before any CSS, before any analytics tags, before anything else.
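In HTML terms, the target state looks like this. The snippet URL format is Optimizely's standard CDN path, but the project ID shown is a placeholder — use the exact snippet from your project's implementation settings:

```html
<!DOCTYPE html>
<html>
<head>
  <!-- Optimizely first: synchronous, before CSS and analytics.
       Replace XXXXXXXX with your project's snippet ID. -->
  <script src="https://cdn.optimizely.com/js/XXXXXXXX.js"></script>

  <!-- Everything else loads after -->
  <link rel="stylesheet" href="/styles.css">
  <script async src="/analytics.js"></script>
</head>
<body>
  ...
</body>
</html>
```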
Page speed tradeoff: Yes, a synchronous script in the <head> adds blocking time. Optimizely's snippet is typically 20–80KB depending on your configuration. On most sites the impact is under 100ms — acceptable for running experiments. If page speed is a hard constraint, work with your performance team to set a budget and monitor it. Don't use async as a workaround; it breaks the product.
Pro Tip: After installing, load your site in Chrome DevTools with the Network tab open and confirm the Optimizely snippet appears first in the waterfall, before your CSS files. If it doesn't, your implementation is wrong regardless of what the tag manager says.
Step 2: Run an A/A Test Before Any Real Experiment
Before you trust Optimizely data, you need to verify that the platform is working correctly on your site. The way to do this is an A/A test: two identical experiences, no changes, 50/50 split, run for two weeks.
A correctly functioning A/A test should show:
- No statistically significant difference between the two groups
- Conversion rates that match what you see in Google Analytics (within 10–15% variance, which is normal due to counting methodology differences)
- Consistent traffic allocation over time
If your A/A test shows a statistically significant 30% "lift" on one of two identical variants, your tracking is broken. This is far better to discover now than after you've run six experiments and made shipping decisions based on the results.
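A quick way to sanity-check the A/A numbers yourself is a two-proportion z-test. This is a rough independent check, not a reproduction of Optimizely's stats engine (which uses sequential testing), so treat the threshold as approximate:

```javascript
// Two-proportion z-test: is the gap between two conversion rates
// larger than random chance would explain?
function twoProportionZ(convA, visitorsA, convB, visitorsB) {
  const pA = convA / visitorsA;
  const pB = convB / visitorsB;
  const pooled = (convA + convB) / (visitorsA + visitorsB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / visitorsA + 1 / visitorsB));
  return (pA - pB) / se;
}

// A/A example: 5.0% vs 5.1% conversion on 10,000 visitors each.
const z = twoProportionZ(500, 10000, 510, 10000);
console.log(
  Math.abs(z) < 1.96
    ? "no significant difference (expected for an A/A)"
    : "investigate: tracking may be broken"
);
```

If |z| exceeds 1.96 (roughly the 95% threshold) on identical experiences, something upstream — bucketing, the snippet, or event tracking — deserves a closer look.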
Step 3: Set Up Custom Events Correctly
Custom events are the most common source of ongoing data quality problems in Optimizely. The errors made here compound over time — every test you run on a broken event is a test you can't trust.
The most common mistakes:
Using pageview events as your conversion metric. If your "conversion" is a pageview on the confirmation page, your conversion rate will be inflated every time someone refreshes the confirmation page, navigates back, or the page fires the event multiple times. Use a click event or a custom JavaScript event that fires exactly once per transaction.
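One way to guarantee exactly-once firing is to key the conversion on the order ID and deduplicate before pushing to Optimizely. The `window.optimizely.push({type: "event", ...})` call is Optimizely Web's documented API; the order-ID guard and storage key are illustrative, and the stand-in globals below exist only so the sketch runs outside a browser:

```javascript
// Stand-ins so this sketch runs outside a browser; in production
// these globals already exist.
const window = { optimizely: [] };
const sessionStorage = {
  store: {},
  getItem(k) { return this.store[k] ?? null; },
  setItem(k, v) { this.store[k] = String(v); },
};

// Fire the purchase event exactly once per order, even if the
// confirmation page is reloaded or revisited.
function trackPurchaseOnce(orderId) {
  const key = "tracked-order-" + orderId; // illustrative storage key
  if (sessionStorage.getItem(key)) return false; // already fired: skip
  window.optimizely = window.optimizely || [];
  window.optimizely.push({ type: "event", eventName: "completed-purchase" });
  sessionStorage.setItem(key, "1");
  return true;
}

trackPurchaseOnce("ORD-1001"); // fires
trackPurchaseOnce("ORD-1001"); // page reload: deduped, does not fire
console.log(window.optimizely.length); // → 1
```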
Using one event for multiple actions. If you create an event called "button-click" and fire it from three different buttons across the site, your metric is meaningless. Every event should map to exactly one specific action.
Not specifying "unique conversions" when appropriate. Optimizely can count a conversion every time the event fires, or only once per visitor. For purchase events, you almost always want unique conversions. For engagement metrics like video plays, total conversions may be more relevant.
The naming convention that scales: Use verb-noun format for all events. Examples: clicked-checkout-cta, completed-purchase, submitted-lead-form, viewed-pricing-page. This makes your event library readable without documentation six months from now.
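If you want to enforce the format mechanically, a small normalizer works — this helper is purely illustrative, not part of Optimizely:

```javascript
// Illustrative helper: normalize an ad-hoc label into the
// lowercase, hyphenated verb-noun format described above.
function toEventName(label) {
  return label
    .trim()
    .replace(/([a-z])([A-Z])/g, "$1 $2") // split camelCase words
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")         // non-alphanumerics → hyphens
    .replace(/^-+|-+$/g, "");            // trim stray hyphens
}

console.log(toEventName("Clicked Checkout CTA")); // → clicked-checkout-cta
console.log(toEventName("submittedLeadForm"));    // → submitted-lead-form
```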
Pro Tip: Every time you create a new event, QA it in Optimizely's event inspector immediately. Confirm it fires exactly when expected, doesn't double-fire, and stops firing when the user navigates away. Don't launch a test using an event you haven't verified.
Step 4: Configure Default Metrics and Audiences
Before you launch your first real experiment, set up the reusable components that every test will draw on.
Metrics: Create your primary business metrics as reusable events: primary conversion (purchase, sign-up, lead form submit), revenue per visitor, and any secondary engagement metrics your team cares about. When these are set up correctly once, every future experiment references the same tracking — no re-implementing per test.
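Wiring revenue per visitor means attaching a revenue tag when the purchase event fires. The `push` call and `tags.revenue` field follow Optimizely Web's documented event API, which expects revenue in cents (a $29.99 order is 2999); the browser stand-in below only exists so the sketch runs outside a browser:

```javascript
// Browser stand-in so the sketch runs in Node; in production
// window.optimizely is provided by the snippet.
const window = { optimizely: [] };

// Fire the purchase event with a revenue tag.
// Optimizely expects revenue in cents: $29.99 → 2999.
function trackPurchase(orderTotalDollars) {
  window.optimizely = window.optimizely || [];
  window.optimizely.push({
    type: "event",
    eventName: "completed-purchase",
    tags: { revenue: Math.round(orderTotalDollars * 100) },
  });
}

trackPurchase(29.99);
console.log(window.optimizely[0].tags.revenue); // → 2999
```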
Audiences: Build your standard audience segments before you need them. Minimum recommended set:
- Mobile visitors (device type = mobile or tablet)
- New visitors (no prior sessions)
- Returning visitors
- Logged-in users (if applicable)
- Geographic segments if relevant to your business
The reason to build these before your first test: Optimizely only captures audience data from the point the audience is created. If you want to segment a test result by "new vs. returning visitors" and you didn't create that audience before the test launched, you can't add it retroactively.
Step 5: Establish Your Project Structure and Naming Conventions
The naming decisions you make now determine how maintainable your program is at 50+ experiments per year. I've inherited Optimizely accounts where every experiment was named "Test 1," "Test 2," "Homepage Test," and "Homepage Test FINAL." Auditing those accounts is a multi-week project.
Experiment naming convention: Use [TEAM]-[PAGE]-[ELEMENT]-[DATE]
Examples:
- CRO-CHECKOUT-CTA-2026-04
- PM-HOMEPAGE-HERO-2026-03
- GROWTH-PRICING-LAYOUT-2026-05
This makes every experiment self-describing in the experiment list. You can filter by team, page, or date without opening each one.
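You can even enforce the convention mechanically at review time. A sketch of a validator — the team codes and the regex are assumptions to replace with your own list:

```javascript
// Validate the [TEAM]-[PAGE]-[ELEMENT]-[YYYY-MM] convention.
// The allowed team codes are illustrative; substitute your own.
const TEAMS = ["CRO", "PM", "GROWTH"];
const NAME_PATTERN = /^([A-Z]+)-([A-Z]+)-([A-Z]+)-(\d{4})-(\d{2})$/;

function isValidExperimentName(name) {
  const m = NAME_PATTERN.exec(name);
  return m !== null && TEAMS.includes(m[1]);
}

console.log(isValidExperimentName("CRO-CHECKOUT-CTA-2026-04")); // → true
console.log(isValidExperimentName("Homepage Test FINAL"));      // → false
```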
Audience naming: Use descriptive full names: "Mobile-New-Visitors-US", "Logged-In-Enterprise-Accounts". Never abbreviations.
Event naming: Verb-noun as described above. Keep it lowercase and hyphenated: clicked-add-to-cart, not AddToCart, ATC, or btn-click-1.
Project structure: If you have multiple products or domains, use separate Optimizely projects — not separate campaigns within one project. Cross-domain contamination is a real problem when audience conditions bleed across properties.
The 5 Setup Mistakes That Create Months of Bad Data
I've seen all of these in the wild. Each one is fixable, but none of them are obvious until you know what to look for.
1. Installing the snippet at the bottom of the page
This causes flicker, which means your control group is contaminated. The fix is always to move the snippet to the top of <head>. Any tag manager configuration that loads the Optimizely snippet after page render is wrong.
2. Using pageview events as conversion metrics
A pageview-based conversion rate inflates every time a page reloads, a user navigates back, or the browser prefetches the URL. Use explicit click events or JavaScript custom events that fire exactly once per intentional user action.
3. Not setting up custom audiences before you need them
Optimizely can't retroactively apply audience data to past visits. If you realize mid-test that you want to see mobile-only results and you haven't created that audience, you have to wait until the next test. Build your standard audience library in week one.
4. Using the same event name for multiple different actions
"button-click" is not an event — it's a category. One event, one action. When you merge multiple actions into a single event, your metric becomes uninterpretable and you can't isolate what's actually driving the result.
5. Not QA-ing traffic allocation before launch
Before every experiment goes live, check the actual traffic allocation in Optimizely's debugger. Confirm visitors are being assigned to variations at the expected ratio. Confirm the experiment fires on the correct pages and not on others. The A/A test catches systemic issues; per-experiment QA catches configuration errors. Both are necessary.
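For the allocation check, a rough sanity test helps: given observed assignment counts, confirm the split is within random noise of the configured ratio. This sketch uses a simple normal-approximation tolerance of three standard errors:

```javascript
// Check that an observed two-variation assignment split is consistent
// with the configured ratio (default 50/50), within ~3 standard errors.
function allocationLooksRight(countA, countB, expectedShareA = 0.5) {
  const n = countA + countB;
  const observed = countA / n;
  const se = Math.sqrt(expectedShareA * (1 - expectedShareA) / n);
  return Math.abs(observed - expectedShareA) <= 3 * se;
}

console.log(allocationLooksRight(5030, 4970)); // ≈ 50/50 → true
console.log(allocationLooksRight(6000, 4000)); // 60/40 → false
```

A failing check at meaningful sample sizes usually points at an audience condition or redirect silently filtering one variation.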
Pro Tip: Create a pre-launch checklist and make it non-negotiable. Minimum items: snippet fires on target page, event fires correctly, traffic allocation confirmed in debugger, audience conditions verified, minimum runtime calculated, and stopping criteria documented. If any item isn't checked, the test doesn't launch.
What Comes Next
Once your setup is solid, the next step is understanding the statistical framework your tests run on — because how you read results depends entirely on what the numbers actually mean.
Start with the Optimizely Practitioner Toolkit index to navigate to the right next article based on where you are in your program.
Subscribe: Lean Experiments
I publish practitioner guides like this regularly — setup, statistics, program building, and the mistakes I've watched teams repeat for years. Subscribe to Lean Experiments, my weekly newsletter on revenue-focused experimentation. No fluff, no vendor content.