Most CRO advice you read online is technically correct and practically useless. That is not a criticism of the people giving it. It is a criticism of the gap between where they work and where you work.

The biggest names in conversion optimization are running programs at companies with millions of monthly users. Sample size is never the bottleneck. Traffic is abundant. Political air cover comes from a CEO who already believes in experimentation. When someone with that context writes "always run tests to 95% statistical significance," they are not lying. They are just describing a world most practitioners do not live in.

"When you look at some of these CRO experts and influencers — the big names — they're running programs at places where they have millions of users. They don't have to worry about traffic issues. They don't have real-world constraints. That's very different from most companies." — Atticus Li

The Constraints That Never Get Written About

Every article about experimentation assumes a company that does not exist for most teams. Here is what actually constrains real programs:

Sample size constraints. Most pages on most sites do not get enough traffic to reach significance in under 6-8 weeks. That means by the time you get a clean result, the business has already moved on to a different priority.

Political constraints. A stakeholder wants to ship the variant they prefer regardless of the data. A marketing team runs a campaign mid-test that pollutes the comparison. A product manager refuses to implement a winning variant because it conflicts with their roadmap. None of these problems are in any statistics textbook.

Resource constraints. Your team has a CRO manager, maybe a developer, maybe a designer. You cannot run the same program as a 30-person experimentation org at a Fortune 50 company. You have to make trade-offs the influencers never discuss.

Knowledge constraints. Different stakeholders have different mental models of what testing is. Some think any A/B test is a "real" experiment. Some confuse a two-week campaign performance check with rigorous methodology. You spend half your time educating and half your time aligning.

Data constraints. Every company has its own data dictionary. A "unique user" in Adobe Analytics is not the same as a "unique user" in GA4. Metrics you took for granted at one company are defined differently at the next. Trust nothing about the numbers until you have read the instrumentation.

None of this shows up in the conference talks or the LinkedIn threads. But it is the entire job.

The Academic Rigor Trap

The most dangerous version of this problem is when someone with rigorous academic training tries to run experimentation at a mid-market company.

I watched a UX researcher with an incredible methodological background struggle for months. She wanted to do everything by the book: multi-round user interviews, usability studies, post-test synthesis sessions, formal reporting. Stakeholders kept asking for research insights in a week. She needed a month. She kept refusing to compromise because the shortcuts she was being asked to take would invalidate the research.

The result was predictable. She could not deliver what stakeholders were asking for. And because she could not deliver, the research function lost credibility. Doing things absolutely perfectly ended up meaning not doing anything at all.

"To do things absolutely perfectly ends up meaning not doing anything at all. A lot of companies, a lot of marketers, a lot of people — they just don't have the traffic, the resources, the money, or the backing to do really rigorous testing. It's very different than working at Booking.com or Disney or Netflix." — Atticus Li

This is not a story about her. It is a story about how textbook methods break when they hit real-world constraints. The right answer was not to abandon rigor. It was to find a version of rigor that could survive the timeline. Lean research methods. Rapid prototyping. Smaller but more frequent study rounds. Partial findings delivered early, refined later.

Most companies do not need the best research method. They need the best research method that actually ships.

How to Read CRO Advice Without Getting Burned

When you read advice from big-name CRO practitioners, run it through three questions:

1. What is their traffic baseline? If they are optimizing a checkout flow that gets 2 million visitors a month, their advice about test duration and sample size is not portable to your 8,000-visitor-a-month page. Their statistical purity is a luxury you cannot afford.

2. What is their political context? If they have a CEO personally championing experimentation and a $500k tooling budget, their advice about stakeholder management is not portable to your team where you have to beg for a CRO platform license.

3. What trade-offs are they not mentioning? Every approach has trade-offs. If an article does not explicitly name them, the author is either hiding them or does not know they exist. Both are red flags. Real expertise sounds like "if you do this, you give up that."

I have read hundreds of CRO articles. The ones I trust are the ones that start with "here is where this breaks." Everything else is aspiration dressed up as advice.

A Framework for Constraint-Aware Experimentation

Here is how I approach programs that do not fit the influencer template:

Step 1: Map the real constraints explicitly.

Before you design your program, list every constraint you have: weekly sessions per key page, budget, team size, tooling, political support, stakeholder literacy. Write them down. These are the boundary conditions for every decision you will make.
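
One way to make this concrete is to keep the constraint map as a structured record the whole team can see. The sketch below is illustrative only; the field names and values are mine, not a prescribed schema.

```python
# Illustrative constraint map. Field names and values are placeholders;
# track whatever your team actually cares about.
from dataclasses import dataclass, field

@dataclass
class ProgramConstraints:
    weekly_sessions_by_page: dict[str, int] = field(default_factory=dict)
    annual_tooling_budget_usd: int = 0
    team: list[str] = field(default_factory=list)
    exec_sponsor: bool = False
    stakeholder_literacy: str = "mixed"  # e.g. "low", "mixed", "high"

constraints = ProgramConstraints(
    weekly_sessions_by_page={"/checkout": 3_000, "/pricing": 8_000},
    annual_tooling_budget_usd=20_000,
    team=["CRO manager", "0.5 developer", "0.25 designer"],
    exec_sponsor=False,
    stakeholder_literacy="mixed",
)
```

A spreadsheet works just as well. The point is that the constraints are written down, not assumed.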

Step 2: Calibrate ambition to traffic reality.

Use pre-test duration analysis to figure out what kinds of tests you can actually run. If your checkout page gets 3,000 weekly sessions and your baseline conversion is 4%, you cannot detect a 3% relative lift in 10 weeks, and you cannot detect it in 100. Accept that. Prioritize tests where the minimum detectable effect (MDE) matches what you can actually measure.
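
The pre-test math is simple enough to put in front of stakeholders. Below is a minimal sketch of that duration analysis, assuming a 50/50 split, a two-sided two-proportion z-test at 95% confidence, and 80% power; the traffic and baseline figures are the ones from the example above, and the function name is mine, not a standard library call.

```python
# Minimal pre-test duration sketch (illustrative, not a vendor calculator).
# Assumes a 50/50 split and a two-sided two-proportion z-test at the given
# alpha and power. Inputs below mirror the Step 2 example.
from math import ceil, sqrt
from statistics import NormalDist

def weeks_to_significance(weekly_sessions: float,
                          baseline_rate: float,
                          relative_lift: float,
                          alpha: float = 0.05,
                          power: float = 0.80) -> float:
    """Rough weeks a 50/50 A/B test needs to detect a given relative lift."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 at alpha = 0.05
    z_power = NormalDist().inv_cdf(power)          # 0.84 at 80% power
    # Standard two-proportion sample-size formula, per arm.
    n_per_arm = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                  + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
                 / (p2 - p1) ** 2)
    return 2 * ceil(n_per_arm) / weekly_sessions

# The checkout example from this step: 3,000 weekly sessions, 4% baseline.
print(weeks_to_significance(3_000, 0.04, 0.03))  # 3% relative lift: hundreds of weeks
print(weeks_to_significance(3_000, 0.04, 0.25))  # 25% relative lift: a handful of weeks
```

Run it before the test is designed, not after. If the number of weeks is absurd, that is your cue to test a bigger change or a higher-traffic page instead.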

Step 3: Standardize the minimum viable process.

You will not have time to do everything by the book. Decide which steps are non-negotiable — pre-test calculations, tracking validation, documented hypothesis, post-test analysis — and which can be pragmatic. Write it down so the team has a shared standard.

Step 4: Prioritize decisions over purity.

The goal of the program is to help the business make better decisions, not to produce dissertation-quality research. Every time you face a trade-off between rigor and speed, ask: will the business still have to make this decision either way? If yes, ship the directional answer and move on.

Step 5: Invest in stakeholder education as a line item.

Budget time explicitly for teaching stakeholders what experimentation actually does, what it does not do, and how to read results. This is not overhead. It is the work.

FAQ

Is there ever a point where you can ignore traffic constraints?

Only if you are at a company with enough volume to run clean tests on any important page. That is maybe 5% of companies. Everyone else has to work within constraints, and pretending otherwise leads to paralysis or bad science.

How do you convince a stakeholder that their favorite influencer is wrong?

You do not. You redirect the conversation to your actual constraints. "The advice in that article assumes 100k weekly sessions. We have 4k. Here is the test design that fits our traffic." Never criticize the influencer. Criticize the fit.

What if leadership expects Netflix-level rigor on a startup budget?

Educate them on the trade-off between rigor and speed using their own KPIs. Show them how long it takes to reach significance at current traffic for the lifts they care about. Let the math make the argument. If they still demand purity, they are choosing slowness. That is their call to make once it is explicit.
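
One way to let the math make that argument is to reuse the weeks_to_significance sketch from Step 2 and print the timeline for the lifts leadership cares about. The traffic, baseline, and lift values below are placeholders, not anyone's real numbers.

```python
# Assumes weeks_to_significance() from the Step 2 sketch is in scope.
# All numbers here are illustrative placeholders.
for lift in (0.02, 0.05, 0.10, 0.20):
    weeks = weeks_to_significance(weekly_sessions=5_000,
                                  baseline_rate=0.03,
                                  relative_lift=lift)
    print(f"{lift:>4.0%} relative lift -> ~{weeks:,.0f} weeks to significance")
```

Put that table in front of them and the trade-off argues for itself.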

Are any popular CRO resources genuinely useful for small teams?

Yes. Look for content that explicitly discusses sample size constraints, directional evidence, and stakeholder politics. The good practitioners talk about these constantly. The content mill practitioners never do.

Build a Program That Fits Your Reality

If your experimentation program is struggling to deliver because the advice you are following was not built for your company's constraints, you do not need more advice. You need a framework calibrated to where you actually work.

I built GrowthLayer specifically to help teams operate under realistic constraints — with pre-test calculators that account for low traffic, a test repository that captures directional learnings, and workflows designed for teams without a 30-person experimentation org.

If you are hiring for roles that understand how to run programs under real constraints, or looking for one, explore open CRO and growth positions on Jobsolv.

Or book a consultation and I will help you design a program that matches the company you actually work at — not the one in the case studies.

Atticus Li

Leads applied experimentation at NRG Energy. $30M+ in verified revenue impact through behavioral economics and CRO.