Your A/B test is only as good as the research behind it. I'll say that again because most people ignore it: garbage hypotheses produce garbage results, and hypotheses that aren't grounded in real research are almost always garbage.

The teams I've seen with consistently high win rates — 30%, 40%, sometimes higher — all share one trait. They spend more time on research than they do on testing. They don't guess what to test. They know, because multiple data sources pointed them toward the same problem.

Here are the six research methods that form the foundation of every high-impact experimentation program, and how to use each one without wasting time on theater.

Why You Need Multiple Methods

No single research method gives you the full picture. Analytics tells you where people drop off but not why. Surveys tell you what people say they want, which is often different from what they actually do. Heatmaps show you behavior but not intent.

The power comes from triangulation — when your analytics, your session recordings, and your customer surveys all point to the same friction point, you have a hypothesis worth testing. When only one data source suggests a problem, you have a lead worth investigating, not a test worth running.

This is the ResearchXL approach, and it forms Phase 1 of the testing process (/blog/posts/ab-testing-process-research-prioritize-test-analyze) I outlined in a previous article. Let me walk through each method.

Method 1: Heuristic Analysis

What It Is

A heuristic analysis is a structured expert review of your key pages and flows, evaluated against established conversion principles. It's not "look at the page and share your opinions." It's a systematic walkthrough using a specific framework.

The Five Factors

I evaluate every page against five criteria:

  • Relevancy — Does this page match what the visitor expected when they clicked? Does the headline align with the ad or link that brought them here?
  • Clarity — Is it immediately obvious what this page offers and what the visitor should do next? Or do they have to work to figure it out?
  • Value — Is the value proposition compelling? Does the visitor understand why they should choose this over alternatives?
  • Friction — What elements create hesitation, confusion, or extra work? Long forms, unclear pricing, missing trust signals?
  • Distraction — What's pulling attention away from the primary conversion action? Competing CTAs, irrelevant content, visual clutter?

Walk through each key page in your conversion funnel and score every factor. Document specific observations, not vague feelings. "The pricing page buries the comparison table below three paragraphs of marketing copy" is useful. "The pricing page could be better" is not.
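That walkthrough can be as simple as a scorecard. Here's a minimal sketch in Python — the pages, factor scores, and threshold are all hypothetical, and you'd pair each low score with a written observation like the one above:

```python
# Minimal heuristic-analysis scorecard (hypothetical pages and scores).
# Each page is rated 1-5 on the five factors; the lowest scores become
# candidates for deeper investigation with the other research methods.

FACTORS = ["relevancy", "clarity", "value", "friction", "distraction"]

scorecard = {
    "pricing":  {"relevancy": 4, "clarity": 2, "value": 3, "friction": 2, "distraction": 4},
    "checkout": {"relevancy": 5, "clarity": 4, "value": 4, "friction": 1, "distraction": 3},
}

def weakest_spots(scorecard, threshold=2):
    """Return (page, factor, score) tuples at or below the threshold."""
    return [
        (page, factor, scores[factor])
        for page, scores in scorecard.items()
        for factor in FACTORS
        if scores[factor] <= threshold
    ]

for page, factor, score in weakest_spots(scorecard):
    print(f"{page}: {factor} scored {score} — document a specific observation")
```

The threshold is a judgment call; the point is to force a score per factor per page so nothing gets skipped.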

The Important Caveat

Heuristic analysis is expert opinion, not data. Use it to generate hypotheses and identify areas for deeper investigation. Don't use it to validate decisions. An experienced optimizer will spot patterns a beginner will miss, but even experts are wrong regularly. That's why we test.

Method 2: Technical Analysis

The Low-Hanging Fruit

Before you optimize the experience, make sure the experience actually works. You'd be amazed how often I find conversion-killing technical issues that have been live for months.

What to Check

  • Cross-browser testing — Does your checkout work on Safari? Does your form render correctly on Firefox? Use tools like BrowserStack to test systematically, not just on whatever browser happens to be on your laptop.
  • Cross-device testing — Mobile is not just "desktop but smaller." Tap targets, form input behavior, scroll patterns — test all of them on actual devices.
  • Page speed — Every second of load time costs you conversions. Run PageSpeed Insights on your key pages. If your product page takes 4+ seconds on mobile, you found your first "fix it now" item.
  • JavaScript errors — Open Chrome DevTools on your key pages and check the console. Errors there often mean broken functionality for some segment of your users.

Technical fixes rarely need A/B testing. If your checkout is broken on iOS Safari and 30% of your traffic uses iOS Safari, fix it. This is your "just do it" bucket from the prioritization framework (/blog/posts/how-to-prioritize-ab-tests-pxl-framework).

Method 3: Web Analytics Analysis

Finding Where Problems Happen

Analytics is your quantitative foundation. It tells you where users struggle, even if it can't tell you why. The goal at this stage is to identify the pages and steps with the biggest problems.

What to Look For

  • Funnel drop-offs — Map your conversion funnel and find the steps with the highest abandonment. If 60% of users who add to cart abandon at the shipping step, that's where to focus.
  • Exit pages — Which pages are users leaving from? High exit rates on a product page mean something different from high exit rates on a confirmation page.
  • Bounce rates by segment — Overall bounce rate is nearly useless. Bounce rate by traffic source, by device, by landing page — that tells you something actionable.
  • Segment everything — Behavior differs dramatically by device (mobile vs desktop), traffic source (paid vs organic), and user type (new vs returning). An "average" view hides the real story.
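The funnel drop-off math is simple enough to sanity-check by hand. A quick sketch, using made-up step counts — the 60% shipping-step abandonment matches the example above:

```python
# Step-to-step drop-off from funnel counts (made-up numbers).
funnel = [
    ("product_view", 10000),
    ("add_to_cart",   3200),
    ("shipping",      1280),
    ("payment",       1100),
    ("purchase",      1050),
]

def drop_offs(funnel):
    """Return (step, pct_lost_from_previous_step) for each transition."""
    results = []
    for (prev_name, prev_n), (name, n) in zip(funnel, funnel[1:]):
        lost = (prev_n - n) / prev_n * 100
        results.append((name, round(lost, 1)))
    return results

for step, pct in drop_offs(funnel):
    print(f"{pct:5.1f}% of users lost entering {step}")
```

Run this per segment (mobile vs desktop, paid vs organic), not just on the blended totals — the averages hide the real story.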

Validate Your Tracking

Before you trust any of this data, verify that your tracking is actually working. I can't count the number of times I've seen teams making decisions based on broken GA4 implementations. Check your event tracking, verify your conversion goals fire correctly, and make sure your filters aren't excluding legitimate traffic.

Method 4: Mouse-Tracking Analysis

Seeing Behavior Directly

Heatmaps, scroll maps, click maps, and session recordings let you watch how users actually interact with your pages. This is where quantitative data meets observable behavior.

What to Look For

  • Rage clicks — Users clicking repeatedly on something that isn't interactive. This screams frustration and broken expectations.
  • Ignored CTAs — If your primary call-to-action gets fewer clicks than your navigation menu, you have a visibility or relevance problem.
  • Scroll depth — If only 20% of visitors scroll past your hero section, everything below it might as well not exist. Either the content above the fold fails to engage, or the page structure doesn't invite scrolling.
  • Form analytics — Which fields cause the most hesitation? Where do users abandon the form? Field-level analytics are gold for checkout and signup optimization.
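Most mouse-tracking tools flag rage clicks for you, but the logic behind the flag is worth understanding. A sketch with a synthetic click log, using one common (and adjustable) definition — three or more clicks on the same element within two seconds:

```python
# Flagging rage clicks in a click log (synthetic events).
# Definition assumed here: 3+ clicks on the same element within 2 seconds.

def rage_clicks(events, min_clicks=3, window_s=2.0):
    """events: list of (timestamp_s, element_id). Returns flagged element ids."""
    flagged = set()
    by_element = {}
    for ts, el in sorted(events):
        by_element.setdefault(el, []).append(ts)
    for el, times in by_element.items():
        for i in range(len(times) - min_clicks + 1):
            if times[i + min_clicks - 1] - times[i] <= window_s:
                flagged.add(el)
                break
    return flagged

events = [
    (0.0, "#hero-image"), (0.4, "#hero-image"), (0.9, "#hero-image"),  # rapid repeats
    (1.0, "#cta"), (30.0, "#cta"),                                     # normal clicks
]
print(rage_clicks(events))  # elements users hammered in frustration
```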

The Warning

Don't get seduced by pretty visualizations. A heatmap showing lots of red on your hero image is interesting. But if you can't connect that observation to a business problem and a testable hypothesis, it's just a colorful screenshot. Always tie mouse-tracking insights back to the funnel analysis from your analytics data.

Method 5: Qualitative Research

Understanding the Why

Quantitative data tells you what's happening. Qualitative research tells you why. This is where you hear from actual humans about their experience, their hesitations, and what nearly stopped them from converting.

The Key Methods

  • On-site surveys — Short, targeted polls triggered at specific moments. "What almost stopped you from completing this purchase?" after checkout is far more valuable than a generic satisfaction survey.
  • Customer interviews — In-depth conversations with recent customers and people who didn't convert. Ask open-ended questions: "Walk me through your decision process." "What were you comparing us against?" "What almost made you leave?"
  • Post-purchase surveys — What brought them to your site? What convinced them to buy? What nearly stopped them?

How to Ask Good Questions

The quality of your qualitative data depends entirely on the quality of your questions. Two rules:

  1. Open-ended, not closed. "What was confusing about the pricing page?" gives you insight. "Was the pricing page confusing? (yes/no)" gives you a percentage.
  2. Non-leading. "What did you think about the checkout process?" is neutral. "Did you find the checkout process frustrating?" plants a suggestion.

Qualitative research is where you find the language your customers actually use — which often becomes the copy in your winning variations.

Method 6: User Testing

The Most Revealing Method

User testing — watching real people attempt tasks on your site while thinking aloud — is the single most revealing research method. It surfaces problems you'd never find in analytics because you see the confusion, the hesitation, and the workarounds in real time.

How to Do It Well

  • Think-aloud protocol — Ask participants to verbalize their thought process as they navigate. "I'm looking for the pricing... I see this button but I'm not sure if it'll show me prices or sign me up..."
  • Task-based scenarios — Give participants realistic tasks, not vague instructions. "You want to buy a pair of running shoes for under $120. Find a pair and get to checkout." Not "browse the site and tell me what you think."
  • Copy testing — Show users your key pages and ask what they understand, what's unclear, and what they'd expect to happen when they click. This catches messaging problems that behavioral data misses.
  • Recruit representative users — Five users who match your actual audience are worth more than fifty who don't. If your product serves enterprise buyers, don't test with college students.

What to Observe

Watch for confusion points (where they pause or look lost), hesitation (where they hover without clicking), workarounds (when they use the search bar because they couldn't find something in the navigation), and misunderstandings (when they expect a click to do something different from what happens).

Five to eight user tests will typically surface 80% or more of the major usability issues on a given flow. It's the most time-intensive method on this list, but also the one that produces the most actionable insights.

Synthesizing Research Into Hypotheses

The goal of all six methods is not to produce six separate reports. It's to triangulate — to find the problems that multiple methods point to independently.

When your analytics show a 45% drop-off at the shipping step, your session recordings show users hesitating at the shipping cost reveal, and your survey responses include "I was surprised by the shipping price" — you have a strong hypothesis. That's a problem worth testing a solution for.

The output of your research phase should be a prioritized list of problems, not a list of solutions. Solutions come when you sit down to write hypotheses and design test variations. First, make sure you're solving the right problems. Then bring that list to your prioritization framework (/blog/posts/how-to-prioritize-ab-tests-pxl-framework) to decide what to test first.
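The triangulation step itself can be made mechanical. A sketch with hypothetical findings — count how many methods independently flag each problem, and split test candidates from leads:

```python
# Triangulating findings across research methods (hypothetical data).
# A problem flagged independently by 2+ methods is a test candidate;
# a single-source problem is a lead to investigate, not a test to run.
from collections import Counter

findings = {
    "analytics":  {"shipping_cost_shock", "mobile_nav_confusion"},
    "recordings": {"shipping_cost_shock", "form_field_hesitation"},
    "surveys":    {"shipping_cost_shock"},
}

def triangulate(findings, min_sources=2):
    counts = Counter(p for problems in findings.values() for p in problems)
    test_worthy = {p for p, c in counts.items() if c >= min_sources}
    leads = {p for p, c in counts.items() if c < min_sources}
    return test_worthy, leads

test_worthy, leads = triangulate(findings)
print("Test candidates:", sorted(test_worthy))
print("Leads to investigate:", sorted(leads))
```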

The New Analyst Mistake

Running one method and calling it "research." Or worse, doing no research at all and just testing whatever the HiPPO — the Highest Paid Person's Opinion — suggests.

I've watched teams spend an entire quarter testing the CEO's idea to add a video to the homepage while their checkout funnel leaked 70% of users at a single step that nobody bothered to investigate. Don't let organizational politics replace data.

If your test backlog is a list of opinions from stakeholders rather than evidence-backed hypotheses from research, you don't have a research problem. You have a process problem. Go read the testing process article (/blog/posts/ab-testing-process-research-prioritize-test-analyze) and fix your workflow.

Pro Tip

Start with analytics and heuristic analysis — they're the fastest and cheapest methods. You can do both in a day and walk away with a strong initial list of problem areas. Then layer in qualitative methods (surveys, interviews) to understand the "why" behind your quantitative findings. Save user testing for your biggest, most complex questions — it produces the deepest insights but also demands the most time and budget.

And when you're ready to act on what you found, check out how to turn research into test setups (/blog/posts/how-to-set-up-ab-test-hypothesis-implementation) and how to analyze the results (/blog/posts/how-to-analyze-ab-test-results-segmentation) once your tests complete.

Written by Atticus Li

Revenue & experimentation leader — behavioral economics, CRO, and AI. CXL & Mindworx certified. $30M+ in verified impact.