The Solution Addiction

Picture a typical growth team meeting. Someone says, "Let us test a new hero banner." Someone else suggests, "What about making the CTA button bigger?" A third person proposes, "We should test adding social proof above the fold."

Notice what is missing. Nobody asked what problem the hero banner is supposed to solve. Nobody identified why users are not clicking the existing CTA. Nobody checked whether the absence of social proof is actually what is preventing conversions.

This is solution-first testing. It starts with a change and works backward to justify it. It feels productive because you are shipping experiments. But it produces a win rate somewhere around one in five — and often lower.

Problem-first testing inverts the approach. It starts with a measurable user behavior problem, investigates the root cause, and only then designs an experiment to address it. This approach consistently produces win rates closer to two in five, sometimes higher.

The difference is not luck. It is the difference between treating symptoms and treating causes.

Why Solution-First Testing Fails

The Base Rate Problem

Most changes to a product do not meaningfully affect user behavior. This is not pessimism — it is math. The number of possible changes is enormous. The number of changes that address the specific friction point preventing a specific behavior is small.

When you test a solution without understanding the problem, you are sampling randomly from the enormous space of possible changes. Your probability of hitting a solution that addresses the actual bottleneck is low.

When you start with the problem, you narrow the solution space dramatically. You are no longer asking "does this random change help?" You are asking "does this specific intervention address this specific friction?"
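A back-of-the-envelope calculation makes the base rate concrete. The numbers below are illustrative assumptions, not measurements; the point is how much diagnosis shrinks the denominator.

```python
# Illustrative assumptions only; the exact counts are made up. The ratio is the point.
possible_changes = 1_000     # everything you could plausibly test on the page
hit_the_bottleneck = 50      # changes that actually touch the real friction point

# Solution-first: picking more or less at random from the full space.
print(f"Solution-first hit rate: {hit_the_bottleneck / possible_changes:.0%}")   # 5%

# Problem-first: diagnosis narrows the field to candidates aimed at the
# identified friction, say 100 ideas of which 40 genuinely address it.
narrowed_pool, effective_in_pool = 100, 40
print(f"Problem-first hit rate: {effective_in_pool / narrowed_pool:.0%}")        # 40%
```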

The Cognitive Bias Trap

Solution-first testing is driven by cognitive biases that feel like insight:

Availability bias: "I saw a competitor do this, so it must work." What works for one product may not work for yours. The user context, market position, and product maturity are all different.

Anchoring: "Our conversion rate is low compared to the industry benchmark, so we need to change the conversion page." The benchmark is an average across products with different audiences, price points, and value propositions. It tells you nothing about what specifically needs to change.

Bandwagon effect: "Everyone is testing micro-copy on their CTAs, so we should too." Popular test categories are popular because they are easy to implement, not because they are high-impact.

Problem-first testing forces you past these biases by grounding every experiment in data about actual user behavior.

The Compound Learning Problem

Solution-first tests produce binary outcomes: the change won or it did not. When a solution-first test loses, you learn almost nothing. You know that one specific change did not work, but you do not know why, and you do not know what would work instead.

Problem-first tests produce learning regardless of outcome. If the experiment wins, you confirm that the identified problem was real and the intervention was effective. If it loses, you either learn that the problem was not what you thought (valuable) or that the intervention was wrong for the right problem (also valuable).

Over quarters, this difference in learning velocity is the real competitive advantage. Teams using problem-first testing develop an increasingly accurate model of their users. Teams using solution-first testing accumulate a list of things that did not work.

The Problem-First Process

Step 1: Identify the Behavior Gap

Start with data. Look for places where user behavior diverges from the desired path:

  • Funnel drop-off points with steep declines
  • Pages with high bounce rates relative to similar pages
  • Features with low adoption despite high visibility
  • User segments that convert at significantly lower rates than others

The behavior gap is the difference between what users are doing and what you want them to do. Quantify it. If the gap is small, the opportunity is small — move on to a bigger one.
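A minimal sketch of how to quantify the gap from funnel data, assuming you can export step-level user counts from your analytics tool (the step names and numbers below are hypothetical):

```python
# Hypothetical step counts exported from an analytics tool.
funnel = [
    ("landing_page",    52_000),
    ("pricing_page",    18_400),
    ("signup_started",   6_100),
    ("signup_completed", 4_900),
    ("activated",        2_300),
]

for (step, users), (next_step, next_users) in zip(funnel, funnel[1:]):
    drop_off = 1 - next_users / users
    print(f"{step:>16} -> {next_step:<16} drop-off: {drop_off:5.1%}")
```

The steepest relative drop is usually the behavior gap worth investigating first.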

Step 2: Investigate the Root Cause

This is the step that solution-first testing skips entirely. Before designing an experiment, understand why the behavior gap exists.

Investigation methods:

Quantitative analysis: Segment the data. Do all users drop off equally, or is it concentrated in specific segments? When do they drop off — at the top of the page, after scrolling, after a specific interaction?

Session recordings and heatmaps: Watch what users actually do. Where do they hesitate? What do they click on that is not clickable? Where do they scroll past content you expected them to read?

User interviews and surveys: Ask users what happened. "What were you looking for when you visited this page?" "What made you decide not to continue?" "What information would have helped you make a decision?"

Behavioral frameworks: Apply cognitive science to interpret the data. Is the drop-off caused by choice overload? Information scarcity? Unclear value proposition? Social proof deficit? Status quo bias?

The output of this step is a causal hypothesis: "Users drop off at this point because of this specific friction, which is caused by this specific aspect of the experience."
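As a sketch of the quantitative part of this step, assuming a per-visit export with a segment label and a flag for whether the user continued (the file and column names are hypothetical):

```python
import pandas as pd

# Hypothetical export: one row per visit to the drop-off page.
# Columns assumed: user_id, segment, device, continued (True/False).
visits = pd.read_csv("pricing_page_visits.csv")

# Is the drop-off uniform, or concentrated in specific segments?
by_segment = (
    visits.groupby(["segment", "device"])["continued"]
          .agg(visitors="size", continue_rate="mean")
          .sort_values("continue_rate")
)
print(by_segment)
```

A drop-off concentrated in one segment points toward a different root cause than a uniform one.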

Step 3: Design the Intervention

Now — and only now — design the experiment. The treatment should directly address the identified root cause.

If the root cause is cognitive overload on a pricing page, the treatment might be simplifying the comparison or highlighting the recommended option. If the root cause is trust deficit, the treatment might be adding credibility signals. If the root cause is unclear value proposition, the treatment might be restructuring the messaging hierarchy.

Notice how the treatment follows logically from the diagnosis. This is not "let us try something and see." This is "we identified a specific problem and are testing a specific solution."

Step 4: Predict the Mechanism

Before launching, articulate the mechanism by which the treatment should affect behavior:

"By reducing the number of pricing tiers displayed from five to three, we expect to decrease choice overload. This should manifest as increased time on the pricing page (users engage rather than bounce) and increased click-through to the signup flow. We predict a measurable increase in pricing-page-to-trial conversion."

This prediction serves two purposes:

  1. It makes the hypothesis falsifiable — if the mechanism is wrong, the specific behavioral changes will not appear
  2. It creates a framework for interpreting the results — you are not just looking at whether the metric moved, you are checking whether it moved for the predicted reason
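One lightweight way to make the prediction concrete is to write it down as data before launch, so the readout is checked against the mechanism rather than rationalized afterward. A sketch using the pricing-tier example above (the metric names are illustrative):

```python
# Written down before launch: the treatment, the claimed mechanism, and the
# specific behavioral signals that should move if the mechanism is real.
prediction = {
    "treatment": "show 3 pricing tiers instead of 5",
    "mechanism": "reduced choice overload",
    "signals": [
        {"name": "pricing_to_trial_conversion", "direction": "up"},
        {"name": "time_on_pricing_page",        "direction": "up"},
        {"name": "pricing_page_bounce_rate",    "direction": "down"},
    ],
}

def supports_mechanism(observed_changes: dict) -> bool:
    """True only if every predicted signal moved in the predicted direction.

    observed_changes maps metric name -> relative change, e.g. +0.04 or -0.02.
    """
    return all(
        (observed_changes[s["name"]] > 0) == (s["direction"] == "up")
        for s in prediction["signals"]
    )
```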

Step 5: Run and Learn

Run the experiment with proper statistical rigor. When the results come in, evaluate them against the predicted mechanism:

  • Did the metric move in the predicted direction?
  • Did the intermediate behaviors change as expected?
  • Does the pattern of results support the hypothesized mechanism?

If the answer to all three is yes, you have a validated understanding of user behavior that you can apply to future experiments. If the answer to any is no, you have a refined understanding of the problem that informs the next experiment.

Either way, you learned something useful. This is the fundamental advantage of problem-first testing.
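For the first of those questions, "proper statistical rigor" can be as simple as a two-proportion test on the primary metric. A minimal sketch using statsmodels, with hypothetical counts:

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results: conversions and visitors for control vs. treatment.
conversions = [412, 498]
visitors = [10_240, 10_185]

stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
control_rate, treatment_rate = (c / n for c, n in zip(conversions, visitors))
print(f"control {control_rate:.2%} vs treatment {treatment_rate:.2%}, p = {p_value:.4f}")
```

The intermediate behaviors named in the prediction get the same treatment, which is what separates "the metric moved" from "the metric moved for the predicted reason."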

Problem-First Testing in Practice

Example: Checkout Abandonment

Solution-first approach: "Let us test a simplified checkout page with fewer form fields."

Problem-first approach:

  1. Data shows high abandonment at the payment step specifically
  2. Session recordings reveal users hesitating at the total price — they scroll up and down comparing items
  3. User interviews reveal surprise at shipping costs that were not shown earlier
  4. Root cause: unexpected cost revelation at checkout triggers loss aversion
  5. Treatment: show estimated total including shipping on the cart page before checkout
  6. Mechanism: eliminating the surprise at checkout should reduce the loss aversion trigger and increase completion

The solution-first approach might work by accident if fewer fields happen to make the price less prominent. But it does not address the actual problem. The problem-first approach targets the root cause directly.

Example: Low Feature Adoption

Solution-first approach: "Let us add a tooltip that highlights the feature."

Problem-first approach:

  1. Data shows the feature is used by less than a fifth of active users despite being available to all
  2. Segmentation reveals that users who discover the feature through a specific workflow adopt it at high rates, while users who encounter it through navigation almost never try it
  3. The workflow contextualizes the feature — users understand why they need it. The navigation just shows it exists without context.
  4. Root cause: insufficient context about when and why the feature is valuable
  5. Treatment: trigger a contextual prompt when users perform the workflow that would benefit from the feature
  6. Mechanism: providing the feature in context, at the moment of need, should increase adoption because users immediately understand the value

A tooltip would not solve this problem because the issue is not awareness — it is relevance.

Transitioning From Solution-First to Problem-First

If your team is accustomed to solution-first testing, the transition takes time. Start with these changes:

Require a problem statement before any test idea is accepted into the backlog. The format: "Users are doing [current behavior] instead of [desired behavior], and data suggests this is because [root cause]."

Add an investigation step to your experiment workflow. Before any test can be designed, the team must present data supporting the root cause hypothesis.

Celebrate learning, not just wins. When a problem-first test loses, the team still gained validated knowledge about user behavior. Recognize this as progress.

Track your win rate over time. As the proportion of problem-first tests increases, the win rate should increase as well. This data builds the case for the approach.
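A simple way to track this, assuming you keep a log of concluded experiments with their approach and outcome (the file and column names are hypothetical):

```python
import pandas as pd

# Hypothetical log: one row per concluded experiment.
# Columns assumed: quarter, approach ("problem-first" / "solution-first"), won (0 or 1).
tests = pd.read_csv("experiment_log.csv")

# Win rate per quarter, split by approach.
print(tests.pivot_table(index="quarter", columns="approach", values="won", aggfunc="mean"))
```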

FAQ

Does problem-first testing take longer than solution-first?

The investigation step adds one to two weeks before each experiment. But because problem-first tests have higher win rates and produce more learning, the total time to impact is usually shorter. As a rough illustration: if each test runs for two weeks, a one-in-five win rate means about ten weeks of testing before the first win, while a two-in-five win rate gets there in roughly nine weeks even after adding a week and a half of investigation per test.

What if we do not have enough data to investigate root causes?

Start with qualitative methods — user interviews, session recordings, support ticket analysis. These are available to every team and often reveal root causes that quantitative data alone misses.

Can we still run quick tests based on intuition?

Yes. Reserve a portion of your testing capacity for exploratory tests. But track the win rates separately. Over time, the data will show which approach produces better results.

How do I convince stakeholders to invest in investigation before testing?

Frame it as risk reduction. Show them the current win rate and the cost of running losing tests. Then explain how investigation increases the probability of each test succeeding.

Written by Atticus Li

Revenue & experimentation leader — behavioral economics, CRO, and AI. CXL & Mindworx certified. $30M+ in verified impact.