Why We Test Instead of Assume

The following ten results come from a range of industries and business models. Each one violated a widely held assumption about what "should" work in digital optimization. Together, they make a compelling case for intellectual humility and rigorous experimentation.

Every result described here uses ranges and generalized contexts to protect proprietary data. The patterns, however, are real and replicable.

1. Removing the Hero Image Lifted Sign-ups

A subscription service replaced its large hero image with a text-heavy value proposition. The assumption was that the image created emotional connection and visual appeal.

The text-only variant produced a notable lift in sign-ups. The explanation: the hero image was beautiful but ambiguous. Users could not immediately understand what the product did. The text version communicated the value proposition in under two seconds. In utility-driven categories, clarity tends to beat aesthetics.

2. Increasing Price Increased Conversions

A professional services firm tested a higher price point against its established pricing. The more expensive option converted at a higher rate.

This is the price-quality inference at work. In markets where quality is hard to evaluate before purchase (consulting, coaching, specialized software), price serves as a proxy for quality. The lower price actually signaled inferior quality to the target audience. The lesson: your price is not just a number. It is a positioning statement.

3. Adding a Decoy Plan Doubled Premium Upgrades

A SaaS company introduced a third pricing tier that was intentionally unattractive -- priced close to the premium tier but with significantly fewer features. Nobody was expected to choose it.

Premium plan selections roughly doubled. This is the asymmetric dominance effect (also called the decoy effect). When a clearly inferior option is placed near a superior one, the superior option becomes more attractive by comparison. The decoy tier made the premium plan look like an obvious bargain.

4. Hiding the Navigation Improved Conversion

A landing page test removed the global navigation bar. The assumption was that navigation provides reassurance and helps users explore.

Conversion improved meaningfully. The navigation was acting as an escape hatch. Users who were close to converting clicked a nav link instead, leaving the conversion funnel. On focused landing pages, removing navigation keeps attention on the single desired action.

5. Adding a Longer Video Beat the Short Version

A product page tested a ninety-second video against a thirty-second version. The industry assumption was that shorter content performs better because attention spans are declining.

The longer video won. It covered objections that the short version skipped. Users who watched the full video were more informed and more confident in their purchase decision. Short content works for awareness. Longer content works for conversion.

6. Generic Stock Photos Outperformed Custom Photography

A real estate platform tested professional, custom photography against standard stock images. The custom photos were objectively higher quality.

The stock images produced better engagement metrics. The explanation ties to processing fluency and the uncanny valley of commercial photography. The custom photos looked too polished, too commercial, too "advertising." The stock images, while generic, felt more familiar and less manipulative.

7. Removing Urgency Messaging Increased Sales

An online retailer tested removing its "Limited time offer" and countdown timer messaging. The assumption was that urgency drives action.

Sales increased when the urgency was removed. The urgency messaging had crossed the line from motivating to pressuring. Users felt manipulated rather than motivated. This is the reactance effect -- when people feel their freedom to choose is being threatened, they resist the very action they are being pushed toward.

8. A Slower Page Load Improved Engagement

A financial services company tested adding a brief loading animation (roughly two seconds) before displaying personalized results. The assumption was that faster is always better.

Users who saw the loading animation rated the results as more accurate and engaged more deeply with the content. This is the labor illusion -- when users see evidence that work is being done on their behalf, they value the output more. The animation communicated "we are calculating your personalized results" rather than "here is a pre-generated page."

9. Formal Copy Beat Casual Copy

A fintech product tested conversational, casual marketing copy against more formal, professional language. The assumption was that friendly, approachable copy builds rapport.

The formal version converted meaningfully better. In financial services, users want to feel that they are dealing with serious professionals, not friends. The casual copy undermined credibility. The authority principle trumped the likability principle in this context.

10. Showing Competitor Comparisons Increased Trust

A software company tested adding a comparison table that included competitor products (with factual feature comparisons) against a page that focused solely on its own product.

The comparison page converted better. Users appreciated the transparency and felt they could make an informed decision without leaving the site to research alternatives. This leveraged the transparency effect -- being openly honest about the competitive landscape builds trust more effectively than pretending competitors do not exist.

The Common Thread

All ten of these results share a common theme: the teams that discovered them were testing hypotheses, not implementing best practices. They were willing to challenge assumptions that the broader industry treated as settled.

The best experimentation programs operate with a fundamental humility about what they think they know. Every assumption is a testable hypothesis. Every best practice is a starting point, not a destination.

What This Means for Your Program

If you are building or running an experimentation program, these results suggest several actionable principles:

  • Test contrarian hypotheses regularly. Reserve a portion of your testing capacity for ideas that challenge conventional wisdom. These tests often produce the largest gains.
  • Understand the behavioral mechanism. When you get a surprising result, do not just celebrate it. Understand why it worked. The mechanism is what makes the insight transferable to other contexts.
  • Beware of pattern matching. Just because something worked for another company does not mean it will work for you. Context matters enormously.
  • Document your surprises. Build a knowledge base of results that defied expectations. Over time, this becomes your most valuable competitive asset. (A minimal record sketch follows this list.)
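
As a starting point, a surprise log does not need elaborate tooling. Below is a minimal sketch of one possible record structure; the field names are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class SurpriseRecord:
    """One entry in a knowledge base of tests that defied expectations."""
    test_name: str
    hypothesis: str             # what we believed going in
    expected_direction: str     # e.g. "variant loses"
    observed_result: str        # what actually happened, with an effect-size range
    proposed_mechanism: str     # the behavioral explanation, e.g. "reactance"
    contexts_it_may_transfer_to: list[str] = field(default_factory=list)
    date_concluded: date = field(default_factory=date.today)

# Hypothetical entry mirroring result #7 above:
entry = SurpriseRecord(
    test_name="Remove countdown timer",
    hypothesis="Urgency messaging drives action",
    expected_direction="variant (no urgency) loses",
    observed_result="variant won; sales increased",
    proposed_mechanism="reactance: pressure read as manipulation",
    contexts_it_may_transfer_to=["email subject lines", "cart messaging"],
)
```

Whatever format you choose, the mechanism field is the one that makes an entry transferable to other contexts.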

Frequently Asked Questions

How do I get buy-in to test contrarian ideas?

Frame them as low-risk learning opportunities. The cost of running a test is minimal compared to the cost of a wrong assumption applied at scale. Position contrarian tests as investments in organizational knowledge.
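
To make that framing concrete, here is a back-of-envelope sketch comparing the worst-case cost of running a test against the cost of shipping a wrong assumption to all traffic. Every number in it is a hypothetical placeholder; substitute your own revenue, traffic split, and test duration.

```python
# Hypothetical comparison: cost of testing an assumption versus
# the cost of applying a wrong assumption at scale for a year.
annual_revenue = 10_000_000   # revenue flowing through the page per year
assumed_lift = -0.03          # the "best practice" actually hurts by 3%
test_traffic_share = 0.5      # half of traffic sees the variant
test_duration_weeks = 4

# Worst case for the test: the losing variant runs on half of
# traffic for the duration of the test.
test_cost = (annual_revenue * (test_duration_weeks / 52)
             * test_traffic_share * abs(assumed_lift))

# The untested assumption: the loss runs on all traffic, all year.
untested_cost = annual_revenue * abs(assumed_lift)

print(f"Worst-case cost of testing:  ${test_cost:,.0f}")
print(f"Cost of shipping untested:   ${untested_cost:,.0f}")
```

With these placeholder numbers, the worst-case test cost is a small fraction of the annual cost of the untested assumption, which is the core of the buy-in argument.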

Should I apply these specific results to my business?

Not directly. Each result was specific to a particular context, audience, and product. What you should take away is the methodology: test your assumptions, especially the ones that feel most obvious.

How often do contrarian hypotheses actually win?

In our experience, contrarian hypotheses win at roughly the same rate as conventional ones -- somewhere around one in three to one in five tests produces a statistically significant positive result. The difference is that contrarian wins tend to be larger in magnitude.
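
For readers who want to see what "statistically significant" means in practice, here is a minimal sketch of a two-proportion z-test, a standard way to compare conversion rates between control and variant. The counts are hypothetical, and a production analysis would also handle sample-size planning, peeking, and multiple comparisons, which this sketch does not.

```python
from math import sqrt, erfc

def two_proportion_z_test(conv_a: int, n_a: int,
                          conv_b: int, n_b: int) -> tuple[float, float]:
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both arms convert equally.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value via the normal CDF
    return z, p_value

# Hypothetical counts: control converts 500/10,000; variant 580/10,000.
z, p = two_proportion_z_test(500, 10_000, 580, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # here p is about 0.012, below 0.05
```

At a 0.05 threshold, this hypothetical variant would count as one of the "one in three to one in five" wins described above.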

What is the best way to generate contrarian test ideas?

Start with your strongest beliefs about your users and ask "what if the opposite were true?" Review your past test results for patterns. Talk to your support and sales teams about counterintuitive customer behavior.

Written by Atticus Li

Revenue & experimentation leader — behavioral economics, CRO, and AI. CXL & Mindworx certified. $30M+ in verified impact.