I need to talk about a test that failed spectacularly -- and why I think it's more valuable than most wins I've shipped this year.
The hypothesis was elegant: show customers all available pricing tiers directly on the product card, make the lowest price the most prominent, and watch conversion rates climb. The logic felt bulletproof. Give people more information upfront, lead with your best value, and remove friction from their decision-making process.
Conversion dropped by 5-10%. Over the 33-day test period, the projected revenue impact was a six-figure loss. Not a rounding error. A real, significant, costly miss.
Here's what went wrong -- and what it reveals about how humans actually process pricing information.
The Experiment Setup
A major energy services provider was running a fairly standard acquisition funnel: landing page with plan cards, each showing a single headline price, leading to an enrollment flow. The team wanted to increase transparency by surfacing all three pricing tiers (based on usage levels) directly on the plan card, with the lowest price given visual prominence.
The sample was robust -- nearly 15,000 visitors per variation over 33 days. The primary metric was enrollment starts. This was not a test with ambiguous results; it reached statistical significance decisively. The variation lost.
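For a sense of scale, here is a minimal sketch of the kind of significance check and loss projection involved. Every numeric input below is a hypothetical stand-in -- the only figures from the test itself are the ~15,000 visitors per arm and the 5-10% relative drop -- so treat this as an illustration of the math, not the client's data.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical inputs: 20% baseline enrollment-start rate, 7.5% relative drop.
n = 15_000                                          # visitors per arm (from the article)
control_starts, variation_starts = 3_000, 2_775
z, p = two_proportion_z_test(control_starts, n, variation_starts, n)
print(f"z = {z:.2f}, p = {p:.4f}")                  # z ≈ -3.30, p ≈ 0.001

# Back-of-envelope loss projection, again with an invented input:
assumed_value_per_enrollment = 1_000                # hypothetical $ value per enrollment
lost = control_starts - variation_starts
print(f"projected loss: ${lost * assumed_value_per_enrollment:,}")  # $225,000
```

At these (assumed) rates, a 7.5% relative drop across 30,000 visitors clears conventional significance thresholds easily, which is consistent with the article's description of a decisive result.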
Why "More Information" Is Not Always Better
This is where most optimization teams get the post-mortem wrong. They'll say "the design was cluttered" or "we needed better hierarchy." Those are surface-level diagnoses. The real issue is cognitive -- and it's well-documented in behavioral economics.
Barry Schwartz's Paradox of Choice isn't just a TED talk title. It's a measurable phenomenon. When you present people with multiple price points for what they perceive as a single product, you're not helping them make a decision. You're creating a new decision they didn't ask to make: "Which usage tier am I?"
This is the critical mistake. The customer came to the page to answer one question: "Which plan do I want?" By showing three prices per plan, we forced them to answer a second question first: "How much energy will I use?" That's a question most residential customers genuinely cannot answer with confidence. And when people can't answer a question confidently, they don't guess -- they leave.
The Anchoring Problem Nobody Anticipated
There's a second behavioral mechanism at play that I think the team missed entirely: anchoring gone wrong.
The hypothesis assumed that leading with the lowest price would anchor customers to the most attractive number. In theory, this follows Tversky and Kahneman's anchoring heuristic. But anchoring works when the anchor is the only number in the frame. When you show three prices simultaneously, you don't create one anchor -- you create uncertainty about which price is "real."
Customers likely saw the lowest price, felt attracted to it, then noticed the higher prices and immediately thought: "But I'll probably end up paying this one." The lowest price becomes a bait-and-switch signal rather than a value signal. This triggers what behavioral economists call "betrayal aversion" -- a heightened sensitivity to feeling misled that can be even stronger than standard loss aversion.
What the Control Got Right (Accidentally)
The control -- a single price per plan card -- was doing something psychologically elegant without anyone realizing it. It was reducing the decision to its simplest form: "Do I want this plan at this price, yes or no?"
This is what I call the "binary frame advantage." When you present a single price, the customer's cognitive load is minimal. They're making a go/no-go decision. The moment you add price variability within a single option, you've transformed a Type 1 (fast, intuitive) decision into a Type 2 (slow, deliberative) decision. And in digital commerce, Type 2 thinking is the enemy of conversion.
Daniel Kahneman would call this an unnecessary activation of System 2 processing. I call it the most common mistake in pricing page design.
The Transparency Trap
I see this pattern constantly in my consulting work: well-intentioned teams conflate transparency with conversion optimization. They assume that giving customers all available information is inherently good. It's not. It's neutral at best, and often harmful.
Transparency is a trust-building strategy. It belongs in FAQs, terms pages, and comparison tools where the customer has self-selected into a deeper information-gathering mode. On a product card -- the very top of the consideration funnel -- your job is clarity, not completeness.
There's a meaningful distinction between these two things, and most optimization programs I audit don't make it. Clarity means giving people exactly the information they need to take the next step. Completeness means giving people all available information and hoping they'll sort it out themselves. The first respects the customer's cognitive bandwidth. The second ignores it.
How to Apply This to Your Own Pricing Tests
If you're running a pricing display test, here's the framework I now use with every client:
First, count the decisions. Before you ship a variation, map every discrete decision a customer must make on the page. If your variation adds even one new decision, you need a very strong reason to believe the benefit outweighs the cognitive cost -- see the sketch after this framework for what that audit can look like.
Second, test the anchor in isolation. If you want to lead with a low price, test that as the single displayed price -- don't show it alongside higher alternatives. The anchor only works when it stands alone.
Third, respect the funnel stage. Product cards and landing pages are decision-simplification zones. Save the detailed pricing breakdowns for deeper in the funnel where the customer has already signaled purchase intent.
Fourth, watch for the "helpful harm" pattern. Whenever a stakeholder says "let's give users more information," that's your cue to ask: "Will this make their next action clearer, or just make them feel more informed?" These are different outcomes, and only the first one converts.
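To make the first step concrete, here is a toy version of the decision audit. The page elements and counts are invented for illustration; what matters is the discipline of enumerating every discrete choice each variation imposes and comparing the totals before anything ships.

```python
from dataclasses import dataclass

@dataclass
class PageElement:
    name: str
    decisions: int      # discrete choices this element forces on the visitor

def decision_count(elements):
    """Total number of decisions a page design imposes."""
    return sum(e.decisions for e in elements)

control = [
    PageElement("plan card: take this plan at this price?", 1),
    PageElement("CTA: start enrollment?", 1),
]
variation = control + [
    PageElement("tier display: which usage tier am I?", 1),  # the new decision
]

added = decision_count(variation) - decision_count(control)
print(f"control: {decision_count(control)} decisions, "
      f"variation: {decision_count(variation)} decisions")
if added > 0:
    print(f"Variation adds {added} decision(s) -- demand a strong prior before shipping.")
```

A spreadsheet works just as well as code here; the point is forcing every new decision to be named and justified before launch.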
The Uncomfortable Truth About "Best Practices"
This experiment is a perfect case study in why I'm skeptical of pricing "best practices" that circulate in the CRO community. "Show your best price prominently" sounds great in a blog post. It sounds less great when it costs you six figures in a month.
The lesson isn't that price transparency is bad. The lesson is that every piece of information you add to a decision context has a cognitive cost, and that cost compounds in ways that are invisible until you run the test.
This is why I keep saying: the most dangerous thing in experimentation isn't a failed test. It's the "obvious" improvement that never gets tested because everyone assumed it would work. At least this team tested it. Most would have just shipped it and never known they were bleeding conversions.
What Happened Next
The team reverted to the control and ran a follow-up experiment based on the learning. Instead of showing all prices, they tested a single "starting at" price with a subtle link to "see pricing details" -- giving transparency-seeking customers a self-service path without burdening everyone else with decision complexity.
That test is still running as of this writing, but early signals are promising. The key insight wasn't about pricing display mechanics. It was about understanding that information architecture is choice architecture -- and every choice you add to a page is a tax on conversion.
If you're designing a pricing experience right now, ask yourself: am I making the customer's decision simpler, or am I just making myself feel better about being transparent? The answer to that question is worth more than any best practice listicle.