"Can we A/B test our pricing?" is the question I hear most from product teams. The answer is yes, but probably not the way you're imagining. Naive price testing — showing different users different prices for the same product — is ethically questionable, legally risky, and a PR disaster waiting to happen.

I've helped teams run dozens of pricing experiments over the years. The ones that work don't test the price. They test everything around the price. And the results are often more impactful than a $5 price change would have been anyway.

Why Pricing Tests Are Different From Every Other A/B Test

Most A/B tests have a simple risk profile: if you show someone a worse button color or a clunkier layout, they have a slightly worse experience. Nobody writes an angry tweet about it. Pricing tests are fundamentally different.

The ethical problem is real. Charging User A $49/month and User B $59/month for the identical product is price discrimination. It feels unfair because it is unfair. Users aren't abstract conversion numbers — they're people who talk to each other.

The legal landscape is complicated. Price discrimination laws vary by jurisdiction, and while dynamic pricing is legal in many contexts (airlines, hotels, ride-sharing), the rules for SaaS and digital products are murkier. Robinson-Patman Act considerations apply in the US for B2B pricing. EU consumer protection laws add another layer.

Brand perception risk is the killer. Amazon learned this the hard way in 2000 when they tested different prices for DVDs based on user profiles. Customers compared notes, discovered the discrepancy, and the backlash was immediate. Amazon had to issue public apologies and refund the difference. Two decades later, companies still make this mistake.

The core issue: two users at the same company see different prices, one Slack message later, and your brand trust is destroyed. That's not a hypothetical — I've watched it happen.

What You Should Actually Test

The good news is that everything surrounding the price is fair game, and these elements often matter more than the number itself.

Test the Pricing Page, Not the Price

Your pricing page is a conversion funnel with dozens of variables. The actual dollar amount is just one of them — and rarely the most influential.

Layout and information hierarchy. Which plan appears first? How much whitespace separates tiers? Where does the feature comparison table sit relative to the CTAs? I've seen pricing page redesigns — same exact prices — lift revenue by over 20% just by changing how information was organized. If you want to think through how to set up these experiments properly, the approach is the same as any other page test.

Default plan highlighting. Most pricing pages visually emphasize one plan with a "Most Popular" badge, a different background color, or a slightly larger card. Which plan you highlight has an outsized impact on selection. Test whether highlighting the mid-tier vs. the upper-tier changes your revenue mix.

Value framing. Monthly vs. annual display is a classic test. Show the annual price as a monthly equivalent ("$29/mo billed annually") vs. the total annual cost ("$348/year") vs. the savings framing ("Save 17% with annual billing"). Each frame activates different mental models. Per-seat vs. flat-rate display, per-transaction vs. monthly — the same price expressed differently converts differently.

Discount presentation. "Save $120" vs. "Save 25%" vs. "Get 3 months free" can all describe the same discount. Which frame works best depends on your price point — percentage discounts feel bigger for cheaper items (25% off sounds larger than $10 off a $40 plan), while absolute dollar discounts feel bigger for expensive ones. This is well-documented in behavioral economics, and it's testable in your specific context.
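To make that equivalence concrete, here is a minimal sketch in Python. The $40/month plan and $360 annual price are hypothetical numbers, chosen so that all three frames describe the identical discount:

```python
# Hypothetical example: one annual discount expressed three ways.
# The $40/month plan and $360/year price are illustrative, not real data.

monthly_price = 40.0
full_annual = monthly_price * 12          # $480 if paid month-to-month
discounted_annual = 360.0                 # annual plan price

absolute_saving = full_annual - discounted_annual          # "Save $120"
percent_saving = absolute_saving / full_annual * 100       # "Save 25%"
months_free = absolute_saving / monthly_price              # "Get 3 months free"

print(f"Save ${absolute_saving:.0f}")        # Save $120
print(f"Save {percent_saving:.0f}%")         # Save 25%
print(f"Get {months_free:.0f} months free")  # Get 3 months free
```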

Anchoring Effects That Actually Move Revenue

Anchoring is the single most powerful psychological lever in pricing, and it's perfectly ethical to test.

The decoy effect is when you introduce a deliberately unattractive option that makes your target option look better by comparison. If your goal is to sell the Pro plan at $79/month, adding an Enterprise plan at $299/month with only marginally more features makes $79 feel like a steal. The decoy doesn't need to sell — it needs to make the target option look rational.

Price anchoring through plan ordering. Showing the expensive plan first changes how users perceive the mid-tier. When someone sees $299 first, $79 feels affordable. When they see $29 first, $79 feels expensive. Same price, different perception. Test which order your plans display in.

Round numbers vs. precise numbers. $100 feels like an estimate, a round figure someone picked without much thought. $97 feels calculated, like there's a reason for that specific number. Research suggests round numbers work better for emotional purchases and precise numbers work better for rational ones. But your product is unique — test it and look at the data.

Strikethrough pricing. Showing the original price crossed out next to a discounted price creates an anchor that makes the discount feel tangible. But overuse erodes trust. Test whether strikethrough pricing improves conversion without hurting perceived quality.

Getting at Price Sensitivity Without Direct Price Testing

If you genuinely need to understand willingness-to-pay, there are methods that don't require showing different users different prices.

Van Westendorp Price Sensitivity Meter. Survey users with four questions: at what price is this product too expensive, expensive but still worth considering, a bargain, and too cheap to trust? Plot the cumulative answers as four curves; their intersections give you an acceptable price range. It's not perfect, but combined with behavioral data it's informative.
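Here is a minimal sketch of the method on synthetic survey answers. The response distributions are invented, and the exact crossing conventions vary a bit between sources, but the mechanics are the same:

```python
# A minimal Van Westendorp sketch on synthetic survey responses.
# All numbers are made up for illustration; use your own survey data,
# and note that crossing conventions differ slightly across sources.
import numpy as np

rng = np.random.default_rng(0)
n = 500  # respondents

# Each respondent answers the four price questions (in dollars).
too_cheap     = rng.normal(25, 6, n)    # "too cheap to trust"
bargain       = rng.normal(38, 7, n)    # "a bargain"
expensive     = rng.normal(62, 9, n)    # "expensive but worth considering"
too_expensive = rng.normal(95, 14, n)   # "too expensive"

prices = np.linspace(10, 130, 241)

# Cumulative curves: share of respondents giving each rating at price p.
pct_too_cheap     = np.array([(too_cheap >= p).mean() for p in prices])
pct_not_bargain   = np.array([(bargain < p).mean() for p in prices])
pct_not_expensive = np.array([(expensive > p).mean() for p in prices])
pct_too_expensive = np.array([(too_expensive <= p).mean() for p in prices])

# Point of Marginal Cheapness: "too cheap" crosses "not a bargain".
pmc = prices[np.argmax(pct_not_bargain >= pct_too_cheap)]
# Point of Marginal Expensiveness: "too expensive" crosses "not expensive".
pme = prices[np.argmax(pct_too_expensive >= pct_not_expensive)]

print(f"Acceptable price range: ${pmc:.0f} to ${pme:.0f}")
```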

Conjoint analysis forces respondents to make tradeoffs between features and price points, revealing how much value they place on each feature. This is particularly useful for SaaS companies deciding which features belong in which tier.
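A toy sketch of the idea, using a rating-based design and plain least squares (real studies typically use choice-based designs and more careful estimation); the features, prices, and ratings below are all hypothetical:

```python
# Toy rating-based conjoint: estimate feature part-worths with least squares.
# Profiles and ratings are made up; the two features and two price levels
# are assumptions for illustration only.
import numpy as np

# Columns: intercept, has_sso, has_api, price_79 (vs. baseline price_29)
profiles = np.array([
    [1, 0, 0, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 1, 0, 1],
    [1, 0, 1, 1],
    [1, 1, 1, 1],
])
ratings = np.array([5.1, 6.8, 6.2, 7.9, 3.0, 5.2, 4.4, 6.3])  # avg respondent ratings

partworths, *_ = np.linalg.lstsq(profiles, ratings, rcond=None)
for name, w in zip(["base", "SSO", "API access", "price $79 vs $29"], partworths):
    print(f"{name:>18}: {w:+.2f}")
```

The part-worths tell you roughly how much rating each feature adds and how much the higher price subtracts, which is the input you need when deciding which features justify the next tier.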

Behavioral proxies. Track pricing page drop-off rates, time spent on the page, comparison table engagement, and plan selection patterns. If 80% of users select the cheapest plan, that's a signal — either the cheap plan is too good or the expensive plan isn't differentiated enough. Interpreting these proxies carefully is a critical validity consideration for any pricing research.
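As a rough illustration, here is a sketch of computing two of these proxies from raw pricing page events; the event names and schema are hypothetical and would need mapping to your own analytics:

```python
# A sketch of simple behavioral proxies from pricing page events.
# Event names, columns, and values are hypothetical.
import pandas as pd

events = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 2, 3, 4, 4, 5],
    "event":   ["pricing_view", "exit",
                "pricing_view", "plan_selected", "checkout",
                "pricing_view",
                "pricing_view", "plan_selected",
                "pricing_view"],
    "plan":    [None, None, None, "starter", "starter", None, None, "pro", None],
})

viewers   = events.loc[events.event == "pricing_view", "user_id"].nunique()
selectors = events.loc[events.event == "plan_selected", "user_id"].nunique()

print(f"Pricing page drop-off: {1 - selectors / viewers:.0%}")
print(events.loc[events.event == "plan_selected", "plan"]
            .value_counts(normalize=True)
            .rename("selection share"))
```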

Digital Goods vs. Physical Goods: Different Optimization Surfaces

The economics of pricing experiments differ dramatically by product type.

Digital goods and SaaS have near-zero marginal cost per user. This means pricing is almost entirely about perception and value communication. Your optimization surface is enormous — a 10% price increase on a SaaS product with 85% gross margins is almost pure profit. But SaaS pricing also carries unique risks: recurring revenue means a price increase compounds monthly, churn implications ripple for months, and expansion revenue dynamics add complexity.

Physical goods have cost floors — materials, manufacturing, shipping. Your pricing optimization surface is constrained by margins. A 10% price increase on a physical product with 30% margins is meaningful but must be weighed against the elasticity effects on your e-commerce funnel.

The New User vs. Existing User Line You Must Not Cross

This is where I see the most damaging mistakes.

Never change prices on existing users without clear communication. Even if the change is justified, surprising users with a price increase destroys trust. If you're testing price changes, restrict the test to new customer cohorts only.

Grandfather clauses have outsized psychological impact. Telling existing users "your price stays the same" while raising prices for new users is generally accepted and appreciated. The goodwill this generates often outweighs the revenue you'd gain from forcing everyone to the new price.

Free-to-paid conversion is a different game. Testing the transition from free to paid — what triggers the upgrade prompt, how the paywall is framed, which features are gated — is standard practice and doesn't carry the same ethical baggage as testing different prices for the same paid product. This is fundamentally about understanding the tradeoffs of what to gate vs. what to give away.

Where New Analysts Go Wrong

The most common mistake I see: running a naive price A/B test where two users at the same company see different prices. The test might show a "statistically significant" lift for the higher price, but it only takes one Slack message between coworkers to destroy your brand trust. The short-term revenue gain is not worth the long-term credibility loss.

The second mistake is optimizing for initial conversion without considering churn. The plan with the highest signup rate might have the worst retention. A $29/month plan might convert at 2x the rate of the $49 plan, but if $29 users churn at 3x the rate, you've optimized for the wrong metric. Always track downstream revenue, not just signup conversion.
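A back-of-envelope sketch of that scenario, assuming a simple lifetime value of price divided by monthly churn; the conversion and churn numbers are illustrative, not measured:

```python
# Back-of-envelope check of the scenario above, assuming LTV = price / churn.
# Conversion and churn rates are illustrative, not real data.

def revenue_per_visitor(conversion_rate, monthly_price, monthly_churn):
    """Expected lifetime revenue per pricing page visitor."""
    expected_lifetime_months = 1 / monthly_churn  # geometric-lifetime approximation
    return conversion_rate * monthly_price * expected_lifetime_months

cheap = revenue_per_visitor(conversion_rate=0.06, monthly_price=29, monthly_churn=0.09)
pricy = revenue_per_visitor(conversion_rate=0.03, monthly_price=49, monthly_churn=0.03)

print(f"$29 plan: ${cheap:.2f} per visitor")   # ~$19.33
print(f"$49 plan: ${pricy:.2f} per visitor")   # ~$49.00
```

Even at half the conversion rate, the $49 plan generates more than twice the downstream revenue per visitor once churn is accounted for.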

Pro Tip

Test the pricing PAGE, not the price itself. How you frame value matters more than the number. I've seen pricing page redesigns — identical prices — lift revenue 20%+ just by changing the layout, the anchoring, and the way value was communicated. The statistics behind these tests are the same as any other A/B test, but the business impact per test is typically 5-10x higher because you're directly influencing revenue per user, not just conversion rate.
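For the mechanics, here is a hedged sketch of reading out a pricing page test on revenue per visitor rather than raw conversion, using entirely synthetic traffic and a bootstrap interval:

```python
# A minimal sketch of evaluating a pricing PAGE test on revenue per visitor
# (not just conversion rate), via a bootstrap on synthetic data.
import numpy as np

rng = np.random.default_rng(42)

def simulate_visitors(n, conv_rate, plan_mix, plan_prices):
    """Hypothetical per-visitor revenue: 0 if no purchase, else the plan price."""
    converted = rng.random(n) < conv_rate
    plans = rng.choice(plan_prices, size=n, p=plan_mix)
    return np.where(converted, plans, 0.0)

# Variant B nudges more buyers toward the mid tier (same prices in both arms).
control = simulate_visitors(5000, 0.040, [0.60, 0.30, 0.10], [29, 79, 299])
variant = simulate_visitors(5000, 0.042, [0.45, 0.45, 0.10], [29, 79, 299])

observed_lift = variant.mean() - control.mean()

# Bootstrap the difference in revenue per visitor.
diffs = [
    rng.choice(variant, variant.size).mean() - rng.choice(control, control.size).mean()
    for _ in range(2000)
]
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"Revenue/visitor lift: ${observed_lift:.2f} (95% CI ${lo:.2f} to ${hi:.2f})")
```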

Start with your pricing page analytics. Where do users drop off? Which plan do they hover over longest before choosing a different one? Which features do they expand in the comparison table? That behavioral data tells you what to test first — and it doesn't require showing anyone a different price.

Written by Atticus Li

Revenue & experimentation leader — behavioral economics, CRO, and AI. CXL & Mindworx certified. $30M+ in verified impact.