Every A/B test has a hidden variable that no statistical model accounts for: the psychological anchor set by whichever variant the user encounters first. In behavioral economics, anchoring is the cognitive bias where an initial piece of information disproportionately influences subsequent judgments. In experimentation, this means your control variant isn't just a baseline for measurement — it's actively shaping how users perceive and evaluate the treatment.

This is not a subtle effect. Amos Tversky and Daniel Kahneman demonstrated that even arbitrary numbers — the spin of a roulette wheel, a random figure written on a whiteboard — can dramatically shift people's estimates of unrelated quantities. When the anchor is something as meaningful as a product's current pricing, layout, or feature set, the bias becomes even more pronounced. Your users aren't evaluating your new variant on its own merits. They're evaluating it relative to an anchor they may not even consciously remember.

For experimentation teams, this creates a fundamental challenge: the very act of having a control variant introduces a cognitive distortion that your test cannot measure. Understanding this bias doesn't invalidate A/B testing, but it does demand a more sophisticated approach to experiment design and interpretation.

How Anchoring Works in the Brain

Anchoring operates through a mechanism called insufficient adjustment. When presented with an initial value, people use it as a starting point and then adjust up or down to reach their estimate. The problem is that these adjustments are consistently insufficient — they stop too close to the anchor. This isn't laziness; it's a fundamental feature of how human cognition conserves energy. The brain treats the anchor as a plausible starting point and looks for reasons to stop adjusting rather than reasons to continue.

In the context of digital products, this means that a user who has been exposed to a pricing page showing a base plan at a certain price point will evaluate all subsequent price points relative to that anchor. If your A/B test changes the base price, users in the treatment group aren't evaluating the new price in isolation — they're comparing it to their memory of the old price, even if they only saw it briefly during a previous visit.

This is why price-increase tests often show larger negative effects than price-decrease tests show positive ones. The original price serves as an anchor, and any upward movement feels like a loss — triggering loss aversion on top of the anchoring effect. The two biases compound each other, creating an asymmetric response that pure economic theory would not predict.

The First-Exposure Problem in Sequential Testing

Sequential testing — where users might see different variants across multiple sessions — amplifies the anchoring problem. Consider a user who visits your product page on Monday and sees the control variant with a particular value proposition. They return on Wednesday and are assigned to the treatment variant with a different headline, different imagery, or different pricing. Their evaluation of the treatment isn't fresh; it's contaminated by the Monday anchor.

This contamination is invisible in your analytics. The Wednesday session looks like a clean exposure to the treatment variant. But psychologically, the user is performing a comparison, not an evaluation. They're asking 'Is this better or worse than what I saw before?' rather than 'Does this meet my needs?' These are fundamentally different cognitive processes, and they produce fundamentally different conversion patterns.

The practical implication is that A/B tests on returning visitor segments may be measuring something very different from what they appear to measure. What looks like a preference for one variant may actually be a preference for consistency — or, conversely, a novelty effect triggered by the contrast with the remembered anchor.
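
One practical check is to split returning-visitor results by what each user saw first. The sketch below assumes an exposure log (user_id, ts, variant) and a conversion table keyed by user_id; the column names and the pandas approach are illustrative, not any particular platform's schema.

```python
# Minimal sketch: split results by whether a user's first exposure was to a
# different variant (a potential anchor). Column names (user_id, ts, variant,
# converted) are assumptions, not a real schema.
import pandas as pd

def split_by_prior_exposure(exposures: pd.DataFrame, conversions: pd.DataFrame) -> pd.DataFrame:
    """exposures: one row per exposure (user_id, ts, variant).
    conversions: one row per user (user_id, converted as 0/1)."""
    ordered = exposures.sort_values("ts")
    first = ordered.groupby("user_id")["variant"].first().rename("first_variant")
    current = ordered.groupby("user_id")["variant"].last().rename("assigned_variant")
    users = pd.concat([first, current], axis=1)
    # Users whose first exposure differs from their current assignment are
    # evaluating the new variant against a remembered anchor.
    users["anchored"] = users["first_variant"] != users["assigned_variant"]
    joined = users.join(conversions.set_index("user_id"))
    return (joined.groupby(["assigned_variant", "anchored"])["converted"]
                  .agg(conversion_rate="mean", n="count"))
```

If the anchored and non-anchored segments convert at very different rates under the same variant, the test is at least partly measuring contrast with the anchor rather than the variant itself.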

Anchoring in Price Testing

Price testing is where anchoring bias creates the most significant distortions. When you test a higher price against your current price, you're not just testing willingness to pay — you're testing willingness to pay more than the anchor. These are different quantities. A user with no prior exposure to your product might happily pay the higher price. But a user who has seen the lower price — even once, even weeks ago — experiences the higher price as a loss, not merely a different number.

This is why many pricing experiments produce misleadingly negative results. The test concludes that users won't pay a higher price, when in reality it has only shown that users won't pay more than their anchor. A completely new cohort with no prior exposure might respond entirely differently. This distinction matters enormously for business decisions about pricing strategy.

Smart experimentation teams address this by segmenting results between new and returning users, running tests exclusively on new visitors, or using longer test durations that allow the anchor effect to decay. None of these approaches perfectly eliminate the bias, but they produce more accurate signals about true price sensitivity.
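
As a rough illustration of the segmentation approach, the sketch below compares lift separately for new and returning visitors using a hand-rolled two-proportion z-test. The counts are hypothetical placeholders, not real results.

```python
# Minimal sketch: compute treatment lift and a two-proportion z statistic
# separately for new and returning visitors. All counts are hypothetical.
from math import sqrt

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return (lift, z) for treatment (b) vs. control (a)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return p_b - p_a, (p_b - p_a) / se

# Hypothetical (conversions, visitors) per segment and variant.
segments = {
    "new":       {"control": (120, 4000), "treatment": (118, 4000)},
    "returning": {"control": (260, 5000), "treatment": (205, 5000)},
}

for name, seg in segments.items():
    lift, z = two_proportion_z(*seg["control"], *seg["treatment"])
    print(f"{name:>9}: lift={lift:+.4f}, z={z:+.2f}")
```

A pattern in which returning visitors drive most of the negative lift while new visitors are roughly flat is the signature of an anchored response rather than genuine price resistance.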

Designing Experiments That Account for Anchoring

Acknowledging anchoring bias doesn't mean abandoning A/B testing. It means designing experiments with greater awareness of the cognitive context in which they operate. Several practical strategies can mitigate the distortion.

First, isolate new users for price sensitivity tests. New users have no anchor for your product, so their responses more accurately reflect true willingness to pay. This reduces sample size but dramatically improves signal quality.
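
One way to operationalize this is to gate assignment on prior-visit status, so only anchor-free users enter the price test. The helper below is a sketch; the has_prior_visit flag, experiment name, and hash-based bucketing are assumptions rather than any particular platform's API.

```python
# Minimal sketch: only new visitors are bucketed into the price test;
# returning visitors keep the current price and are excluded from analysis.
# The experiment name and bucketing scheme are illustrative assumptions.
import hashlib
from typing import Optional

def assign_price_variant(user_id: str, has_prior_visit: bool) -> Optional[str]:
    if has_prior_visit:
        return None  # already anchored: show the existing price, exclude from the test
    digest = hashlib.sha256(f"price-test-1:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return "treatment" if bucket < 50 else "control"
```

Deterministic hashing keeps a given user in the same bucket across sessions, which matters here: reassigning users between variants would reintroduce the very anchoring the filter is meant to avoid.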

Second, use relative rather than absolute changes in treatment variants. Instead of testing a completely different price point, test the framing around the same price. Change the comparison context, the feature emphasis, or the payment structure. This keeps the anchor stable while varying the factors that influence perceived value.

Third, extend test duration to account for anchor decay. Anchoring effects diminish over time as new information displaces old reference points. A test that runs for two weeks will show different results than one that runs for two days, partly because the anchor's influence weakens as users accumulate new experiences and reference points.
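
A simple way to look for decay in the data is to compute lift week by week over the life of the test. The sketch below assumes a flat table of exposures with a variant label, a conversion flag, and an exposure timestamp; the column names are illustrative.

```python
# Minimal sketch: treatment-vs-control lift per week of the test, to see
# whether the gap narrows as anchors decay. Column names are assumptions.
import pandas as pd

def lift_by_week(df: pd.DataFrame, test_start: pd.Timestamp) -> pd.DataFrame:
    """df columns: variant ('control'/'treatment'), converted (0/1), exposure_ts."""
    out = df.copy()
    out["week"] = ((out["exposure_ts"] - test_start).dt.days // 7) + 1
    rates = out.pivot_table(index="week", columns="variant", values="converted", aggfunc="mean")
    rates["lift"] = rates["treatment"] - rates["control"]
    return rates
```

If the lift trends toward zero (or flips sign) in later weeks, the early readings were likely dominated by contrast with the anchor rather than a stable preference.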

The Meta-Lesson for Experimentation Culture

Anchoring bias in A/B testing is ultimately a reminder that experiments don't occur in a psychological vacuum. Every test operates within a context shaped by prior exposures, existing mental models, and cognitive shortcuts that your analytics platform cannot observe. The numbers in your dashboard are real, but the story they tell is always filtered through human cognition.

The most sophisticated experimentation teams don't just run tests and read results. They develop theories about the psychological mechanisms driving those results and design follow-up experiments to test those theories. When a price test fails, they don't simply conclude that users won't pay more. They ask whether the result reflects true price sensitivity or anchoring bias, and they design the next experiment to distinguish between the two.

This is the difference between an experimentation practice that generates data and one that generates understanding. Data tells you what happened. Understanding tells you why, and more importantly, whether it would happen again under different conditions. Anchoring bias is one of the most common reasons why it wouldn't — and recognizing that is the first step toward experiments that actually inform good decisions.

Written by Atticus Li

Revenue & experimentation leader — behavioral economics, CRO, and AI. CXL & Mindworx certified. $30M+ in verified impact.