Revenue Per Session
Total revenue divided by total sessions — a more comprehensive optimization metric than conversion rate because it accounts for both conversion and order value.
Revenue Per Session (RPS) is the metric I recommend as the primary success measure for most experimentation programs. It captures what conversion rate alone misses: the full economic impact of a change.
Why RPS Beats Conversion Rate
A test might increase conversion rate by 10% while decreasing average order value by 15%. Net revenue then moves by 1.10 × 0.85 = 0.935, a 6.5% loss. Conversion rate would celebrate this as a win; RPS would correctly flag it as a loss.
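The arithmetic behind that scenario, as a minimal sketch (all of the figures here are illustrative, not real data):

```python
# Control: 2.0% conversion rate, $100 average order value.
control_cr, control_aov = 0.020, 100.00
control_rps = control_cr * control_aov  # $2.00 revenue per session

# Variant: conversion rate up 10%, average order value down 15%.
variant_cr = control_cr * 1.10    # 2.2%
variant_aov = control_aov * 0.85  # $85.00
variant_rps = variant_cr * variant_aov  # $1.87 revenue per session

# Conversion rate says "win"; revenue per session says "loss".
lift = variant_rps / control_rps - 1
print(f"RPS lift: {lift:.1%}")  # -6.5%
```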
RPS = Total Revenue / Total Sessions. It naturally accounts for:
- Changes in conversion rate
- Changes in average order value
- Changes in items per order
- Changes in upsell/cross-sell effectiveness
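A quick sketch of the definition, along with the identity that ties the list above together: RPS factors into conversion rate times average order value, so a change in either component moves it (the example figures are made up):

```python
def revenue_per_session(total_revenue: float, total_sessions: int) -> float:
    """RPS = Total Revenue / Total Sessions."""
    return total_revenue / total_sessions if total_sessions else 0.0

# Illustrative month: 50,000 sessions, 1,000 orders, $120,000 revenue.
sessions, orders, revenue = 50_000, 1_000, 120_000.0

rps = revenue_per_session(revenue, sessions)  # $2.40 per session
cr = orders / sessions                        # conversion rate: 2.0%
aov = revenue / orders                        # average order value: $120

# RPS = CR * AOV, which is why it captures both effects at once.
assert abs(rps - cr * aov) < 1e-9
```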
When to Use RPS vs. Conversion Rate
- Use RPS when the test could affect both likelihood of purchase AND purchase value (pricing, product pages, checkout)
- Use conversion rate when the test only affects whether someone converts, not how much they spend (lead gen forms, newsletter signups)
- Use both when you're not sure — and let them tell you if there's a divergence
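One way to "use both" in practice is to compute the lift on each metric and flag when they disagree in direction. A hedged sketch; the dict shape and function name are hypothetical, and significance testing is deliberately out of scope:

```python
def metric_lifts(control: dict, variant: dict):
    """Each arm is a dict with 'sessions', 'orders', and 'revenue'."""
    cr = lambda arm: arm["orders"] / arm["sessions"]
    rps = lambda arm: arm["revenue"] / arm["sessions"]
    cr_lift = cr(variant) / cr(control) - 1
    rps_lift = rps(variant) / rps(control) - 1
    # Divergence: the two metrics disagree on the direction of the effect.
    diverged = (cr_lift > 0) != (rps_lift > 0)
    return cr_lift, rps_lift, diverged

# The scenario from above: conversion up 10%, order value down 15%.
control = {"sessions": 10_000, "orders": 200, "revenue": 20_000.0}
variant = {"sessions": 10_000, "orders": 220, "revenue": 18_700.0}
cr_lift, rps_lift, diverged = metric_lifts(control, variant)
```

When `diverged` is true, the test changed *who* buys or *how much* they buy, and the result deserves a closer look before shipping.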
The RPS Framework for Prioritization
I use RPS projections to prioritize the testing roadmap. For any proposed test:
1. Estimate the likely RPS impact range
2. Multiply by monthly sessions to get monthly revenue impact
3. Compare against engineering/design cost to implement
4. Rank by expected ROI
This forces teams to focus on high-impact tests rather than low-stakes button-color changes.
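The four steps above could be sketched like this. `MONTHLY_SESSIONS`, the `TestIdea` fields, and every figure are placeholder assumptions for illustration, not real estimates:

```python
from dataclasses import dataclass

MONTHLY_SESSIONS = 500_000  # assumed traffic; substitute your own

@dataclass
class TestIdea:
    name: str
    rps_lift_low: float   # step 1: estimated RPS impact range, $ per session
    rps_lift_high: float
    build_cost: float     # step 3: engineering/design cost to implement, $

    @property
    def expected_monthly_revenue(self) -> float:
        # Step 2: midpoint of the impact range times monthly sessions.
        return (self.rps_lift_low + self.rps_lift_high) / 2 * MONTHLY_SESSIONS

    @property
    def expected_roi(self) -> float:
        return self.expected_monthly_revenue / self.build_cost

ideas = [
    TestIdea("Checkout redesign", 0.010, 0.050, 40_000),
    TestIdea("Button color", 0.000, 0.002, 2_000),
    TestIdea("Pricing page test", 0.005, 0.030, 10_000),
]

# Step 4: rank by expected ROI.
ranked = sorted(ideas, key=lambda i: i.expected_roi, reverse=True)
for idea in ranked:
    print(f"{idea.name}: expected monthly revenue {idea.expected_monthly_revenue:,.0f}, "
          f"ROI {idea.expected_roi:.2f}x build cost")
```

With these placeholder numbers, the pricing test outranks the checkout redesign despite a smaller absolute impact range, because it is far cheaper to build; the button-color test lands last.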