ICE Scoring Framework
A prioritization framework that scores experiment ideas by Impact, Confidence, and Ease — each on a 1-10 scale — to rank which tests to run first.
ICE scoring, popularized by growth marketer Sean Ellis, is one of the most widely used experiment prioritization frameworks. Each test idea gets three scores: Impact (how much will this move the target metric?), Confidence (how sure are we this will work?), and Ease (how quickly and cheaply can we implement this?). The ICE score is typically either the average or the product of these three numbers.
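A minimal sketch of both scoring variants, assuming scores are integers from 1 to 10 (the `Idea` class and its field names are illustrative, not part of any standard):

```python
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    impact: int      # 1-10: expected movement of the target metric
    confidence: int  # 1-10: strength of supporting evidence
    ease: int        # 1-10: implementation effort (higher = easier)

    def ice_average(self) -> float:
        # Average variant: the score stays on the familiar 1-10 scale.
        return (self.impact + self.confidence + self.ease) / 3

    def ice_product(self) -> int:
        # Product variant: range 1-1000; a single low score drags
        # the result down much harder than in the average variant.
        return self.impact * self.confidence * self.ease
```

The product variant is useful when you want any weak dimension (say, very low confidence) to sink an idea's rank; the average variant is gentler and easier to eyeball.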
How to Score Effectively
Impact should be based on potential metric movement, not gut feel. Look at the traffic volume to the page, the current conversion rate, and the size of the behavioral change you're proposing. A headline test on your highest-traffic landing page has higher impact potential than a button color test on a low-traffic page.
Confidence should reflect the strength of your evidence — qualitative research, heatmap data, competitor analysis, or previous test results. A test inspired by session recordings showing user confusion has higher confidence than a test inspired by a stakeholder's opinion.
Ease should account for design, development, QA, and data requirements. A copy-only test scores higher on ease than a full redesign requiring new components.
Limitations of ICE
ICE is subjective — different people score the same idea differently. It also doesn't account for strategic value (learning something important about your customer) or dependencies (this test must run before that one). Use ICE as a starting point, not a final answer.
Practical Application
Score your test backlog with ICE, then review the top 10 as a team. The value of ICE isn't precision — it's structured conversation. It forces teams to articulate why they believe a test will work, which improves hypothesis quality regardless of the score.
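The backlog workflow above can be sketched in a few lines; the backlog entries and the choice of the average variant are illustrative assumptions:

```python
# Hypothetical backlog: (test name, impact, confidence, ease), each 1-10.
backlog = [
    ("Rewrite hero headline",     8, 7, 9),
    ("Redesign checkout flow",    9, 5, 2),
    ("Change CTA button color",   3, 4, 10),
    ("Add trust badges near CTA", 6, 6, 8),
]

def ice(idea):
    _, impact, confidence, ease = idea
    return (impact + confidence + ease) / 3  # average variant

# Rank the backlog, then take the top 10 into the team review.
for name, *scores in sorted(backlog, key=ice, reverse=True)[:10]:
    print(f"{sum(scores) / 3:.1f}  {name}")
```

The printed list is only the starting point for discussion: when two people score the same idea very differently, that disagreement is exactly the conversation ICE is meant to surface.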