
Objective Prioritisation: ICE

Fast Roadmap Decisions for Teams Who Can't Afford Analysis Paralysis

Mark Holt

This is one of RoadmapOne’s articles on Objective Prioritisation frameworks.

Here’s the inconvenient truth about most prioritisation frameworks: they demand data you don’t have. RICE wants reach estimates, but your product hasn’t launched yet. WSJF needs business value quantification, but you’re pre-revenue. MoSCoW assumes you know what’s a must-have, but the market just pivoted last week. Enter ICE prioritisation—the framework for teams who value speed over precision and need to ship roadmaps with incomplete information.

Created by Sean Ellis, the growth hacker who coined “growth hacking,” ICE was born from startup pragmatism. Ellis watched teams spend weeks arguing about perfect priority scores while competitors shipped. ICE simplifies ruthlessly: score every objective on Impact, Confidence, and Ease using 1-10 scales, multiply the three, and sort. The highest scores get funded. The entire backlog can be scored in 90 minutes, not 90 days.

Important

TL;DR: ICE is RICE without Reach—simpler, faster, and built for uncertainty. It excels when you’re moving quickly with thin data, testing hypotheses in new markets, or when precision analysis creates more confusion than clarity. But ICE betrays mature companies with robust analytics, and it systematically undervalues the hard, transformational bets that sometimes matter most.

The Three Dimensions of ICE

ICE scores objectives across three dimensions: Impact, Confidence, and Ease. Unlike RICE’s person-months and user counts, ICE uses relative 1-10 scales. This trades precision for speed—you’re not estimating “7,500 users reached”; you’re judging “is this a 7 or an 8 out of 10 for Impact?” The loss of precision is the point. When data is thin, false precision is worse than rough accuracy.

Impact: How Much Does This Move the Needle?

Impact in ICE measures how significantly this objective advances your primary goal. If you’re a growth-stage startup optimising for user acquisition, Impact is “how many more users will we get?” If you’re focused on reducing churn, Impact is “how much will retention improve?” The scale is 1-10, with 10 being transformational and 1 being marginal.

The brilliance of relative scales is they force brutal honesty without demanding data you don’t have. You can’t measure exact acquisition lift for an unbuilt feature, but you can compare it to past features and estimate whether this feels like a 4, 7, or 9 relative to what you’ve shipped before. The comparisons calibrate the team’s intuition, aligning everyone’s internal 1-10 scales.

The trap is impact inflation. When every feature is “definitely a 9 or 10,” the scoring collapses. Combat this by forcing a distribution: no more than 20% of objectives can score 9-10, no more than 30% can score 7-8, and at least 20% must score 1-4. Enforced distributions prevent everyone from claiming their pet project is transformational.
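If you want to enforce that distribution mechanically rather than by eyeballing a spreadsheet, a small check is enough. The sketch below assumes the thresholds suggested above and a plain list of Impact scores; the function name and the example scores are illustrative, not part of ICE itself.

```python
from collections import Counter

def check_impact_distribution(scores):
    """Flag violations of the suggested spread: at most 20% of objectives
    scoring 9-10, at most 30% scoring 7-8, and at least 20% scoring 1-4."""
    total = len(scores)
    buckets = Counter()
    for s in scores:
        if s >= 9:
            buckets["9-10"] += 1
        elif s >= 7:
            buckets["7-8"] += 1
        elif s <= 4:
            buckets["1-4"] += 1
    warnings = []
    if buckets["9-10"] / total > 0.20:
        warnings.append("Too many 9-10s: impact inflation likely.")
    if buckets["7-8"] / total > 0.30:
        warnings.append("Too many 7-8s: push some scores down.")
    if buckets["1-4"] / total < 0.20:
        warnings.append("Too few 1-4s: someone has to admit their bet is marginal.")
    return warnings

# Example: a backlog where nearly half the objectives claim to be transformational.
print(check_impact_distribution([10, 9, 9, 9, 8, 7, 7, 6, 5, 5]))
```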

Confidence: How Sure Are We This Will Work?

Confidence scores your conviction that the objective will deliver its estimated impact. A 10 means you have strong validation data—customer interviews, A/B test results, analogous successes. A 5 means it’s an educated guess with directional evidence. A 1 means you’re gambling on pure intuition with zero supporting data.

Confidence is ICE’s forcing function against optimism. That “game-changing” feature might score 9 on Impact, but if you’ve never built anything like it and have no validation, Confidence drops to 3. The total ICE score (9 × 3 × Ease) reflects that uncertainty is risk, and risky bets should rank lower than validated ones.

Low confidence isn’t a failure signal—it’s an honesty signal. The problem emerges when product managers treat confidence as a weakness and inflate scores to avoid looking uncertain. The fix is rewarding honesty: celebrate teams that score 3 on Confidence, run cheap validation experiments, and update to 8 with evidence. Learning should boost confidence scores; guessing shouldn’t.

Ease: How Simple Is This to Ship?

Ease inverts Effort from RICE. While RICE measures person-months (higher is worse), ICE measures ease (higher is better). A 10 means this can ship in days with minimal resources. A 1 means this is a multi-quarter, cross-team odyssey requiring heroic effort.

Ease scores compress effort estimation into gut feel. You’re not calculating person-months across engineering, design, and QA; you’re pattern-matching against past work and estimating “is this easier or harder than that thing we shipped last quarter?” The loss of precision is the gain of speed. Scoring 50 objectives on Ease takes 30 minutes versus the 3-hour effort estimation workshop RICE demands.

The danger is ease inflation. Engineers habitually underestimate complexity, and ICE’s 1-10 scale hides that optimism. A feature scored 8 on Ease (seems pretty easy) might turn into a 4-month nightmare once you account for edge cases, integrations, and technical debt. Calibrate by tracking estimated Ease versus actual delivery time. If your 8s consistently take three months, your scale is broken. Adjust or apply a pessimism multiplier.
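One lightweight way to run that calibration, sketched below with made-up history: translate each past Ease score into the delivery time it implied, compare that with how long the work actually took, and deflate future Ease scores by the average overrun. The score-to-weeks mapping and the historical data are assumptions for illustration only; substitute whatever your own delivery tracking shows.

```python
# Hypothetical calibration data: (ease_score_given, actual_weeks_to_ship).
history = [(8, 14), (8, 11), (9, 3), (6, 10), (7, 12), (5, 16)]

# A rough rule of thumb (an assumption, not part of ICE itself):
# an Ease of 10 should ship in about 1 week, an Ease of 1 in about 20 weeks.
def expected_weeks(ease):
    return 1 + (10 - ease) * (19 / 9)

# How much longer work actually takes versus what the Ease score implied.
overruns = [actual / expected_weeks(ease) for ease, actual in history]
pessimism = sum(overruns) / len(overruns)

def calibrated_ease(raw_ease):
    """Deflate an optimistic Ease score by the team's historical overrun."""
    # Translate to weeks, apply the overrun, translate back to the 1-10 scale.
    weeks = expected_weeks(raw_ease) * pessimism
    return max(1.0, min(10.0, 10 - (weeks - 1) * (9 / 19)))

print(f"Pessimism multiplier: {pessimism:.2f}")
print(f"A raw Ease of 8 calibrates to {calibrated_ease(8):.1f}")
```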

The ICE Formula in Action

The ICE formula is elegantly simple: Impact × Confidence × Ease. Multiply the three 1-10 scores to get a value between 1 and 1,000. Sort by score, draw the capacity line, and fund what’s above it.

Consider three objectives:

Objective A: Add referral program to drive viral growth

  • Impact: 9 (could double acquisition if it works)
  • Confidence: 4 (we’ve never done referrals before, no validation)
  • Ease: 6 (moderate engineering complexity)
  • ICE Score: 9 × 4 × 6 = 216

Objective B: Improve onboarding tutorial clarity

  • Impact: 7 (should meaningfully reduce early churn)
  • Confidence: 8 (user research shows confusion points clearly)
  • Ease: 9 (copy changes and minor UI tweaks)
  • ICE Score: 7 × 8 × 9 = 504

Objective C: Build AI-powered feature recommendation engine

  • Impact: 10 (could transform engagement)
  • Confidence: 3 (ML is unproven in our domain)
  • Ease: 2 (requires ML infrastructure we don’t have)
  • ICE Score: 10 × 3 × 2 = 60

Objective B wins despite moderate impact because it combines strong validation (high Confidence) with low effort (high Ease). Objective A’s transformational potential is undermined by low confidence and moderate ease. Objective C is a moonshot—high impact, low confidence, hard to build—so ICE ranks it last. The framework systematically surfaces quick, validated wins over speculative mega-projects.
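For teams that keep their backlog in a spreadsheet or a short script rather than a tool, the whole exercise reduces to a few lines. This is a minimal sketch that simply restates the worked example above; the dictionary layout is one arbitrary way to hold the data.

```python
objectives = [
    {"name": "A: Referral program", "impact": 9, "confidence": 4, "ease": 6},
    {"name": "B: Onboarding tutorial clarity", "impact": 7, "confidence": 8, "ease": 9},
    {"name": "C: AI recommendation engine", "impact": 10, "confidence": 3, "ease": 2},
]

# ICE score = Impact x Confidence x Ease.
for obj in objectives:
    obj["ice"] = obj["impact"] * obj["confidence"] * obj["ease"]

# Sort highest ICE first to get the ranked roadmap.
for obj in sorted(objectives, key=lambda o: o["ice"], reverse=True):
    print(f'{obj["name"]}: ICE = {obj["ice"]}')
# B: Onboarding tutorial clarity: ICE = 504
# A: Referral program: ICE = 216
# C: AI recommendation engine: ICE = 60
```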

This is ICE’s strength: it prioritises learning velocity over swing-for-the-fences bets. When you’re a startup with nine months of runway, shipping validated improvements fast beats betting the farm on an AI feature that might not work. ICE keeps you alive long enough to find product-market fit.

When ICE Is Your Best Weapon

ICE thrives in high-uncertainty, high-velocity environments. Early-stage startups with no analytics, new product lines without historical data, and teams pivoting into unfamiliar markets love ICE because it doesn’t demand precision you can’t provide. You score based on best judgment, ship, learn, and re-score next sprint with better data.

ICE also excels when analysis paralysis is killing execution. If your team spends three weeks debating whether Reach is 6,000 or 8,000 users, ICE short-circuits the debate: “Is Impact a 7 or 8? Good enough, move on.” The framework trades precision for momentum, which is often the right trade when market windows close faster than perfect data arrives.

Additionally, ICE is a forcing function for small, iterative bets. Because Ease is a multiplier, high-ease objectives naturally score well even with moderate impact. This surfaces quick wins that might be dismissed as “too small” in more elaborate frameworks. For teams embracing lean startup principles or continuous delivery, ICE’s bias toward easy-to-ship work aligns perfectly with cultural values.

When ICE Betrays You

ICE collapses in three scenarios. First, when data-richness makes precision possible. Mature SaaS companies with millions of users, robust analytics, and historical delivery data should use RICE, not ICE. If you can measure exact reach and effort in person-months, using relative 1-10 scales is throwing away information. ICE is for when you don’t have better data, not when you’re too lazy to use it.

Second, when strategic bets require audacious swings. ICE’s multiplicative formula severely punishes low Ease or low Confidence scores. That transformational infrastructure rebuild scored 10 on Impact and 8 on Confidence but only 2 on Ease? ICE score: 160. That validated quick win scored 5 on Impact, 9 on Confidence, and 10 on Ease? ICE score: 450. ICE tells you to ship the quick win, but sometimes the company’s future depends on the hard, transformational bet. If you follow ICE blindly, you’ll incrementally improve yourself into irrelevance.

Third, when effort estimation is delusional. ICE’s Ease dimension is a feeling, not a measurement. If your team habitually overestimates ease (scoring 8s that turn into 6-month projects), your ICE scores are systematically inflated. You’ll fund a roadmap of “high ICE” objectives that become death marches. The cure is calibrating Ease against actual delivery times and adjusting scales to match reality.

Practical Implementation

Start by scoring your top 30 objectives. Gather product, engineering, and growth leads for a 90-minute workshop. For each objective, debate and assign Impact (1-10: how much does this move our primary metric?), Confidence (1-10: how sure are we it works?), and Ease (1-10: how quickly can we ship it?). Calculate scores by multiplying. This takes one session, not one week.

Enforce distribution constraints to prevent score inflation. No more than 20% of objectives can score 9-10 on any dimension. This forces teams to reserve 9s and 10s for genuinely exceptional cases and distributes scores honestly across the range.

Sort by ICE score and draw the capacity line. If you have two squads for one quarter, fund the top 8-12 objectives depending on Ease estimates. Everything above the line gets resourced. Everything below waits for next quarter or dies. The line is your portfolio’s reality check.
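A sketch of drawing that line programmatically follows. It assumes each objective carries a rough capacity estimate alongside its ICE score (called effort points here purely for illustration); the objective names and numbers are invented.

```python
def draw_capacity_line(scored, capacity_points):
    """Rank by ICE score and cut the list where capacity runs out.

    `scored` is a list of (name, ice_score, effort_points) tuples; effort
    points stand in for whatever rough unit the team plans capacity in.
    """
    ranked = sorted(scored, key=lambda o: o[1], reverse=True)
    used = 0
    line = len(ranked)  # default: everything fits above the line
    for i, (_, _, effort) in enumerate(ranked):
        if used + effort > capacity_points:
            line = i  # first objective that falls below the line
            break
        used += effort
    funded = [name for name, _, _ in ranked[:line]]
    waiting = [name for name, _, _ in ranked[line:]]
    return funded, waiting

backlog = [("Onboarding copy", 504, 2), ("Referral program", 216, 5),
           ("Billing revamp", 350, 8), ("AI recommendations", 60, 13)]
funded, waiting = draw_capacity_line(backlog, capacity_points=12)
print("Funded this quarter:", funded)   # ['Onboarding copy', 'Billing revamp']
print("Below the line:", waiting)       # ['Referral program', 'AI recommendations']
```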

Generate your ICE report in RoadmapOne and review it alongside tag distributions. ICE ranking shows which objectives maximise impact-confidence-ease. Tag heatmaps show whether those objectives form a balanced strategic portfolio. If the top 10 are all “Core” and zero “Transformational,” ICE optimised you into incrementalism. Override deliberately for strategic balance.

Re-score every sprint or monthly as you learn. Last sprint’s Confidence 4 becomes 8 when you validate the hypothesis. Last quarter’s Ease 7 becomes 4 when you discover hidden complexity. Continuous re-scoring keeps ICE aligned with reality, not outdated guesses. Fast scoring enables fast iteration.

ICE and Strategic Pragmatism

ICE isn’t RICE-lite—it’s a different philosophy. RICE assumes you have data and should use it. ICE assumes data is thin and speed matters more than precision. Use ICE when you’re moving fast with incomplete information, testing hypotheses in uncertain markets, or when elaborate frameworks create more heat than light.

But ICE is a starting point, not a destination. As you grow, transition to RICE when reach and effort become measurable. Keep ICE for exploratory bets and new market experiments where data doesn’t exist yet. Many mature companies run dual systems: RICE for core product roadmap, ICE for innovation lab projects. The frameworks aren’t mutually exclusive—they’re contextual tools.

RoadmapOne makes ICE visible at portfolio scale. Score objectives in 90 minutes, sort by ICE, and fund what scores highest. Debates shift from endless estimation to “how can we validate confidence?” and “how can we reduce effort?” The roadmap stops being paralysed by missing data and starts shipping learning loops that generate data.

ICE is fast, scrappy, and unapologetically pragmatic. It won’t give you perfect answers, but it will give you decisions fast enough to matter. For startups and teams navigating uncertainty, that’s often the difference between survival and slow death by analysis paralysis.

For more on Objective Prioritisation frameworks, see our comprehensive guide.