Objective Prioritisation: RICE
Data-Driven Roadmaps for Teams Who Measure Everything
This is one of RoadmapOne’s articles on Objective Prioritisation frameworks.
The deadliest enemy of a product roadmap isn’t bad ideas—it’s the inability to kill good ideas fast enough. Every backlog contains dozens of worthy objectives, each championed by smart people with compelling logic. Without a quantitative forcing function, roadmap debates devolve into politics: whoever argues loudest, or ranks highest, wins. RICE prioritisation solves this by transforming qualitative arguments into comparable scores. Reach times Impact times Confidence divided by Effort produces a single number. Sort by that number, draw the capacity line, and the roadmap writes itself.
Developed by Intercom’s product team, RICE emerged from a practical frustration: product managers were arguing past each other because “impact” meant different things to different people. One PM’s “high impact” was another’s “nice to have.” RICE forced everyone to define impact numerically and then balanced it against reach, confidence, and effort. The result was a framework that aligned teams not by building consensus, but by making assumptions explicit and comparable.
TL;DR: RICE is the gold standard for data-rich product teams who can measure reach and impact with precision. It excels at surfacing high-value, low-effort wins and exposing pet projects masquerading as strategic priorities. But RICE betrays you when data is thin, effort estimates are delusional, or strategic bets require leaps of faith that quantification destroys.
The Four Dimensions of RICE
RICE scores every objective across four dimensions: Reach, Impact, Confidence, and Effort. The formula multiplies the first three and divides by the fourth to produce a value-per-effort metric. High RICE scores indicate objectives that affect many people, deliver meaningful impact, enjoy strong data support, and require modest resources. Low RICE scores flag work that touches few users, delivers marginal gains, rests on shaky assumptions, or burns months of capacity.
Reach: How Many People Benefit?
Reach quantifies the number of users, customers, or transactions affected by this objective within a defined time period. If you’re building a feature for your onboarding flow, Reach might be “75% of new signups per quarter”—if you get 10,000 signups quarterly, that’s 7,500 people. If you’re optimising a checkout bug affecting 5% of transactions, and you process 50,000 transactions monthly, Reach is 2,500 per month.
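To make that arithmetic explicit, here is a minimal sketch of the two Reach calculations above, using the same illustrative figures (the variable names are ours, not part of RICE):

```python
# Reach = audience size x share affected, over a defined time period.
quarterly_signups = 10_000
onboarding_share = 0.75                                    # 75% of new signups
onboarding_reach = quarterly_signups * onboarding_share    # 7,500 users per quarter

monthly_transactions = 50_000
checkout_bug_rate = 0.05                                   # 5% of transactions
checkout_reach = monthly_transactions * checkout_bug_rate  # 2,500 transactions per month
```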
Reach forces precision about audience size. Product managers love to claim every feature affects “all users,” but RICE demands honesty. That power-user feature you’re excited about? It reaches 8% of your base, not 100%. That enterprise integration? It touches 30 customers out of 5,000. Quantifying reach exposes how much of your capacity you’re spending on marginal audiences.
The trap with Reach is double-counting. If you have three objectives all targeting the same cohort, you can’t claim full reach for all three without admitting overlap. Mature teams calibrate reach by tracking historical adoption curves: how many users actually engage with new features in their first quarter? The answer is usually far lower than product managers’ fantasies.
Impact: How Much Does Each Person Benefit?
Impact measures the magnitude of effect per person affected. RICE uses a simple scale: Massive impact (3×), High impact (2×), Medium impact (1×), Low impact (0.5×), Minimal impact (0.25×). This forces teams to distinguish between features that transform user behaviour and features that shave milliseconds or move buttons.
Massive impact means fundamental behaviour change or major pain elimination—think enabling a capability users couldn’t approximate before, or removing a top-three customer complaint. High impact is meaningful improvement to existing workflows. Medium impact is noticeable but not transformative. Low impact is marginal refinement. Minimal impact is polish that only obsessives notice.
The discipline of Impact scoring is admitting most work is Medium or Low. Teams habitually inflate impact because everyone believes their objective matters. The cure is calibration against shipped features: go back two quarters, list everything you scored as “High impact,” and measure actual adoption or satisfaction change. If your High impact features moved metrics 3-8%, and your Medium features moved them 1-3%, your calibration is working. If both buckets delivered similar results, you’re lying to yourselves.
Confidence: How Sure Are We?
Confidence captures how much data supports your Reach and Impact estimates. High Confidence (100%) means solid analytics or research validates your assumptions. Medium Confidence (80%) means directional data or analogous examples support the estimates. Low Confidence (50%) means educated guesses with minimal evidence.
Confidence is RICE’s forcing function against wishful thinking. That moonshot objective with “Massive impact” and “100% reach” might drop to 50% confidence because you’ve never shipped anything like it and have zero validation data. The RICE score gets cut in half, reflecting that uncertainty is risk, and risk should lower priority.
Low confidence isn’t failure—it’s honesty. The problem is when every objective claims High confidence because product managers fear looking uncertain. The fix is tying confidence to evidence types: customer interviews with 10+ participants might justify 80% confidence on Impact; analytics showing 50,000 users hit the pain point monthly justifies High confidence on Reach; pure intuition caps at 50%. Make confidence criteria explicit and score honestly.
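One way to make those criteria hard to argue past is a simple lookup from evidence type to score; a minimal sketch, with hypothetical evidence categories modelled on the examples above:

```python
# Illustrative mapping from evidence type to Confidence, following the example
# criteria above. Categories and thresholds are assumptions; set your own bar.
CONFIDENCE_BY_EVIDENCE = {
    "reach_analytics": 1.0,              # e.g. analytics show 50,000 users hit the pain point monthly
    "customer_interviews_10_plus": 0.8,  # directional qualitative evidence
    "analogous_feature_data": 0.8,
    "intuition_only": 0.5,               # pure gut feel caps at 50%
}

def confidence(evidence: list[str]) -> float:
    """Score by the strongest evidence available; default to the 50% floor."""
    return max((CONFIDENCE_BY_EVIDENCE.get(e, 0.5) for e in evidence), default=0.5)
```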
Effort: How Much Will This Cost?
Effort measures person-months of work across all functions: engineering, design, product, marketing, ops. If a feature needs two engineers for two months, one designer for one month, and one PM for half a month, Effort is 5.5 person-months. RICE uses t-shirt sizes for speed: XS (1 person-month), S (3), M (6), L (12), XL (24), XXL (36).
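The scale and the worked sum above translate directly into constants; a minimal sketch using the figures from this paragraph:

```python
# T-shirt sizes in person-months, as used in this article.
EFFORT_SIZES = {"XS": 1, "S": 3, "M": 6, "L": 12, "XL": 24, "XXL": 36}

# Effort sums person-months across every function, not just engineering.
effort = (
    2 * 2.0    # two engineers for two months
    + 1 * 1.0  # one designer for one month
    + 1 * 0.5  # one PM for half a month
)              # = 5.5 person-months, between an S and an M on the scale above
```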
Effort is where RICE scores often collapse. Teams systematically underestimate complexity, ignore non-engineering costs, and forget maintenance debt. The feature that “should take two sprints” becomes six months when you account for QA, documentation, customer education, support training, and inevitable scope creep.
Combat effort delusion with historical calibration. Track estimated versus actual effort for shipped objectives. If your “M” estimates (6 person-months) consistently take 10 person-months, you’re sandbagging or delusional. Adjust your scale or add a 1.5× fudge factor. Mature teams also separate build effort from run effort: that integration might cost 6 months to build but 0.5 person-months per quarter to maintain. RICE only captures build effort, so long-term maintenance costs slip through.
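The calibration loop is easy to automate once you log estimates and actuals; a minimal sketch with hypothetical history:

```python
# Hypothetical estimated-vs-actual pairs (person-months) for shipped objectives.
effort_history = [(6, 10), (3, 5), (12, 18), (6, 9)]

# Average overrun ratio becomes the fudge factor applied to new estimates.
fudge_factor = sum(actual / estimate for estimate, actual in effort_history) / len(effort_history)

def calibrated_effort(estimate: float) -> float:
    """Inflate a raw estimate by the team's historical overrun (~1.6x here)."""
    return estimate * fudge_factor   # an "M" of 6 becomes roughly 9.5
```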
The RICE Formula in Action
The RICE score formula is simple: (Reach × Impact × Confidence) ÷ Effort. Multiply the first three dimensions, divide by Effort, and you get a value-per-effort metric that’s comparable across wildly different objectives.
Consider three objectives:
Objective A: Rebuild onboarding to reduce drop-off
- Reach: 7,500 users per quarter (75% of signups)
- Impact: High (2×)
- Confidence: High (100%)
- Effort: L (12 person-months)
- RICE Score: (7,500 × 2 × 1) ÷ 12 = 1,250
Objective B: Add dark mode to mobile app
- Reach: 5,000 users (50% of MAU)
- Impact: Low (0.5×)
- Confidence: Medium (80%)
- Effort: S (3 person-months)
- RICE Score: (5,000 × 0.5 × 0.8) ÷ 3 = 667
Objective C: Enterprise SSO integration
- Reach: 300 users (30 enterprise customers)
- Impact: Massive (3×)
- Confidence: High (100%)
- Effort: M (6 person-months)
- RICE Score: (300 × 3 × 1) ÷ 6 = 150
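If you score in a spreadsheet or a script rather than by hand, the three calculations above reduce to a few lines; a minimal sketch that reproduces the scores and ranks the objectives:

```python
from dataclasses import dataclass

@dataclass
class Objective:
    name: str
    reach: float        # users or transactions per period
    impact: float       # 0.25, 0.5, 1, 2, or 3
    confidence: float   # 0.5, 0.8, or 1.0
    effort: float       # person-months

    @property
    def rice(self) -> float:
        # RICE = (Reach x Impact x Confidence) / Effort
        return (self.reach * self.impact * self.confidence) / self.effort

objectives = [
    Objective("A: Rebuild onboarding", reach=7_500, impact=2.0, confidence=1.0, effort=12),
    Objective("B: Dark mode",          reach=5_000, impact=0.5, confidence=0.8, effort=3),
    Objective("C: Enterprise SSO",     reach=300,   impact=3.0, confidence=1.0, effort=6),
]

for o in sorted(objectives, key=lambda o: o.rice, reverse=True):
    print(f"{o.name}: {o.rice:,.0f}")   # A: 1,250 then B: 667 then C: 150
```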
Objective A wins despite moderate impact because it reaches thousands of users with manageable effort. Objective B delivers less value per user but costs little, making it a solid quick win. Objective C transforms a small audience—enterprises love SSO—but the limited reach caps its score. RICE exposes that onboarding improvements deliver 8× more value per effort than enterprise features, even though the enterprise team argues louder.
This is RICE’s power: it makes trade-offs visible and quantitative. Arguments shift from “my feature matters more” to “here’s why my reach estimate is more accurate” or “here’s validation data that boosts my confidence.” The conversation becomes empirical, not political.
When RICE Is Your Best Weapon
RICE excels in specific contexts. If you have robust analytics tracking user behaviour, reliable reach estimates, and a culture of measurement, RICE transforms roadmap debates from opinion into arithmetic. Data-rich SaaS products with thousands of users, established engagement metrics, and product-led growth motions love RICE because every dimension has real numbers behind it.
RICE also shines when teams struggle to balance quick wins against big bets. The formula naturally surfaces high-reach, low-effort opportunities that might otherwise be dismissed as “too small.” Those 3-person-month features reaching 60% of users can score higher than 24-person-month features reaching 10%—and rightly so. RICE prevents the “bet the farm” bias where teams obsess over massive projects while ignoring compounding small wins.
Additionally, RICE is a forcing function against HiPPO (Highest Paid Person’s Opinion) prioritisation. When the CEO’s favourite feature scores 200 while the PM’s evidence-based alternative scores 1,400, the numbers create political cover for making the right call. RICE doesn’t eliminate politics, but it raises the cost of overriding data.
When RICE Betrays You
RICE collapses in three scenarios. First, when data is thin. Pre-launch startups or brand-new product categories have zero reach data, no impact history, and pure speculation on effort. Plugging guesses into RICE produces “garbage in, garbage out” scores that feel scientific but encode nothing but collective delusion. In these contexts, ICE or Manual prioritisation is more honest.
Second, when strategic bets require faith. Some objectives are 10× opportunities with 30% confidence and massive effort. RICE scores them low because the formula punishes uncertainty and rewards safe bets. But transformational innovation requires accepting low-probability, high-payoff bets. If you follow RICE strictly, you’ll optimise yourself into irrelevance. The fix is explicitly flagging “strategic override” objectives that bypass RICE scores for portfolio balance—tag them “Transformational” and fund a few despite low scores.
Third, when effort estimates are delusional. If engineering habitually underestimates complexity by 3×, your RICE scores are systematically inflated. The roadmap fills with “high RICE” objectives that turn into death marches because Effort was fantasy. The cure is brutal calibration: track estimated versus actual effort, publicise the deltas, and penalise repeat offenders. Scoring only works when inputs are honest.
Practical Implementation
Start by scoring your top 30 objectives. Gather product, engineering, and data leads. For each objective, debate and assign Reach (actual user count), Impact (0.25× to 3×), Confidence (50-100%), and Effort (1-36 person-months). Calculate scores. This takes 2-3 hours and surfaces wildly different intuitions about reach and impact that teams didn’t know they had.
Sort by RICE score and draw the capacity line. If you have three squads for four quarters, you can fund roughly 12-18 objectives depending on size. Everything above the line is funded. Everything below waits. The line is brutal but clarifying—it forces acknowledgment that not everything fits.
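Drawing the line is a greedy pass down the sorted list; a minimal sketch, assuming capacity is expressed in person-months (the squad-size figure is our assumption, not a RICE rule):

```python
def draw_capacity_line(scored, capacity_person_months):
    """scored: (name, rice_score, effort_person_months) tuples, any order."""
    funded, deferred, remaining = [], [], capacity_person_months
    for name, rice, effort in sorted(scored, key=lambda row: row[1], reverse=True):
        if effort <= remaining:          # fits above the line
            funded.append(name)
            remaining -= effort
        else:                            # below the line: it waits
            deferred.append(name)
    return funded, deferred

# e.g. 3 squads x 4 people x 12 months = 144 person-months of capacity
funded, deferred = draw_capacity_line(
    [("Onboarding", 1_250, 12), ("Dark mode", 667, 3), ("SSO", 150, 6)],
    capacity_person_months=144,
)
```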
Generate your RICE report in RoadmapOne and present it alongside tag distributions. The RICE ranking shows value-per-effort sequencing. The tag heatmap shows whether high-RICE objectives form a balanced portfolio. If the top 15 are all “Core” and zero “Transformational,” RICE optimised you into incrementalism. Override deliberately.
Re-score quarterly as data arrives. Last quarter’s 50% confidence becomes 100% when validation data lands. Last quarter’s Medium impact becomes Low when adoption disappoints. Continuous scoring keeps the roadmap aligned with reality, not outdated assumptions.
RICE and Strategic Balance
RICE is a tool, not a religion. Use it to structure debates, then override when strategic imperatives demand it. Fund the top RICE scores, but reserve 10-20% of capacity for low-RICE, high-strategic-value bets. Tag those overrides explicitly—“Strategic Bet” or “Transformational”—so future you understands why the top-scoring objectives didn’t get funded.
RoadmapOne makes RICE visible at portfolio scale. Score objectives, sort by RICE, and let the formula expose which work delivers maximum value per effort. Your roadmap stops being a political wishlist and becomes a quantitative instrument. Debates shift from lobbying to evidence, and the capacity line forces honesty about trade-offs.
RICE isn’t perfect—no framework is. But for data-rich teams who can measure reach and impact, it’s the closest thing product management has to objective truth. Score ruthlessly, cut below the line, and watch your roadmap transform from bloated to surgical.
For more on Objective Prioritisation frameworks, see our comprehensive guide.