Objective Prioritisation: Opportunity Scoring
Find the Gaps Between What Matters and What Satisfies
This is one of RoadmapOne’s articles on Objective Prioritisation frameworks.
Most prioritisation frameworks optimise for impact or effort. Opportunity Scoring optimises for something fundamentally different: customer dissatisfaction. Developed by Tony Ulwick (creator of Jobs-to-be-Done), Opportunity Scoring asks two questions for every objective: “How important is this to customers?” and “How satisfied are customers with current solutions?” The gap between Importance and Satisfaction reveals your biggest opportunities—features customers desperately need but can’t get from existing products, including yours.
The formula is elegantly simple: Opportunity = Importance + max(Importance - Satisfaction, 0). This gives double weight to importance while penalising features where current solutions already satisfy. A feature customers rate 9 on Importance but 8 on Satisfaction has low opportunity (small gap). A feature rated 9 on Importance but 3 on Satisfaction has massive opportunity (huge gap). Opportunity Scoring doesn’t care about reach or effort—it cares about solving problems customers are screaming about.
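As a minimal sketch, the formula can be written as a one-line function (the function name here is illustrative, not from any particular library):

```python
def opportunity_score(importance: float, satisfaction: float) -> float:
    """Ulwick's opportunity formula: Importance plus the unmet-need gap.

    The max() clamps the gap at zero, so outcomes where Satisfaction
    already meets or exceeds Importance earn no bonus.
    """
    return importance + max(importance - satisfaction, 0)

# The two cases from the text: a small gap vs a huge gap.
print(opportunity_score(9, 8))  # 10 -- important but already well served
print(opportunity_score(9, 3))  # 15 -- important and badly served
```

Note the asymmetry: over-satisfied outcomes are not penalised below their Importance score; the gap term simply vanishes.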
TL;DR: Opportunity Scoring prioritises by finding gaps between customer importance and satisfaction. It excels at customer-centric roadmaps, Jobs-to-be-Done alignment, and surfacing unmet needs competitors miss. But it requires customer research infrastructure, ignores business strategy and technical complexity, and only works if your customer segments are well-defined.
The Two Dimensions of Opportunity Scoring
Unlike frameworks that multiply dimensions, Opportunity Scoring uses just two: Importance (how critical is this capability to customers accomplishing their goals?) and Satisfaction (how well do current solutions deliver this capability?). Both are scored 1-10, typically via customer surveys. The magic is in the gap—when Importance is high and Satisfaction is low, you’ve found an opportunity.
Importance: How Much Do Customers Care?
Importance measures how critical this objective’s outcome is to customers achieving their goals. It’s not “do customers want this feature?”—it’s “how important is the underlying need this feature addresses?” In Jobs-to-be-Done language, Importance asks: “How much does solving this job step matter to successful job completion?”
Importance is discovered through customer research, not product manager intuition. You survey customers asking: “When you’re trying to [accomplish goal X], how important is [outcome Y] on a scale of 1-10?” For example, if you’re building accounting software, you might ask: “When closing monthly books, how important is ‘minimise time spent reconciling transactions’ on a scale of 1-10?” Customers scoring it 9-10 signal this outcome is critical.
The discipline of Importance scoring is separating features from outcomes. “I want a keyboard shortcut” is a feature request. “I want to minimise time spent on repetitive data entry” is an outcome. Opportunity Scoring measures Importance of outcomes, then you design features to deliver those outcomes. This prevents building the wrong solution to the right problem.
Importance is relatively stable. What matters to customers about accomplishing a job doesn’t change weekly—closing books faster will always be important to accountants. This stability means Importance scores don’t churn like market-driven urgency metrics. You can survey once per year and trust the data.
Satisfaction: How Well Are Current Solutions Working?
Satisfaction measures how well existing solutions—including your product, competitors’ products, and manual workarounds—currently deliver the important outcome. It’s scored 1-10 via customer surveys: “How satisfied are you with current solutions for [outcome Y] on a scale of 1-10?”
Low Satisfaction reveals pain points. If customers rate “minimise time spent reconciling transactions” as 9 on Importance but 3 on Satisfaction, they’re spending hours on tedious reconciliation and hating it. That’s an opportunity. If they rate it 9 on Importance and 9 on Satisfaction, they’re happy with existing tools—no opportunity there, even though it’s important.
Satisfaction captures competitive context. You might think your feature is great, but if customers rate Satisfaction at 4, either you haven’t solved it, competitors haven’t solved it, or you’ve solved the wrong aspect of it. Satisfaction is the market’s verdict on whether this problem has been adequately addressed yet.
Satisfaction changes over time as products evolve. Last year’s 3 Satisfaction score might be this year’s 7 if you shipped a killer feature. Continuous monitoring reveals when opportunities close (Satisfaction rises) or open (Satisfaction drops as customer expectations increase or competitors regress).
The Opportunity Scoring Formula in Action
The Opportunity Scoring formula is: Opportunity = Importance + max(Importance - Satisfaction, 0). When Satisfaction falls short of Importance, this works out to 2 × Importance − Satisfaction: Importance is counted twice, then Satisfaction is subtracted. When Satisfaction meets or exceeds Importance, the gap term drops to zero and the score is just Importance. Let’s see how it works.
Consider three objectives based on customer survey data (1-10 scale):
Objective A: Minimise time reconciling transactions
- Importance: 9 (customers say this is critical)
- Satisfaction: 3 (current tools are terrible)
- Opportunity Score: 9 + max(9 - 3, 0) = 9 + 6 = 15
Objective B: Ensure data accuracy during import
- Importance: 10 (customers cannot tolerate errors)
- Satisfaction: 8 (existing tools mostly handle this well)
- Opportunity Score: 10 + max(10 - 8, 0) = 10 + 2 = 12
Objective C: Customise report templates
- Importance: 5 (nice to have but not critical)
- Satisfaction: 6 (current tools adequate)
- Opportunity Score: 5 + max(5 - 6, 0) = 5 + 0 = 5 (no gap; satisfaction exceeds importance)
Objective A wins despite lower absolute Importance (9 vs 10) because the Satisfaction gap is massive. Customers desperately need better reconciliation—it’s important and current solutions suck. Objective B is slightly more important (10) but satisfaction is already high (8), so the opportunity is smaller. Objective C has low importance and adequate satisfaction—no opportunity.
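The three objectives above can be scored and ranked in a short script (objective labels abbreviated for readability):

```python
def opportunity_score(importance, satisfaction):
    """Opportunity = Importance + max(Importance - Satisfaction, 0)."""
    return importance + max(importance - satisfaction, 0)

# (Importance, Satisfaction) pairs from the worked example.
objectives = {
    "A: Minimise reconciliation time": (9, 3),
    "B: Ensure import data accuracy": (10, 8),
    "C: Customise report templates": (5, 6),
}

ranked = sorted(
    ((name, opportunity_score(imp, sat)) for name, (imp, sat) in objectives.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in ranked:
    print(f"{score:>3.0f}  {name}")
# Prints A (15) first, then B (12), then C (5).
```

Sorting by Opportunity rather than raw Importance is exactly what puts Objective A ahead of the nominally more important Objective B.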
This is Opportunity Scoring’s insight: the biggest opportunities aren’t where customers want something most—they’re where customers want something a lot and can’t get it anywhere. You’re not competing to build the most important features; you’re competing to close the widest gaps between importance and satisfaction.
When Opportunity Scoring Is Your Best Weapon
Opportunity Scoring excels in four contexts. First, customer-centric product strategies where user needs drive roadmaps. If your organisation values customer research and designs products around Jobs-to-be-Done, Opportunity Scoring provides quantitative prioritisation aligned with that philosophy. You’re explicitly building what customers need most and lack most.
Second, differentiation plays where you want to outcompete on unmet needs. Competitors build features customers already have (high Importance, high Satisfaction—no opportunity). You find gaps competitors missed (high Importance, low Satisfaction) and own those outcomes. Opportunity Scoring systematically surfaces competitive white space.
Third, B2B SaaS with well-defined customer segments. If you can survey 50+ customers per segment and get statistically meaningful Importance/Satisfaction data, Opportunity Scoring is quantitatively rigorous. Your roadmap reflects actual customer pain, not product manager hunches.
Fourth, reducing feature bloat by killing low-opportunity work. That feature engineering loves? Importance 4, Satisfaction 6—Opportunity score 4. Kill it. That feature sales swears will close deals? Importance 6, Satisfaction 7—Opportunity score 6. Push back. Opportunity Scoring is a defensive weapon against building things customers don’t need.
When Opportunity Scoring Betrays You
Opportunity Scoring collapses in four scenarios. First, when customer research infrastructure is weak. If you can’t survey 30+ customers per segment twice a year, your Importance and Satisfaction scores are anecdotal noise. Opportunity Scoring demands research rigour that many teams lack. Without statistically meaningful data, you’re prioritising based on the opinions of whichever three customers yelled loudest.
Second, when business strategy diverges from customer needs. Opportunity Scoring optimises for customer satisfaction gaps—but what if your business needs profitability, and customers want free features? Or enterprise deals, but consumers dominate your survey sample? Opportunity Scoring is blind to business strategy, revenue models, and unit economics. It’s purely customer-centric, which is perfect until it’s disastrous.
Third, when technical complexity varies wildly. Opportunity Scoring ignores effort. That 15-score opportunity might require three months of work; that 12-score opportunity might take one week. Opportunity Scoring tells you what to build, but not in what order if effort matters. You’ll need to overlay effort considerations or run separate RICE/ICE scoring to sequence high-opportunity objectives.
Fourth, when customer segments conflict. Enterprise customers rate “SSO integration” as Importance 10, Satisfaction 2 (Opportunity 18). Consumer customers rate it Importance 2, Satisfaction 5 (Opportunity 2). Whose opportunity do you optimise for? Opportunity Scoring assumes a homogeneous customer base or requires running separate analyses per segment—adding complexity and creating trade-offs the framework doesn’t resolve.
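Running the SSO example per segment makes the conflict concrete. This sketch simply reports each segment’s score side by side; any blending of segments (say, weighting by revenue) is a judgment call outside the framework itself:

```python
def opportunity_score(importance, satisfaction):
    return importance + max(importance - satisfaction, 0)

# "SSO integration" scored separately per segment (figures from the text).
segments = {"enterprise": (10, 2), "consumer": (2, 5)}

scores = {seg: opportunity_score(imp, sat) for seg, (imp, sat) in segments.items()}
print(scores)  # {'enterprise': 18, 'consumer': 2}
```

A single pooled survey would average these into a misleading middle score, which is why segment-level analysis is non-negotiable when segments diverge.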
Practical Implementation
Start by defining your customer segments clearly. Are you scoring for all customers, or specific personas, or job contexts? “All SaaS users” is too broad. “Finance managers at Series A startups closing monthly books” is specific enough to get meaningful data.
Design your survey around outcomes, not features. For each objective on your backlog, identify the underlying outcome it delivers. “Build keyboard shortcuts” becomes “Minimise time spent on repetitive data entry.” Survey customers: “When [doing job X], how important is [outcome Y]? (1-10)” and “How satisfied are you with current solutions for [outcome Y]? (1-10).”
Survey 30-50+ customers per segment for statistical validity. Fewer than 30 responses and you’re guessing; 100 or more gives you high-confidence data. Send the survey quarterly or twice a year depending on product velocity. Importance is stable; Satisfaction shifts as your product and competitors evolve.
Calculate Opportunity scores using Importance + max(Importance - Satisfaction, 0). Sort by Opportunity score. Anything scoring above 12 is a strong opportunity (high importance with meaningful gap). Anything scoring below 8 is low opportunity—either not important, or already well-satisfied.
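The full pipeline, from raw survey responses to a verdict, can be sketched as follows. The response lists are hypothetical, and the thresholds are the ones from this section (above 12 strong, below 8 low):

```python
from statistics import mean

def opportunity_score(importance, satisfaction):
    """Opportunity = Importance + max(Importance - Satisfaction, 0)."""
    return importance + max(importance - satisfaction, 0)

# Hypothetical raw 1-10 survey responses for one outcome.
importance_responses = [9, 8, 10, 9, 9]
satisfaction_responses = [3, 4, 2, 3, 3]

imp = mean(importance_responses)      # average Importance across respondents
sat = mean(satisfaction_responses)    # average Satisfaction across respondents
score = opportunity_score(imp, sat)

# Thresholds from the text: >12 strong opportunity, <8 low opportunity.
if score > 12:
    verdict = "strong opportunity"
elif score < 8:
    verdict = "low opportunity"
else:
    verdict = "moderate opportunity"
print(f"{score:.1f}: {verdict}")  # 15.0: strong opportunity
```

Averaging per outcome before applying the formula (rather than scoring each respondent and averaging scores) matches how the survey questions are framed, though either aggregation is defensible.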
Overlay effort considerations if needed. You might fund a 13-score opportunity requiring one month before a 16-score opportunity requiring six months. Opportunity Scoring identifies what matters; you still need judgment about sequencing based on effort, dependencies, and strategy.
Present Opportunity scores alongside tag distributions in RoadmapOne. Opportunity ranking shows customer-driven priorities. Tag heatmaps show whether those priorities form a balanced strategic portfolio. If your top 10 Opportunity scores are all “Quick Wins” (low complexity), great. If they’re all “Major Projects,” you’re facing a tough sequencing decision.
Re-survey annually or when your product evolves significantly. As you ship features closing Satisfaction gaps, Opportunity scores drop—that’s success. New opportunities emerge as customer expectations rise or market context shifts. Continuous Opportunity scoring keeps your roadmap aligned with current customer pain, not last year’s research.
Opportunity Scoring and Customer Truth
Opportunity Scoring is prioritisation for teams who trust customer research more than product manager intuition. It forces roadmaps to address what customers need most and have least—surfacing competitive differentiation opportunities that gut-feel prioritisation misses.
But Opportunity Scoring is only as good as your research. Survey bad questions, get bad scores. Survey the wrong customers, optimise for the wrong segment. Ignore business strategy, build features customers love that bankrupt the company. And if you can’t survey statistically meaningful samples, Opportunity Scoring is expensive theatre producing unreliable scores.
RoadmapOne makes Opportunity Scoring visible at portfolio scale. Survey customers, calculate Opportunity scores, and fund high-opportunity objectives. When stakeholders argue for their pet features, you don’t debate—you point at customer data and say “This scores 6 on Opportunity. Customers neither value it highly nor lack it. Here’s what scores 15.”
Opportunity Scoring won’t fix weak customer research, unclear segments, or strategy-customer misalignment. But when you have solid research and customer-centric strategy, it quantifies what customers are begging you to build—and what they’ll happily ignore. Build the former, kill the latter, and watch satisfaction and retention climb.
Your roadmap should solve customer problems, not guess at them. Opportunity Scoring measures the gaps between what matters and what satisfies. Close the biggest gaps first, and customers will reward you with loyalty competitors can’t match.
For more on Objective Prioritisation frameworks, see our comprehensive guide.