Stack Ranking: The Prioritisation Panacea That Never Works
Why 'Just Put Them in Order' Ignores Reality
This is one of RoadmapOne’s articles on Objective Prioritisation frameworks.
Stack ranking feels like the answer to all prioritisation problems. Take your initiatives, force-rank them from 1 to N, layer them onto the roadmap in order, and you’re done. Clean. Simple. Decisive.
I’ve never once seen it work in practice.
The appeal is obvious. Stack ranking promises to cut through endless debates about relative priority. No more “these are both high priority” hedging. No more tied scores in your RICE calculations. Just a clear sequence: this, then this, then this. Leadership loves it because it looks decisive. Consultants love it because it fills workshop time. Everyone loves it until they try to execute.
Stack ranking—forcing initiatives into a strict 1-to-N order—sounds like prioritisation nirvana but fails in practice. It ignores capacity constraints, dependencies between initiatives, and the fact that teams aren’t interchangeable. Debates about whether A ranks above B consume hours without adding value. Often the item ranked last must be implemented before the item ranked first. Use scoring frameworks like RICE or BRICE that account for reality, not forced rankings that pretend reality doesn’t exist.
Why Stack Ranking Fails
Nobody Can Agree on Order
The first problem is practical: nobody can agree whether A is more important than B.
Stack ranking assumes that importance is a single dimension you can order. But initiative A might have higher impact, initiative B might be more urgent, and initiative C might be strategically important for a key partnership. Which ranks higher? The debate consumes hours. People defend their preferred initiatives with passion inversely proportional to the quality of their arguments.
Scoring frameworks like RICE handle this by making dimensions explicit. You can disagree about whether Impact is “Massive” or “High”—but at least you’re disagreeing about something specific. Stack ranking forces you to collapse multiple dimensions into a single order without making the trade-offs visible.
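To make the contrast concrete, here is a minimal sketch of the standard RICE formula—(Reach × Impact × Confidence) ÷ Effort—with illustrative initiative values invented for this example:

```python
def rice_score(reach, impact, confidence, effort):
    """RICE = (Reach * Impact * Confidence) / Effort.

    reach: people affected per quarter
    impact: 0.25 (minimal) to 3 (massive)
    confidence: 0.0 to 1.0
    effort: person-months
    """
    return (reach * impact * confidence) / effort

# Hypothetical initiatives — each dimension is explicit and debatable
premium_tier = rice_score(reach=8000, impact=2.0, confidence=0.8, effort=6)
mobile_redesign = rice_score(reach=20000, impact=1.0, confidence=0.5, effort=4)
```

When someone disputes a score, they must dispute a specific dimension—Reach, Impact, Confidence, or Effort—rather than an undifferentiated gut feeling about rank.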
Artificial Distinctions Waste Time
Is initiative #7 really different from initiative #8?
Stack ranking forces distinctions that don’t exist. When you have 40 initiatives, the difference between position 15 and position 18 is meaningless noise. Yet stack ranking demands you decide. Teams spend hours debating positions that make no difference to execution.
Scoring frameworks allow ties. Initiatives 15-20 might all score 85-90—effectively equivalent priority, to be sequenced based on dependencies and capacity. Stack ranking forbids this pragmatic ambiguity and demands false precision.
It Ignores Capacity
Here’s where stack ranking really breaks: it’s divorced from capacity.
You rank 40 initiatives. Beautiful. Now what? Your teams have finite capacity. Maybe you can fund 15 initiatives this quarter. Stack ranking says “do 1-15, defer 16-40.” But what if initiatives 1-5 all require your platform team, who can only handle 2? What if initiative #3 requires skills that only exist in Squad B, who are already committed to initiative #1?
Stack ranking assumes capacity is infinite and fungible. Reality isn’t. Capacity-based planning exists precisely because sequencing without capacity constraints is fantasy.
It Ignores Dependencies
Often the thing ranked at the bottom has to be implemented before you can do the thing ranked at the top.
Initiative #1: “Launch premium tier with advanced analytics.” Initiative #37: “Refactor data pipeline to support analytics queries.”
Stack ranking says do #1 first. Reality says you can’t. The dependency graph doesn’t care about your ranking. You must do #37 before #1 regardless of where they sit in your beautifully ordered list.
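The executable order comes from the dependency graph, not the ranking. A topological sort over a toy version of the example above (initiative names are illustrative) shows the prerequisite surfacing first no matter where it was ranked:

```python
from graphlib import TopologicalSorter

# Hypothetical dependency map: initiative -> set of prerequisites
deps = {
    "#1 Launch premium tier": {"#37 Refactor data pipeline"},
    "#37 Refactor data pipeline": set(),
}

# static_order() yields prerequisites before dependents
order = list(TopologicalSorter(deps).static_order())
```

The item ranked #37 must come out of the sorter before the item ranked #1; the ranking numbers play no part in the result.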
WSJF and Cost of Delay at least account for dependencies through job duration and sequencing logic. Stack ranking pretends dependencies don’t exist.
It Assumes Fungible Teams
Stack ranking assumes all teams are interchangeable. Need to do initiative #5? Just assign a team. But teams aren’t fungible—at least not in the timeframe you’re building your roadmap.
Squad A has deep payment systems expertise. Squad B owns the mobile app. Squad C specialises in data infrastructure. In the medium term, yes, you can cross-train and shift focus. In the quarter you’re planning? Teams have specialisations, context, and existing commitments that constrain what they can realistically deliver.
Stack ranking ignores this. It produces a beautiful ordered list that no actual team structure can execute.
When Stack Ranking Appears to Work
Stack ranking sometimes appears to work in narrow contexts:
Very small lists. When you have 5-7 initiatives and need to pick 3, forced ranking can drive decision-making. The artificial precision matters less when the list is short enough for everyone to hold in their heads.
Executive triage. When leadership needs to make a quick call—“If we can only do one thing this quarter, what is it?”—stack ranking forces the conversation. It’s a forcing function, not a planning methodology.
Breaking ties. When RICE scores are identical and you genuinely need to sequence, forced ranking resolves the deadlock. But this is tie-breaking, not prioritisation.
Even in these cases, you’re not using stack ranking as a prioritisation framework. You’re using it as a decision-forcing technique for already-filtered options.
The Deeper Problem: Art vs Science
Stack ranking appeals because it promises to make prioritisation scientific. Objective. Mathematical. Just order the list and execute.
But roadmaps are as much art as science.
The “right” sequence depends on market timing, team morale, stakeholder relationships, learning curves, and a hundred other factors that don’t fit in a ranking algorithm. Sometimes you do initiative #8 before initiative #3 because #8 builds team confidence for the harder work ahead. Sometimes you defer the highest-ranked item because the market isn’t ready.
Good product leaders develop judgment about sequencing that transcends any framework. Stack ranking pretends this judgment is unnecessary—that you can mechanically derive sequence from importance. You can’t.
What to Use Instead
Scoring Frameworks
RICE, BRICE, ICE, and PIE produce scores that allow ties and natural groupings. Initiatives scoring 80-90 are “Tier 1.” Initiatives scoring 60-70 are “Tier 2.” Within tiers, sequence based on dependencies and capacity—not artificial forced rankings.
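Tiering is trivial to mechanise. A sketch using the thresholds above, with made-up scores:

```python
def tier(score):
    """Map a framework score to a tier band; ties within a band are fine."""
    if score >= 80:
        return "Tier 1"
    if score >= 60:
        return "Tier 2"
    return "Backlog"

# Illustrative scores — note two initiatives happily share Tier 1
scores = {"Premium tier": 88, "Checkout revamp": 84, "Mobile redesign": 65}

tiers = {}
for name, score in scores.items():
    tiers.setdefault(tier(score), []).append(name)
```

Within a band, nothing forces you to decide whether 88 genuinely outranks 84; sequencing is deferred to dependencies and capacity.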
Value/Complexity Quadrants
Value vs Complexity produces four buckets rather than 40 positions. “Quick Wins” (high value, low complexity) go first. “Major Projects” (high value, high complexity) need careful planning. Buckets are honest about uncertainty; forced rankings pretend certainty you don’t have.
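The quadrant assignment is a two-axis classification, nothing more. A sketch, assuming a 1-10 scale with an arbitrary midpoint of 5 and the common quadrant names (the fourth label varies by source):

```python
def quadrant(value, complexity, midpoint=5):
    """Classify an initiative on a value/complexity grid (1-10 scales assumed)."""
    high_value = value >= midpoint
    low_complexity = complexity < midpoint
    if high_value and low_complexity:
        return "Quick Win"
    if high_value:
        return "Major Project"
    if low_complexity:
        return "Fill-In"
    return "Time Sink"
```

Four coarse buckets deliberately refuse the 40-position precision that stack ranking demands.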
Capacity-Based Planning
Build your roadmap from capacity constraints, not abstract rankings. “We have 6 squads and 12 sprints. What can we actually deliver?” Then use scoring frameworks to decide which initiatives fill that capacity. Capacity is the constraint; scoring is the selection criterion.
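One simple way to operationalise this is a greedy fill: take initiatives in score order, but admit each only if the squad it needs still has sprints available. Squad names, sprint counts, and scores below are invented for illustration:

```python
# Sprints remaining per squad this quarter (hypothetical)
capacity = {"Squad A": 12, "Squad B": 12}

# (name, required squad, sprints needed, framework score) — illustrative
initiatives = [
    ("Premium tier", "Squad A", 8, 92),
    ("Checkout revamp", "Squad A", 6, 88),
    ("Mobile redesign", "Squad B", 10, 75),
]

roadmap = []
for name, squad, sprints, score in sorted(initiatives, key=lambda i: -i[3]):
    if capacity[squad] >= sprints:  # only commit what the squad can absorb
        capacity[squad] -= sprints
        roadmap.append(name)
```

Note the second-highest scorer drops out: Squad A’s remaining 4 sprints can’t absorb it. A pure rank-order plan would have blindly scheduled it anyway.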
Cost of Delay with Dependencies
Cost of Delay and WSJF explicitly model urgency and duration. When dependencies matter—and they always do—these frameworks account for sequencing realities that stack ranking ignores.
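WSJF makes the urgency/duration trade-off explicit: it is simply Cost of Delay divided by job duration, so a short, moderately urgent job can outrank a long, high-stakes one. A minimal sketch with illustrative numbers:

```python
def wsjf(cost_of_delay, job_duration):
    """Weighted Shortest Job First = Cost of Delay / job duration."""
    return cost_of_delay / job_duration

# A 2-sprint job with CoD 8 beats a 10-sprint job with CoD 20:
quick_fix = wsjf(cost_of_delay=8, job_duration=2)    # 4.0
big_bet = wsjf(cost_of_delay=20, job_duration=10)    # 2.0
```

Duration in the denominator is what stack ranking throws away: two items with identical “importance” can have wildly different economic sequencing.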
The Bottom Line
Stack ranking is a panacea that never delivers. It promises clean prioritisation but ignores capacity, dependencies, team skills, and reality. The debates it generates—“Is A above B?”—consume hours without adding value. And often the item ranked last must be implemented before the item ranked first, making the entire ranking useless.
Roadmaps are as much art as science. Good sequencing requires judgment, not just ordering. Use scoring frameworks that produce tiers and allow ties. Plan from capacity constraints. Model dependencies explicitly. And treat stack ranking as what it is: a decision-forcing technique for small lists, not a prioritisation methodology for real roadmaps.
References
- RICE Prioritisation — Scoring framework with explicit dimensions
- BRICE Prioritisation — RICE with strategic alignment
- Value vs Complexity — Quadrant-based prioritisation
- Cost of Delay — Urgency and dependency modelling
- Capacity-Based Planning — Planning from constraints
- Objective Prioritisation Frameworks — Complete guide to all frameworks