
Dot Voting Has No Business Near Your Roadmap

Democratic Theatre Belongs in Retros, Not Prioritisation

(updated Jan 24, 2026)

This is one of RoadmapOne’s articles on Objective Prioritisation frameworks.

Give everyone three dots. Stick them on the options you prefer. Count the dots. Most dots wins.

You’ve seen this in every design sprint, every retro, every “collaborative” workshop. Dot voting feels inclusive. It feels democratic. It feels like everyone’s voice matters equally.

Here’s the problem: a benevolent dictatorship tends to be a better approach to product management than outright democracy.

Dot voting compresses complex prioritisation decisions—business value, strategic alignment, effort, reach, confidence—into “how many people liked this option.” That’s not prioritisation. It’s a popularity contest.

TL;DR

Dot Voting is fine for low-stakes facilitation: which retro items to discuss, which logo option people prefer, which lunch spot to choose. For objective prioritisation? It’s theatre. Dots are unweighted—a junior designer’s dot counts the same as the CEO’s. There’s no effort dimension—a 12-month project competes equally with a 2-week win. And clustering behaviour produces false consensus that ignores controversial but important work. Keep Dot Voting in retros and design sprints. Keep it away from your roadmap.

How Dot Voting Works

The mechanics are almost insultingly simple:

  1. List options on a board. Post-its, whiteboard sections, cards—the format doesn’t matter.

  2. Give participants dots. Usually 3-5 dots per person, depending on how many options exist.

  3. Vote simultaneously. Participants place dots on their preferred options. They can put multiple dots on one option if they feel strongly.

  4. Count and rank. Most dots wins. Decision made.

The appeal is obvious: it’s fast, participatory, and produces a clear outcome. Everyone gets input. The process feels fair.
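The whole mechanism really is that simple. A minimal sketch in Python (participants, options, and votes are invented for illustration):

```python
from collections import Counter

def dot_vote(votes):
    """Tally dot votes: each vote is a (participant, option) pair.
    Most dots wins; ties stay unresolved, just like the real exercise."""
    counts = Counter(option for _, option in votes)
    return counts.most_common()

# Three participants, three dots each; multiple dots on one option allowed.
votes = [
    ("ana", "dark mode"), ("ana", "dark mode"), ("ana", "exports"),
    ("ben", "dark mode"), ("ben", "sso"),       ("ben", "exports"),
    ("cam", "dark mode"), ("cam", "sso"),       ("cam", "sso"),
]
print(dot_vote(votes))  # → [('dark mode', 4), ('sso', 3), ('exports', 2)]
```

Note what the output contains: a count per option and nothing else. Every concern raised below follows from that.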

Why Teams Reach for Dot Voting

Dot voting solves a facilitation problem: how do you get a group to converge on a decision without endless debate?

In the 1990s, I worked with a leader who obsessively used this tool. It worked initially—it got small teams to have conversations they wouldn’t otherwise have. People who’d normally stay silent placed dots and suddenly had skin in the game.

But facilitation success doesn’t mean prioritisation success. Just because a technique helps a workshop run smoothly doesn’t mean it produces good roadmap decisions.

When Dot Voting Works

Let me be fair: dot voting has its place.

Low-Stakes Facilitation

“Which three retro items should we discuss?” Perfect for dot voting. The stakes are low, the options are roughly equivalent, and you just need to pick something so the meeting can progress.

“Which logo option do we prefer?” Also fine. Design preference across a small team, no strategic implications, move on with your day.

“What should we order for lunch?” Ideal use case.

Ideation Convergence

In design sprints, you generate dozens of ideas then need to converge. Dot voting helps the group narrow from 30 concepts to 5 concepts that deserve deeper exploration. The output isn’t a final decision—it’s a shortlist for further evaluation.

This works because you’re not actually prioritising. You’re filtering. The dots identify what’s worth discussing, not what’s worth building.

Breaking Deadlocks

When a team is stuck in circular debate, dot voting can break the impasse. “We’ve been arguing about this for 40 minutes. Let’s vote and move on.” Sometimes you need a decision more than you need the optimal decision.

When Dot Voting Fails

All Votes Are Equal (When They Shouldn’t Be)

A junior designer’s dot counts the same as the CEO’s. A new hire’s dot counts the same as the architect who understands technical dependencies. The PM with deep customer research has the same voting power as the PM who joined last week.

This sounds democratic, but it ignores that some people have more context than others. The architect might know that Option A creates six months of technical debt. The customer researcher might know that Option B solves a problem customers don’t actually have. Their dots can’t carry that information.

Contrast this with RICE or BRICE, where the scoring dimensions—reach, impact, confidence, effort—can be informed by whoever has the relevant expertise. The engineer informs effort estimates. The PM informs reach data. The framework captures structured input; dot voting captures undifferentiated preference.

No Effort Dimension

A 12-month project requiring three squads competes equally with a 2-week project requiring one engineer. Both get judged on “do people want this?” rather than “is this worth the investment?”

A feature that 8 people dot-vote might deliver marginal value at massive cost. A feature that 2 people dot-vote might be a quick win that unblocks £500K in pipeline. Dot voting can’t distinguish these cases.

Every serious prioritisation framework divides by effort for exactly this reason. Value matters. But value per unit of effort is what determines whether something deserves roadmap space.
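You can see the reordering with two lines of arithmetic. The dot counts and effort figures below are invented, but they mirror the scenario above:

```python
# Illustrative only: raw dot counts vs an effort-normalised score.
options = {
    # option: (dots, effort_in_weeks)
    "big platform rebuild": (8, 52),  # popular, roughly 12 months of work
    "pipeline quick win":   (2, 2),   # unloved, a 2-week project
}

by_dots = sorted(options, key=lambda o: -options[o][0])
by_value_per_effort = sorted(options, key=lambda o: -(options[o][0] / options[o][1]))

print(by_dots)              # rebuild ranks first on raw popularity
print(by_value_per_effort)  # quick win ranks first: 1.0 vs ~0.15 dots/week
```

Same inputs, opposite rankings. Dot voting only ever gives you the first ordering.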

Clustering and False Consensus

Watch what happens when dot voting runs in public: people cluster.

Early votes anchor later votes. Nobody wants to be the lone dot on an unpopular option. Participants watch where dots accumulate and pile on to the “winning” options. The result looks like strong consensus, but it’s actually herding behaviour.

Controversial options get avoided. That infrastructure investment that’s genuinely important but boring? Zero dots. That bold strategic bet that the CEO mentioned she liked? Cluster of dots from people reading the room.

You end up with safe, uncontroversial choices—which are often exactly the wrong choices for a roadmap trying to create competitive advantage.

Political Bloc Voting

In cross-functional groups, teams vote as blocs:

  • The sales team dots the features their biggest prospects want
  • Engineering dots the technical debt items
  • Marketing dots the launch features that make their campaigns easier
  • Everyone dots the CEO’s pet project to stay in favour

The “winning” options reflect political coalition strength, not business value. The loudest or largest constituency wins, regardless of whether their priorities serve the overall business.

The Benevolent Dictatorship Alternative

Here’s an uncomfortable truth: product prioritisation isn’t supposed to be democratic.

Democracy works when all participants are equally affected by the outcome and have equal information. Product prioritisation fails both tests. The CEO has different context than the junior PM. The customer researcher has data that Engineering doesn’t. Strategic priorities known to leadership aren’t visible to everyone placing dots.

A benevolent dictatorship—where an informed product leader makes the call, explains the reasoning, and takes accountability for outcomes—usually produces better roadmaps than majority voting.

This doesn’t mean ignoring input. It means:

  • Gathering structured input through prioritisation frameworks
  • Synthesising that input with strategic context
  • Making a decision and owning it
  • Explaining the reasoning so others understand even if they disagree

Dot voting abdicates this responsibility. It lets leaders hide behind “the team decided” rather than owning the prioritisation rationale.

Dot Voting vs 100 Dollar Test

If you need participatory input for alignment purposes, the 100 Dollar Test is better than dot voting.

  • Conviction visibility: Dot Voting is binary (dot or no dot); the 100 Dollar Test is granular (£5 vs £50 allocation).
  • Trade-off forcing: Dot Voting is weak (3-5 dots feels abundant); the 100 Dollar Test is strong (£100 must be allocated).
  • Clustering risk: Dot Voting is high (public placement); the 100 Dollar Test is lower (private allocation possible).
  • Information density: Dot Voting is low (just counts); the 100 Dollar Test is higher (allocation patterns reveal preference strength).

The 100 Dollar Test forces real trade-offs (you can’t put £100 on everything) and reveals conviction (£60 on one option vs £15 spread across four). Dot voting tells you what people prefer; the 100 Dollar Test tells you how strongly.

Neither is actual prioritisation—they’re both preference-surfacing tools—but if you must use one, the 100 Dollar Test produces more useful information.
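The extra information is visible in the data structure alone. A sketch with invented private allocations:

```python
# Hypothetical private £100 allocations; participants and options invented.
allocations = {
    "ana": {"sso": 60, "exports": 25, "dark mode": 15},
    "ben": {"sso": 50, "dark mode": 50},
    "cam": {"exports": 70, "sso": 20, "dark mode": 10},
}

totals = {}
for person, spend in allocations.items():
    assert sum(spend.values()) == 100  # the trade-off is forced
    for option, amount in spend.items():
        totals[option] = totals.get(option, 0) + amount

print(sorted(totals.items(), key=lambda kv: -kv[1]))
# → [('sso', 130), ('exports', 95), ('dark mode', 75)]
```

Unlike a dot tally, the raw allocations also survive: you can see that ana's £60 on sso is strong conviction while ben is genuinely torn, which no count of dots can express.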

What to Use Instead

For actual roadmap prioritisation, use frameworks that account for the dimensions that matter:

RICE — Reach × Impact × Confidence ÷ Effort. Works when you have data on how many users are affected and can estimate impact magnitude.

BRICE — Adds Business Importance to RICE. Use when you need strategic alignment on top of user-level impact.

ICE — Impact × Confidence × Ease. Faster than RICE, good for startups moving quickly with incomplete data.

PIE — Potential × Importance × Ease. Designed for growth experiments and A/B test prioritisation.

These frameworks produce scores you can rank, discuss, and defend. They make the reasoning visible. When someone challenges a prioritisation decision, you can point at the inputs and have a productive conversation about whether Reach is really 5,000 users or whether Effort is really Medium.
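The RICE arithmetic itself is trivial; what matters is that every input is a defensible claim someone can challenge. A sketch with illustrative numbers (not real data):

```python
def rice(reach, impact, confidence, effort):
    """RICE score: Reach × Impact × Confidence ÷ Effort.
    reach: users affected per period; impact: scored on a scale
    (e.g. 0.25 minimal to 3 massive); confidence: 0-1; effort: person-months."""
    return reach * impact * confidence / effort

# Illustrative inputs only.
features = {
    "in-app onboarding": rice(5000, 2.0, 0.8, 4),   # 2000.0
    "csv export":        rice(800, 1.0, 1.0, 0.5),  # 1600.0
}
for name, score in sorted(features.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.0f}")
```

If someone disputes the ranking, the argument lands on a specific input ("is reach really 5,000?") rather than on who placed more stickers.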

Dot voting gives you a number with no reasoning attached. “This option got 12 dots” tells you nothing about why it should get roadmap space.

Practical Guidelines

If you’re facilitating a workshop and tempted to use dot voting:

Ask: is this actually a prioritisation decision? If you’re prioritising work that will consume engineering capacity and affect roadmap commitments, don’t dot vote. Use a real framework.

Ask: are the options roughly equivalent in effort? If one option is a 2-week project and another is a 6-month initiative, dot voting will produce nonsense. The 6-month initiative needs 10× more dots to be worth the investment, and voters don’t think that way.

Ask: does everyone have equal context? If some participants know things others don’t—customer research, technical constraints, strategic priorities—their information gets lost in undifferentiated dots.

If you must dot vote: Use it to shortlist, not to decide. “These 5 options got the most dots—now let’s RICE-score them to determine actual priority.” Dot voting as filter; framework as decision.

The Bottom Line

Dot voting is democratic theatre that produces popularity rankings, not priority rankings.

Use it for retros, design sprint convergence, and low-stakes facilitation where you need a quick decision and the quality of that decision doesn’t much matter.

Keep it away from your roadmap. Objective prioritisation requires weighing business value, strategic alignment, effort, reach, and confidence. Dot voting compresses all of that into “how many people put a sticker here”—which tells you nothing useful about what deserves your team’s capacity.

A benevolent dictatorship—an informed product leader making reasoned decisions with transparent frameworks—beats democracy every time. Own your prioritisation. Show your reasoning. Use the dots for choosing lunch.
