
The 100 Dollar Test Is an Alignment Tool Disguised as Prioritisation

Use It for Buy-In, Not Backlogs

(updated Jan 24, 2026)

This is one of RoadmapOne’s articles on Objective Prioritisation frameworks.

Hand everyone in the room £100 in fake money. List the objectives on the wall. Tell them to allocate their budget across the options—spend more on what matters most, less on what doesn’t. Collect the allocations. Sum the totals. The highest-funded objectives are your priorities.

Simple, democratic, engaging. Also: not really prioritisation.

The 100 Dollar Test surfaces preferences. It forces trade-offs. It creates conversations that wouldn’t happen in a standard backlog grooming session. But it doesn’t tell you whether the crowd’s preferences align with business value, strategic goals, or delivery reality.

TL;DR

The 100 Dollar Test is useful for stakeholder workshops when you need to surface disagreement, force trade-offs, and create buy-in. The allocation patterns reveal more than simple voting—someone putting £60 on one item tells you something different than someone spreading £10 across six. But treat the output as input to a conversation, not a definitive priority ranking. The exercise builds alignment; actual prioritisation still needs frameworks like RICE or BRICE that account for effort, reach, and confidence.

How the 100 Dollar Test Works

The mechanics are straightforward:

  1. List the options. Write each objective, feature, or initiative on a card or whiteboard section. Typically 8-15 items—enough for meaningful choice, not so many that allocation becomes overwhelming.

  2. Distribute the budget. Give each participant £100 (or $100, or 100 points—the currency doesn’t matter, the constraint does). Participants can allocate in any denomination: £50 on one item and £10 on five others, or £10 across ten items.

  3. Collect allocations privately. This is critical. If participants see others’ allocations before committing, anchoring bias takes over. Collect simultaneously or privately.

  4. Sum the totals. Add up how much each objective received across all participants.

  5. Discuss the results. This is where the real value lies—not the numbers themselves, but the conversation about why people allocated the way they did.
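The collect-and-sum steps are simple enough to sketch in a few lines. The participants and objectives below are hypothetical, invented purely to illustrate the tally:

```python
# Hypothetical workshop data: each participant's £100 split across objectives.
allocations = {
    "CEO": {"Enterprise Features": 40, "Platform Stability": 30, "Mobile App": 30},
    "CRO": {"Enterprise Features": 60, "Mobile App": 40},
    "CPO": {"Platform Stability": 50, "Enterprise Features": 20, "Mobile App": 30},
}

# Sanity-check the constraint: every budget must sum to exactly 100.
for person, spend in allocations.items():
    assert sum(spend.values()) == 100, f"{person} did not spend exactly £100"

# Step 4: sum the totals per objective across all participants.
totals = {}
for spend in allocations.values():
    for objective, amount in spend.items():
        totals[objective] = totals.get(objective, 0) + amount

# Rank by total funding, highest first.
ranking = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
print(ranking)
# → [('Enterprise Features', 120), ('Mobile App', 100), ('Platform Stability', 80)]
```

The assertion matters in practice too: allocations that don't sum to the budget are the most common data-entry error when collecting on paper slips.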

What the Allocation Patterns Reveal

The spread matters as much as the totals. Consider two participants who both think Objective A is important:

  • Participant 1: £60 on Objective A, £10 spread across four others
  • Participant 2: £25 on Objective A, £25 on Objective B, £25 on Objective C, £25 on Objective D

Same “vote” for Objective A. Completely different conviction level. Participant 1 has a clear priority. Participant 2 is hedging—they think A matters, but not dramatically more than B, C, or D.

This granularity is what makes the 100 Dollar Test more useful than simple voting. You don’t just see what people prefer—you see how strongly they prefer it.
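If you want to put a number on that conviction difference, one option (an illustration, not part of the standard method) is a Herfindahl-style concentration score over each participant's allocation:

```python
# Two participants who both "vote" for Objective A, with different conviction.
p1 = {"A": 60, "B": 10, "C": 10, "D": 10, "E": 10}
p2 = {"A": 25, "B": 25, "C": 25, "D": 25}

def concentration(spend):
    """Herfindahl-style concentration of a budget allocation.

    Returns 1.0 if everything went on one item, and 1/n if the budget
    was spread evenly across n items.
    """
    total = sum(spend.values())
    return sum((v / total) ** 2 for v in spend.values())

print(round(concentration(p1), 2))  # → 0.4  (clear priority)
print(round(concentration(p2), 2))  # → 0.25 (hedging evenly across four)
```

A high score flags participants with a clear priority; a score near 1/n flags hedgers, which is useful to know before the debrief.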

Where the 100 Dollar Test Shines

Surfacing Disagreement

The most valuable output is often discovering that leadership isn’t aligned. You assumed everyone agreed that “Enterprise Features” was the top priority. The 100 Dollar Test reveals that the CEO allocated £40 to Enterprise, the CRO allocated £60 to Enterprise, but the CPO allocated £50 to Platform Stability and only £20 to Enterprise.

That disagreement would have festered silently, emerging as friction when roadmap decisions get made. The 100 Dollar Test makes it visible and discussable before you’ve committed resources.
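Disagreement like this is easy to surface mechanically: an objective with a healthy total but a high spread across participants is exactly the kind of hidden conflict the exercise exists to expose. A minimal sketch, using invented numbers:

```python
from statistics import pstdev

# Hypothetical allocations, grouped per objective rather than per participant.
allocations = {
    "Enterprise Features": {"CEO": 40, "CRO": 60, "CPO": 20},
    "Mobile App": {"CEO": 30, "CRO": 30, "CPO": 30},
}

# A high spread means leadership is not aligned on that objective,
# even when the total looks healthy.
for objective, spend in allocations.items():
    total = sum(spend.values())
    spread = pstdev(spend.values())
    print(f"{objective}: total £{total}, spread {spread:.1f}")
# → Enterprise Features: total £120, spread 16.3
# → Mobile App: total £90, spread 0.0
```

Here "Enterprise Features" wins on total but carries real disagreement; "Mobile App" scores lower but leadership is unanimous about it. Both facts belong in the debrief.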

Forcing Trade-Offs

In normal backlog conversations, everyone can claim their priority is critical. There’s no forcing function that requires choosing between competing demands.

The fixed £100 budget changes the dynamic. You cannot allocate £100 to everything. If you want £40 on Objective A, you have £60 left for everything else. The constraint makes trade-offs explicit.

This is especially valuable with stakeholders who habitually demand everything. “I need Feature X AND Feature Y AND Feature Z, all by Q2.” Give them £100 and watch them actually rank their demands.

Creating Buy-In

When stakeholders participate in prioritisation, they’re more likely to accept the outcome—even if their preferred objective doesn’t win. The exercise feels fair. Everyone had the same budget. The process was transparent.

This buy-in is often more valuable than the ranking itself. A roadmap with stakeholder commitment to the priorities delivers better than a “correct” roadmap that everyone resents.

Quick Read on Internal Priorities

Unlike Buy a Feature, the 100 Dollar Test doesn’t require pricing features based on effort. You can run it in 15-20 minutes rather than a full workshop. When you need a fast read on where a leadership team actually stands, it’s efficient.

Where the 100 Dollar Test Fails

It’s Not Actually Prioritisation

The 100 Dollar Test measures preference, not value. A feature that’s popular with stakeholders isn’t necessarily high-impact, low-effort, or strategically aligned.

Consider:

  • The sales team votes as a bloc for features their biggest prospect wants
  • The CEO’s pet project gets disproportionate allocation from people wanting to stay in favour
  • The technically important but unsexy infrastructure work gets minimal allocation

The output reflects political coalitions and personal preferences. It tells you nothing about reach, impact, confidence, or effort—the dimensions that actually determine whether an objective deserves capacity.

HiPPO Anchoring

If senior people allocate first, or discuss their thinking before others allocate, anchoring bias distorts results. Participants consciously or unconsciously adjust toward the boss’s position.

Private, simultaneous collection mitigates this—but in practice, it’s hard to enforce. The most senior person often shares “preliminary thoughts” before the exercise, and suddenly everyone’s allocation looks remarkably similar.

Gaming Is Easy

Once participants understand the mechanics, gaming is straightforward:

  • Allocate nothing to objectives you want killed (even if they have merit)
  • Pile money on your team’s objectives (even if others would deliver more value)
  • Form coalitions before the exercise to coordinate allocation

The 100 Dollar Test works best with groups who are genuinely trying to find the right answer. It works poorly with groups who are trying to win resources for their fiefdoms.

No Connection to Delivery Reality

An objective might get £400 in total allocation, making it the clear “winner.” But if it requires 12 months of engineering effort while the #2 objective requires 2 months and delivers 80% of the value, the 100 Dollar Test has led you astray.

Prioritisation frameworks like RICE divide by effort for exactly this reason. The 100 Dollar Test treats all objectives as if they cost the same to deliver, which they never do.
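The effort problem from the example above is easy to make concrete with a minimal RICE sketch (reach × impact × confidence ÷ effort). The specific numbers are hypothetical, chosen to mirror the 12-month-winner vs 2-month-runner-up scenario:

```python
def rice(reach, impact, confidence, effort_months):
    """RICE score: reach x impact x confidence, divided by effort."""
    return reach * impact * confidence / effort_months

# The crowd's £400 favourite: big, but a year of engineering.
winner = rice(reach=5000, impact=2.0, confidence=0.8, effort_months=12)

# The runner-up: ~80% of the value at a sixth of the effort.
runner_up = rice(reach=4000, impact=2.0, confidence=0.8, effort_months=2)

print(round(winner), round(runner_up))  # → 667 3200
```

Dividing by effort flips the ranking: the runner-up scores almost five times higher, which is precisely the signal a pure preference vote cannot produce.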

100 Dollar Test vs Buy a Feature

The 100 Dollar Test is often confused with Buy a Feature. They’re cousins but serve different purposes.

Dimension | 100 Dollar Test | Buy a Feature
Feature pricing | All features equal | Features priced by effort
Pooling money | Individual allocation | Participants can pool to buy expensive items
Best for | Internal alignment | Customer workshops, revealing coalition support
Time required | 15-20 minutes | 60-90 minute workshop
Output | Relative preference | Willingness to pay/collaborate

Buy a Feature prices objectives based on estimated effort, so expensive features need multiple people pooling their money. That pooling creates conversation—“I’ll contribute £30 to Enterprise SSO if you do too”—which reveals which features have coalition support and are worth the investment.

100 Dollar Test is simpler. Everyone gets the same budget, features aren’t priced, and you’re measuring relative preference rather than willingness to collaborate. It’s faster but less nuanced.

Use Buy a Feature when you want to understand customer willingness to pay or when you need stakeholders to actively negotiate trade-offs. Use 100 Dollar Test when you need a quick read on internal priorities or want to surface disagreement within a leadership team without the pricing complexity.

Practical Implementation

If you’re running a 100 Dollar Test:

Keep options to 8-15 items. Too few and the trade-offs are trivial. Too many and participants spread allocations so thin that the results are noise.

Collect allocations privately and simultaneously. Use paper slips, a digital tool, or any mechanism that prevents anchoring. Do not let participants see others’ allocations before committing.

Debrief the outliers. The interesting conversation isn’t about the #1 item—it’s about the surprises. Why did Finance allocate £40 to Platform Stability when everyone else gave it £5? Why did three people give £0 to the initiative leadership assumed was a priority?

Treat output as input. The 100 Dollar Test tells you where preferences lie. It doesn’t tell you whether those preferences are correct. Use the results to inform a prioritisation conversation, not to end one.

Don’t use it for roadmap sequencing. The 100 Dollar Test is appropriate for workshop facilitation and alignment-building. It’s not appropriate for actually deciding what gets built when. For that, use RICE, BRICE, PIE, or another quantitative framework.

When to Use the 100 Dollar Test

Context | Use 100 Dollar Test? | Why
Quarterly roadmap prioritisation | No | Doesn’t account for effort, reach, or strategic alignment
Leadership alignment workshop | Yes | Surfaces disagreement, forces trade-offs, creates buy-in
Customer research on feature preferences | Maybe | Buy a Feature is usually better
Sprint planning | No | Wrong altitude—use story points and sprint capacity
Annual strategy offsite | Yes | Good for surfacing where executives actually disagree
Backlog grooming | No | Too slow for regular cadence, doesn’t integrate with velocity

The Bottom Line

The 100 Dollar Test is an alignment tool disguised as prioritisation. The real value isn’t the ranked output—it’s the conversation that reveals disagreement, forces trade-offs, and builds buy-in.

Use it when you need stakeholders to actually commit to preferences rather than claiming everything is critical. Use it when you suspect leadership isn’t aligned and want to make the disagreement visible. Use it as input to a prioritisation conversation.

But don’t mistake preference for priority. The 100 Dollar Test tells you what people want. It doesn’t tell you what delivers value, what’s achievable, or what aligns with strategy. For that, you need quantitative frameworks that account for reach, impact, confidence, and effort.

The crowd might be wise. Or it might be political. The 100 Dollar Test can’t tell the difference.
