Objective Prioritisation: PIE
Potential, Importance, Ease—Prioritisation for Growth Teams Who Test Everything
This is one of RoadmapOne’s articles on Objective Prioritisation frameworks.
Growth teams don’t build features—they run experiments. Every A/B test, landing page variation, and onboarding tweak is a bet: “If this works, what’s the upside?” Most prioritisation frameworks optimise for reach or value. PIE—Potential, Importance, Ease—optimises for experimental upside. Created by Chris Goward for conversion rate optimisation, PIE asks three questions: What’s the potential improvement if this succeeds? How important is the page or flow we’re optimising? How easy is this to implement and test?
The formula is simple: Potential × Importance × Ease, where each dimension is scored 1-10. High PIE scores indicate experiments with massive upside potential, targeting critical pages, that you can ship quickly. Low PIE scores flag speculative tests on unimportant pages requiring heroic effort. PIE was born in conversion optimisation but has spread to growth teams, product teams, and anyone who treats roadmaps as hypothesis portfolios rather than feature factories.
TL;DR: PIE prioritises by multiplying Potential (upside if successful), Importance (criticality of what you’re optimising), and Ease (implementation speed). It excels at growth experimentation, A/B test prioritisation, and fast-moving teams testing hypotheses. But PIE is nearly identical to ICE, sacrifices rigour for speed, and only works if you’re in an experimentation mindset.
The Three Dimensions of PIE
PIE scores every objective or experiment across three dimensions: Potential, Importance, and Ease. All three use 1-10 scales, making scoring fast and intuitive. The formula multiplies them together, producing scores from 1 (terrible bet) to 1,000 (perfect experiment). High scores mean big potential upside on critical pages that you can test quickly.
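In code, the whole framework is a one-line multiplication plus bounds checking. A minimal sketch in Python (the function name is ours, not part of PIE):

```python
def pie_score(potential: int, importance: int, ease: int) -> int:
    """Multiply the three 1-10 PIE dimensions into a 1-1,000 score."""
    for name, value in (("potential", potential), ("importance", importance), ("ease", ease)):
        if not 1 <= value <= 10:
            raise ValueError(f"{name} must be 1-10, got {value}")
    return potential * importance * ease

# A perfect experiment scores 1,000; a terrible bet scores 1.
assert pie_score(10, 10, 10) == 1000
assert pie_score(1, 1, 1) == 1
```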
Potential: What’s the Upside If This Works?
Potential measures the improvement magnitude if this experiment succeeds or this feature delivers perfectly. It asks: “If everything goes right, how much better could this page, flow, or feature perform?” A 10 on Potential means you could double conversion rates, halve churn, or transform a broken experience into a delightful one. A 1 means even if successful, the needle barely moves.
Potential is fundamentally optimistic—it’s about maximum upside, not expected value. You’re not asking “What’s the likely outcome?” (that’s Impact in RICE). You’re asking “What’s possible if this hypothesis is correct?” That checkout flow currently converting at 2%—could a redesign hit 6%? That’s a 3× lift, scoring 9-10 on Potential. That button colour test on a high-converting page? Maybe 5% uplift max—scores 3-4.
Potential forces growth teams to distinguish between incremental tweaks and transformational bets. Most A/B tests are incremental (Potential 3-5). Occasionally you find a leverage point where the upside is massive (Potential 8-10). PIE systematically prioritises those high-leverage experiments over low-ceiling optimisations.
The trap is Potential inflation. Every growth hacker believes their test could 10× metrics. The fix is calibration: track estimated Potential versus actual results. If your “Potential 9” tests consistently deliver 10% lifts, your calibration is broken. Adjust your scale or admit that 10% is your realistic ceiling for most experiments.
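A lightweight calibration check is easy to automate. This sketch assumes you log each completed test’s estimated Potential alongside its realised lift (the data below is hypothetical):

```python
from statistics import mean

# (estimated Potential score, actual observed lift %) for completed tests
completed_tests = [(9, 10.0), (9, 8.5), (8, 12.0), (4, 3.0)]

# Average realised lift for your highest-Potential tests
high_potential_lifts = [lift for est, lift in completed_tests if est >= 8]
print(f"Mean lift for Potential 8-10 tests: {mean(high_potential_lifts):.1f}%")
# If this prints ~10% rather than a doubling, your top scores are inflated.
```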
Importance: How Critical Is What You’re Optimising?
Importance measures how strategically significant the page, flow, or feature you’re optimising is to your business. It’s not “how important is this test?”—it’s “how important is the thing being tested?” Optimising your homepage (where 80% of traffic lands) scores 10 on Importance. Optimising a forgotten settings page with 50 daily visitors scores 1.
Importance in PIE plays the same role as Reach in RICE—it’s a traffic or impact multiplier. A mediocre experiment (Potential 5) on a critical page (Importance 10) might score higher than a brilliant experiment (Potential 9) on a low-traffic page (Importance 3). Importance ensures you’re optimising the parts of your product that matter most to business outcomes.
Importance captures leverage beyond traffic volume. Your pricing page might have moderate traffic but is existentially important to revenue. Your onboarding flow might be short but determines whether users activate. These score high on Importance even if traffic is lower than your blog. Importance is about business criticality, not just vanity metrics.
The danger is everything scoring 10. Product managers claim every page is “critical.” The cure is forced distribution: only 20% of pages can score 9-10 on Importance. Reserve top scores for pages that directly drive your North Star metric. Everything else scores 1-8 based on relative significance.
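That cap is trivially checkable. A sketch, assuming you keep Importance scores in a simple mapping of page to score (names illustrative):

```python
def importance_inflated(scores: dict[str, int], cap: float = 0.20) -> bool:
    """Return True if more than `cap` of pages claim a 9-10 Importance score."""
    top = sum(1 for s in scores.values() if s >= 9)
    return top / len(scores) > cap

pages = {"homepage": 10, "pricing": 9, "blog": 4, "settings": 2, "changelog": 1}
print(importance_inflated(pages))  # True: 2 of 5 pages (40%) claim top scores
```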
Ease: How Quickly Can We Test This?
Ease measures implementation speed and simplicity. A 10 on Ease means you can ship and test this in hours—a copy change, button colour swap, or simple A/B test. A 1 means this requires weeks of engineering—new backend logic, complex integrations, or risky refactors. Mid-range Ease (5-6) represents moderate dev work shippable in a sprint.
Ease is PIE’s speed multiplier. Growth teams running dozens of experiments quarterly can’t afford slow tests. Ease ensures Quick Wins (high Potential, high Importance, high Ease) rise to the top. A hypothesis with Potential 8, Importance 7, but Ease 2 scores 112—lower than a hypothesis with Potential 6, Importance 6, Ease 8 scoring 288. The faster test wins because velocity compounds learning.
Ease also captures risk. Tests requiring production database changes or complex logic aren’t just slow—they’re risky. Bugs could break checkout. Rollbacks could lose data. Ease scores implicitly penalise risky experiments, biasing teams toward safe, reversible tests. That’s often correct for growth experimentation where learning velocity beats swing-for-the-fences bets.
The trap is Ease inflation. Teams underestimate complexity because “it’s just a small test.” Then the test breaks mobile, fails QA, or requires three rounds of design iteration. Track estimated Ease versus actual delivery time. If your Ease 8 tests consistently take two weeks, recalibrate. Ease estimation is as hard as effort estimation—don’t pretend otherwise.
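The same logging discipline works for Ease. A sketch comparing estimated Ease against actual delivery time (hypothetical data):

```python
from statistics import median

# (estimated Ease score, actual days to ship) for completed tests
shipped = [(8, 9), (8, 12), (9, 2), (7, 11)]

ease_8_days = [days for est, days in shipped if est == 8]
if median(ease_8_days) > 5:  # an "Ease 8" should ship within a week
    print("Ease 8 tests take too long - recalibrate the scale")
```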
The PIE Formula in Action
The PIE formula multiplies the three dimensions: Potential × Importance × Ease. Scores range from 1 to 1,000. Let’s see how it prioritises experiments.
Consider three potential tests for a SaaS growth team:
Experiment A: Redesign onboarding to reduce early churn
- Potential: 8 (could halve churn if we fix the confusion points)
- Importance: 10 (onboarding determines activation and retention)
- Ease: 3 (requires design, engineering, and multi-step flow changes)
- PIE Score: 8 × 10 × 3 = 240
Experiment B: A/B test homepage hero copy
- Potential: 5 (messaging improvements might boost signup conversion 10-20%)
- Importance: 9 (homepage is top traffic source)
- Ease: 10 (copy change, one-hour test setup)
- PIE Score: 5 × 9 × 10 = 450
Experiment C: Add social proof badges to rarely-used feature page
- Potential: 6 (social proof often lifts conversion meaningfully)
- Importance: 2 (page gets 100 visitors/month, not strategic)
- Ease: 8 (design asset + simple code change)
- PIE Score: 6 × 2 × 8 = 96
Experiment B wins despite moderate Potential because it’s testing something critical (homepage) and trivially easy (copy change). Experiment A has high Potential and Importance, but its low Ease tanks the score—it’s the right bet long-term, not the right next bet. Experiment C is easy but unimportant—a classic waste of growth team time.
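Ranking these in code makes the ordering mechanical rather than debatable. A sketch scoring the three experiments above:

```python
# (Potential, Importance, Ease) for each experiment
experiments = {
    "A: onboarding redesign": (8, 10, 3),
    "B: homepage hero copy": (5, 9, 10),
    "C: social proof badges": (6, 2, 8),
}

ranked = sorted(
    experiments.items(),
    key=lambda kv: kv[1][0] * kv[1][1] * kv[1][2],
    reverse=True,
)
for name, (p, i, e) in ranked:
    print(f"{name}: PIE = {p * i * e}")
# B: 450, A: 240, C: 96 - run B first.
```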
This is PIE’s philosophy: prioritise learning velocity on high-leverage pages. The fastest tests on the most important pages accumulate wins that fund slower, riskier experiments. Experiment B ships this week, proves the hypothesis, and builds credibility for Experiment A next quarter.
PIE vs ICE—Are They the Same Framework?
PIE and ICE are nearly identical. ICE is Impact × Confidence × Ease. PIE is Potential × Importance × Ease. Both use 1-10 scales, both multiply three dimensions, both prioritise quick wins. The differences are subtle but meaningful.
Potential vs Impact: ICE’s Impact is “How much does this move the metric?” PIE’s Potential is “What’s the upside if this works perfectly?” Potential is more optimistic—asking about ceiling, not expected value. For growth teams running many tests where most fail, optimising for upside potential makes sense. For product teams shipping features that must work, Impact (expected value) is safer.
Importance vs Confidence: ICE includes Confidence (how sure are we?). PIE replaces it with Importance (how critical is this page/flow?). PIE assumes growth teams test everything anyway—Confidence is irrelevant if you’re running 50 tests quarterly. Importance ensures you’re testing leverage points, not backwater pages. ICE’s Confidence matters more for product teams committing months to features that can’t easily be tested.
When to Use Which: Use ICE for product roadmaps where you’re building features with uncertain outcomes and Confidence scoring helps manage risk. Use PIE for growth experimentation where you’re testing hypotheses quickly and Importance ensures you test what matters most. If your culture is “ship and measure,” use PIE. If your culture is “validate before building,” use ICE.
When PIE Is Your Best Weapon
PIE excels in three contexts. First, growth teams running continuous experimentation. If you’re A/B testing landing pages, onboarding flows, pricing pages, and email campaigns, PIE provides fast prioritisation aligned with learning velocity. Score Potential-Importance-Ease in 30 seconds per test, sort by PIE score, run the top 10. Re-score weekly as results arrive.
Second, conversion rate optimisation and marketing teams. PIE was born in CRO for a reason—it prioritises tests on high-traffic pages with meaningful upside that you can run quickly. CRO teams love PIE because it matches their workflow: identify leverage points (Importance), estimate improvement potential (Potential), check implementation cost (Ease), and ship.
Third, product teams in hyper-growth chaos. Pre-product-market-fit startups pivoting rapidly don’t have data for RICE. They have hypotheses, urgency, and limited eng capacity. PIE is lightweight enough to score 50 ideas in an hour, surface quick wins on critical flows, and avoid analysis paralysis. Speed beats precision in early-stage chaos.
When PIE Betrays You
PIE collapses in three scenarios. First, when experimentation infrastructure is weak. PIE assumes you can test quickly and measure results. If setting up A/B tests takes two weeks, or if analytics is broken, or if statistical significance requires six months, Ease scoring becomes fiction. PIE is for teams with mature experimentation systems, not teams building them.
Second, when strategic bets require faith over testing. That transformational infrastructure rebuild scores terribly on PIE—low Potential (no immediate upside), moderate Importance (not customer-facing), low Ease (six months of work). PIE systematically deprioritises foundational bets that can’t be incrementally tested. Follow PIE blindly, and you optimise yourself into technical debt collapse.
Third, when it’s redundant with ICE. If you’re already using ICE and it works, PIE offers minimal differentiation. You’re swapping Confidence for Importance and Impact for Potential—cosmetic changes that don’t materially alter prioritisation. PIE’s value is cultural fit (growth/CRO teams) more than methodological superiority. Don’t adopt it just to have another framework.
Practical Implementation
Start by listing experiments or features to prioritise. For growth teams, this might be 20-50 potential A/B tests, landing page variations, onboarding tweaks, and email campaigns. For product teams, it might be feature ideas or enhancement proposals.
Score each on Potential (1-10: What’s the upside if this works?), Importance (1-10: How critical is what we’re optimising?), and Ease (1-10: How quickly can we ship and test this?). Don’t overthink—gut instincts are sufficient. This takes 30-60 minutes for 30 items.
Calculate PIE scores by multiplying the three dimensions. Sort by score. High PIE (400+) means meaningful upside on important pages you can test quickly—ship these immediately. Medium PIE (200-400) means reasonable bets—fund after Quick Wins. Low PIE (<200) means speculative tests on unimportant pages or hard-to-build experiments—deprioritise or kill.
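The triage step is mechanical once scores exist. A sketch using the bands above (the thresholds are the article’s; the function is illustrative):

```python
def triage(score: int) -> str:
    """Map a PIE score onto the ship / fund / kill bands."""
    if score >= 400:
        return "ship now"
    if score >= 200:
        return "fund after quick wins"
    return "deprioritise or kill"

for score in (450, 240, 96):
    print(score, "->", triage(score))
# 450 -> ship now, 240 -> fund after quick wins, 96 -> deprioritise or kill
```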
Fund the top 10-15 PIE scores based on team capacity. Run the experiments, measure results, and update your Potential and Importance calibration based on what actually moved metrics. That Potential 8 test that delivered 3% lift? Recalibrate your scale. That Importance 10 page that didn’t impact North Star? Recalibrate what “important” means.
Re-score weekly or bi-weekly as experiments complete. Growth teams move fast—last week’s top test already shipped and validated or failed. Continuous PIE scoring keeps the backlog fresh and ensures you’re always running the highest-leverage experiments given current knowledge.
Present PIE scores in RoadmapOne alongside experiment results. When leadership asks “Why are we testing button colours instead of rebuilding the platform?” you point at PIE scores and say “Button colour scores 450 PIE, platform rebuild scores 80. We’re optimising learning velocity and impact. The rebuild is a Q3 bet after we’ve harvested Quick Wins.”
PIE and Learning Velocity
PIE is prioritisation for teams who treat roadmaps as experiment portfolios. It optimises for learning velocity on high-leverage pages, biasing toward Quick Wins you can ship, test, and iterate on rapidly. That bias is correct for growth teams where compounding small wins beats waiting for big bets. It’s dangerous for teams where foundational work can’t be skipped.
RoadmapOne makes PIE visible at portfolio scale. Score Potential-Importance-Ease, sort by PIE, and fund what scores highest. Watch growth teams stop debating which test to run and start running tests, measuring results, and iterating. The framework creates velocity by eliminating decision paralysis.
PIE won’t fix broken experimentation infrastructure, strategic myopia, or teams that can’t ship quickly. But for growth teams with mature systems and a test-and-learn culture, it’s the fastest path from “What should we test?” to “Here are results.” That speed—from decision to data—is often the only competitive advantage that matters.
Your roadmap is a portfolio of bets. PIE helps you bet on high-upside, high-leverage, fast-to-test hypotheses. Run them, learn from them, and let results compound. Growth isn’t built on perfect roadmaps—it’s built on fast learning loops. PIE accelerates the loop.
For more on Objective Prioritisation frameworks, see our comprehensive guide.