
PULSE Framework: The Outdated Metrics Model Your Dashboard Might Still Follow

If Your Metrics Look Like This, You're Missing the User

(updated Jan 24, 2026)

This is one of RoadmapOne's articles on Objective Prioritisation frameworks.

PULSE (Page views, Uptime, Latency, Seven-day active users, Earnings) predates HEART. It's the metrics framework Google teams used before Kerry Rodden's group proposed a more user-centred alternative.

I’ve never seen PULSE formally adopted—it’s largely a historical artifact. But I’ve seen countless dashboards that accidentally look exactly like PULSE: revenue charts, uptime monitors, page view graphs, and maybe a DAU number. Business metrics and technical health, with users nowhere to be found.

PULSE is outdated as a recommendation. It’s useful as a diagnostic: if your current metrics match PULSE’s dimensions, you’ve built a business-centred dashboard, not a user-centred one.

TL;DR

PULSE (Page views, Uptime, Latency, Seven-day active users, Earnings) captures business outputs and technical health—not user experience. Google replaced it with HEART for good reason. Use PULSE as a thinking tool: if your dashboard has revenue, uptime, and page views but nothing about user satisfaction, task completion, or engagement depth, you’ve accidentally built PULSE. Fix it with HEART.

The Five Dimensions

Page Views

How many pages did users view? This was the default web metric in the early 2000s—easy to measure, hard to interpret.

Page views tell you about traffic volume but nothing about value. A user rage-clicking through broken navigation generates page views. A user who finds what they need immediately generates fewer page views but got more value.

Uptime

What percentage of time was the system available? This is an engineering metric, critical for operations but silent on user experience.

99.9% uptime is great, but it doesn’t tell you whether users accomplished their goals during that uptime. A perfectly available system that’s unusable isn’t success.
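To put those availability figures in perspective, here's a quick sketch that converts an uptime percentage into a rough monthly downtime budget (the 30-day month is an assumption for simplicity):

```python
# Rough downtime budget implied by an uptime percentage,
# assuming a 30-day month (43,200 minutes). Illustrative only.
MINUTES_PER_MONTH = 30 * 24 * 60

for uptime in (0.99, 0.999, 0.9999):
    downtime = (1 - uptime) * MINUTES_PER_MONTH
    print(f"{uptime:.2%} uptime -> ~{downtime:.0f} minutes of downtime per month")
```

None of these numbers says anything about what users managed to do during the minutes the system was up.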

Latency

How fast did pages load? Another engineering metric—important for user experience, but a proxy rather than a direct measure.

Low latency enables good experience; it doesn’t guarantee it. A page that loads in 200ms but confuses users has solved the wrong problem.

Seven-Day Active Users

How many users were active in the past week? This is the only PULSE dimension that approximates user behaviour, and it’s crude.

Seven-day active tells you nothing about what users did, whether they accomplished their goals, or whether they’re satisfied. It’s a count, not a measure of value.
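To see how little the number carries, here's a minimal sketch of the computation, assuming a simple event log with just a user ID and a timestamp (both hypothetical fields):

```python
from datetime import datetime, timedelta

# Hypothetical event log: each row just says "this user did something at this time".
events = [
    {"user_id": "u1", "timestamp": datetime(2026, 1, 20, 9, 30)},
    {"user_id": "u2", "timestamp": datetime(2026, 1, 22, 14, 5)},
    {"user_id": "u1", "timestamp": datetime(2026, 1, 23, 8, 15)},
    {"user_id": "u3", "timestamp": datetime(2026, 1, 10, 17, 45)},  # outside the window
]

def seven_day_active(events, as_of):
    """Count distinct users with at least one event in the trailing seven days."""
    window_start = as_of - timedelta(days=7)
    return len({e["user_id"] for e in events if window_start <= e["timestamp"] <= as_of})

print(seven_day_active(events, as_of=datetime(2026, 1, 24)))  # -> 2 (u1 and u2)
```

A user who completed a purchase and a user who rage-clicked through an error page each count exactly once.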

Earnings

How much revenue did we generate? The ultimate lagging indicator—important for business health, disconnected from user experience.

Revenue confirms that some users found enough value to pay. It doesn’t tell you about the users who didn’t convert, the users who churned, or the users who paid but resent your product.

Why PULSE Falls Short

PULSE’s dimensions cluster into two categories:

  • Business outputs: Page views and Earnings measure business performance
  • Technical health: Uptime and Latency measure system reliability

Seven-day active users sits between the two: it's the only dimension that even gestures at user behaviour, and only barely.

Neither category tells you about users:

  • Are users satisfied? (Not measured)
  • Can users accomplish their goals? (Not measured)
  • Are users engaged deeply? (Barely measured via 7-day active)
  • Are new users successfully adopting? (Not measured)
  • Are users being retained? (Partially inferred from 7-day active)

This is why Google's UX team developed HEART: to shift from business and technical metrics to user-centred metrics.

PULSE as a Diagnostic

The value of knowing PULSE isn’t to adopt it—it’s to recognise when you’ve accidentally built it.

Audit your current dashboard. What metrics do you track? If the list looks like:

  • Revenue / MRR / ARR
  • Page views / Sessions
  • Uptime / Availability
  • Response time / Latency
  • DAU / WAU / MAU

…you’ve built a PULSE dashboard. You’re measuring business outputs and technical health. You’re not measuring user experience.

Ask what’s missing. Using HEART as a checklist:

  • Happiness: Do you measure satisfaction? (NPS, CSAT, surveys)
  • Engagement: Do you measure depth of usage? (Feature usage, session depth)
  • Adoption: Do you measure onboarding success? (Activation rate, time to value)
  • Retention: Do you measure continued usage? (Churn, cohort retention)
  • Task Success: Do you measure goal completion? (Completion rates, error rates)

If you’re missing most of these, your metrics are PULSE-shaped even if you’ve never heard of PULSE.
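If you want to run that audit semi-automatically, a rough sketch like the one below can flag a PULSE-shaped metric list. The keyword lists are illustrative assumptions, not a canonical taxonomy.

```python
# Rough keyword-based audit: classify dashboard metric names as PULSE-style
# (business output / technical health) and check which HEART dimensions are covered.
PULSE_KEYWORDS = ["revenue", "mrr", "arr", "page view", "session", "uptime",
                  "availability", "latency", "response time", "dau", "wau", "mau"]
HEART_KEYWORDS = {
    "Happiness": ["nps", "csat", "satisfaction", "survey"],
    "Engagement": ["feature usage", "session depth", "actions per"],
    "Adoption": ["activation", "time to value", "onboarding"],
    "Retention": ["churn", "cohort", "retention"],
    "Task Success": ["completion rate", "error rate", "task success"],
}

def audit(dashboard_metrics):
    names = [m.lower() for m in dashboard_metrics]
    pulse_hits = [m for m in names if any(k in m for k in PULSE_KEYWORDS)]
    covered = {dim for dim, kws in HEART_KEYWORDS.items()
               if any(k in m for m in names for k in kws)}
    missing = set(HEART_KEYWORDS) - covered
    print(f"PULSE-style metrics: {len(pulse_hits)} of {len(names)}")
    print(f"Missing HEART dimensions: {sorted(missing) or 'none'}")

audit(["MRR", "Page views", "Uptime", "p95 latency", "WAU"])
# -> PULSE-style metrics: 5 of 5
# -> Missing HEART dimensions: ['Adoption', 'Engagement', 'Happiness', 'Retention', 'Task Success']
```

The example dashboard comes back five-for-five PULSE, with every HEART dimension missing: exactly the pattern worth catching.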

When PULSE-Style Metrics Are Appropriate

There’s one context where PULSE dimensions genuinely matter: infrastructure and platform products.

If you’re selling an API, a database service, or a cloud platform, your users do care deeply about uptime and latency. These aren’t proxies for user experience—they are the user experience. A developer integrating your API cares whether it’s available and fast.

Even then, you’d want HEART dimensions layered on top:

  • Task Success: Can developers successfully integrate?
  • Happiness: Do developers enjoy working with your product?
  • Adoption: Are new developers activating successfully?

PULSE metrics are necessary but not sufficient, even for infrastructure products.
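As a concrete example of what "layered on top" might look like for an API product, here's a sketch of a time-to-first-successful-call measure, an Adoption and Task Success signal. The log structure and field names are hypothetical.

```python
from datetime import datetime

# Hypothetical API request log and signup times per developer account.
requests = [
    {"account": "dev_a", "status": 401, "at": datetime(2026, 1, 20, 10, 0)},
    {"account": "dev_a", "status": 200, "at": datetime(2026, 1, 20, 11, 30)},
    {"account": "dev_b", "status": 500, "at": datetime(2026, 1, 21, 9, 0)},
]
signups = {"dev_a": datetime(2026, 1, 20, 9, 0), "dev_b": datetime(2026, 1, 21, 8, 0)}

def hours_to_first_success(account):
    """Hours from signup to the account's first 2xx response, or None if it never happened."""
    successes = [r["at"] for r in requests
                 if r["account"] == account and 200 <= r["status"] < 300]
    if not successes:
        return None
    return (min(successes) - signups[account]).total_seconds() / 3600

for account in signups:
    print(account, hours_to_first_success(account))
# dev_a 2.5   -- integrated within a morning
# dev_b None  -- never succeeded: a failure uptime and latency can't see
```

Uptime and latency can both be flawless while dev_b never gets a single successful call; that's the gap the HEART dimensions are meant to close.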

The Bottom Line

PULSE is outdated—don’t adopt it. But understand it, because your dashboard might accidentally follow the same pattern: business metrics and technical health, with users missing entirely.

Use PULSE as a diagnostic. If your metrics look like Page views, Uptime, Latency, Seven-day active, Earnings—you’ve built a PULSE dashboard. Fix it by adding HEART dimensions: Happiness, Engagement, Adoption, Retention, Task Success.

User-centred metrics tell you whether users are getting value. Business metrics tell you whether you’re capturing value. You need both, but most organisations over-index on business metrics because they’re easier to measure and more familiar to leadership.

HEART replaced PULSE for good reason. Make sure your dashboard reflects the upgrade.
