Your PMO Isn't Failing — It's Probably Measuring the Wrong Things

Alex · 2026-03-31 · PMO & Delivery

A short read for PMO leaders, transformation leads, and senior delivery managers who are tired of defending value they know exists but cannot seem to prove.

There is a conversation that happens in boardrooms, steering committees, and awkward post-programme reviews with uncomfortable frequency.

Someone — usually a sceptical CFO or a newly appointed COO — asks a deceptively simple question:

“What does the PMO actually do for us?”

And the PMO lead, armed with a carefully assembled pack of RAG status reports, governance compliance scores, milestone dashboards and benefit trackers, begins to answer. The slides are clean. The data is accurate. The room nods politely.

And yet the question never quite goes away.

It returns at the next budget cycle. It resurfaces when a programme runs late despite impeccable reporting. It comes back with fresh urgency when a strategic initiative fails not because nobody tracked it, but because nobody made the right call at the right time — and the PMO, for all its diligence, did not help them do that.

The problem is not that PMOs lack value.

The problem is that many PMOs are measuring — and therefore managing — the wrong things.

The assurance trap

Over the past two decades, the PMO has gradually evolved into something it was never meant to be: a compliance engine.

Ask most PMOs what they are measured on and you will hear familiar answers. Reporting timeliness. RAG accuracy. Governance attendance. Risk log completeness. Milestone adherence. Stage gate compliance. Benefits tracking.

None of these are trivial. In many organisations they are necessary. Boards need visibility. Regulators need evidence. Audit functions need control. Delivery teams need discipline. A PMO that cannot provide assurance is not mature; it is just improvising with spreadsheets and hope.

But assurance is not the same as impact.

Assurance tells you whether the right processes were followed. Impact tells you whether the right outcomes were achieved. And somewhere along the way, many organisations have confused the two — rewarding PMOs for the former while expecting them to deliver the latter.

That distinction matters.

A portfolio can be 100% governance-compliant and still deliver little of strategic value. A programme can hit every milestone and still solve the wrong problem. A transformation can be governed flawlessly and still fail to change anything meaningful about how the organisation operates.

If your PMO’s core measures would survive all of those scenarios unchanged, then you are not measuring value. You are measuring administrative hygiene.

Necessary, yes. Sufficient, no.

What gets measured gets managed — and then gets gamed

A common management maxim says that what gets measured gets managed. The less glamorous truth is that what gets measured also gets optimised for, distorted, and occasionally dressed up like success wearing a fake moustache.

In PMO terms, this shows up in predictable ways.

Project managers learn to write RAG statuses that are technically accurate but strategically misleading. Amber becomes a holding colour — honest enough to signal concern, safe enough to avoid escalation. Green persists long after common sense would challenge it, because the cost of turning something red usually falls on the person reporting it.

Risk logs become filing exercises. Risks are captured, owners assigned, dates refreshed — and yet nobody honestly believes the organisation is becoming more risk-aware. It is risk documentation, not risk management.

Governance meetings proliferate. Steering committees create working groups, which create sub-groups, each with its own reporting cycle. The organisation becomes exceptionally well-informed about project activity and remarkably slow at deciding what to do about it.

This is not usually a failure of integrity. It is a rational response to a measurement system that rewards reporting discipline more than decision confidence.

People optimise for what they are judged on.

The real question is whether they are being judged on the right things.

The shift PMOs need to make

The question is not whether the PMO should assure delivery hygiene.

It should.

The question is whether hygiene alone is enough to justify strategic investment.

If the PMO is to remain relevant over the next decade, it cannot just be a reporting factory. It has to become a portfolio intelligence function — one that improves the quality of decisions, raises confidence in delivery, and maintains a line of sight to organisational outcomes.

That requires different measures.

The three metrics that actually matter

1. Decision quality

The most valuable thing a PMO can do is make it easier for the right people to make better decisions, faster.

Not more decisions. Better ones.
Not faster governance. Faster clarity.

That means asking different questions.

Not: Did the steering committee receive the report on time?
But: Did the steering committee have the information needed to make a confident call?

Not: How many risks sit on the register?
But: How many material risks were escalated, acted on, and resolved?

Decision quality is harder to measure than reporting timeliness, but it is not impossible. A PMO can track, for example:

How long it takes an escalated issue to reach a decision — and how often that decision sticks.
The proportion of material risks that are escalated, acted on, and resolved, rather than simply logged and refreshed.
How often governance forums defer a decision because the information in front of them was not good enough to make one.

These are not vanity measures. They tell you whether governance is creating movement or just paperwork.

A PMO that improves decision quality is worth its weight in avoided disasters. A PMO that produces immaculate reports nobody acts on is an overhead with good formatting.

2. Delivery confidence

There is a difference between a project being reported as on track and a project being genuinely likely to succeed.

Reporting discipline tells you the former. Delivery confidence speaks to the latter.

Delivery confidence is forward-looking. It asks: based on what we know today — about capability, complexity, dependencies, sponsor engagement, change readiness and delivery maturity — how likely is this initiative to achieve its intended outcome?

That is a more difficult conversation because it forces organisations to confront uncertainty honestly. It requires PMO leaders to move beyond historical reporting and make structured judgements about the future.

It also requires the courage to say things like:

“This programme may not fail because the plan is wrong. It may fail because the business is not ready, the dependencies are unstable, and the sponsor has already mentally moved on.”

Those factors rarely show up neatly in a status deck. They matter anyway.

A PMO can make delivery confidence measurable by using, for example:

Structured confidence assessments scored against the factors that actually predict success: capability, complexity, dependencies, sponsor engagement, change readiness and delivery maturity.
Independent delivery health checks that test what teams report against what the leading indicators suggest.
Confidence trends over time, so that erosion becomes visible before the RAG status catches up.

This is where mature PMOs become genuinely useful. They do not just report what teams say. They triangulate what is likely.

3. Organisational outcomes

This is the hardest shift, and the most important one.

A PMO should ultimately be judged not only on whether projects were delivered on time and on budget, but on whether the portfolio of change produced the outcomes the organisation invested in.

Did the capability get adopted?
Did the benefit materialise?
Did the operating model change?
Did performance improve?
Is the organisation measurably better because this work happened?

Those are the questions executives actually care about, even when they pretend to care only about milestone variance.

And yes, there is a fair objection here: PMOs do not own outcomes in isolation. Outcomes sit with business leaders, sponsors, operational teams and adoption owners. That is true.

But PMOs can absolutely own the visibility, discipline, escalation routes, dependency management and portfolio memory that make outcomes more likely — and that expose when outcomes are drifting long after delivery teams have packed up their laptops and wandered off.

A PMO that tracks organisational outcomes might measure:

Adoption of delivered capabilities, months after go-live rather than weeks.
Benefits realised against the original business case, tracked beyond programme closure.
Changes in operating-model and performance measures that the portfolio was funded to move.

That kind of longitudinal view is rare. It is also where the real value lives.

A PMO that maintains that line of sight becomes a strategic asset. It is no longer just managing calendars and templates. It is helping the organisation learn what actually works.

Why this has not changed yet

If this argument is broadly right, why do so many PMOs still measure what they measure?

It is rarely because nobody has noticed. Most experienced PMO leaders know exactly what this problem looks like. They have lived inside it.

The reasons are structural.

First, compliance is safer than outcomes. Compliance creates visible artefacts, auditable controls and defensible evidence. Outcome conversations are messier. They expose weak sponsorship, poor prioritisation, bad business cases and benefits nobody was serious about realising in the first place.

Second, outcomes take longer. Benefits rarely materialise neatly within a programme lifecycle. By the time the real effect becomes visible, funding has closed, the team has dispersed, and leaders have moved on to the next urgent thing.

Third, legacy measures have institutional momentum. They were introduced for sensible reasons. They satisfy governance expectations. They support audit and assurance requirements. Replacing them does not just require a new dashboard. It requires a new story.

And fourth, many PMOs have built their own identity around control. Being the guardian of process can feel safer than being the challenger of value. Control is clear. Strategic influence is exposed. One protects the PMO’s legitimacy. The other tests it.

None of this is insurmountable. But it does mean the shift will not happen by accident.

The conversation worth having now

Before your next planning cycle, budget discussion or leadership review, have a different conversation with your executive team.

Do not start by asking what reports they want.

Ask what decisions they find hardest to make well.

Ask which programmes genuinely worry them — not the ones marked amber, but the ones they suspect are quietly heading for trouble.

Ask where they feel least confident: prioritisation, trade-off decisions, dependency visibility, benefits realisation, sponsor alignment, or organisational readiness.

Ask what good looks like at portfolio level twelve months from now, not just what “on track” looks like this month.

Then look honestly at whether your current PMO measures would help you get there.

In many organisations, the answer will be no.

Not because the PMO lacks capability. But because its measures were designed for a narrower purpose. They were designed to assure, not to enable. To document, not to decide. To report, not to change.

What the future PMO looks like

The PMO that earns its place in the next phase of organisational life will not be the one with the neatest templates or the most punctual reports.

It will be the one that helps leaders make better calls under pressure.
The one that surfaces delivery truth before the dashboard catches up.
The one that keeps sight of outcomes after the implementation applause dies down.
The one that turns portfolio data into portfolio judgement.

That PMO is not anti-governance.

It is simply not satisfied with governance as the final proof of value.

A PMO that measures compliance may survive.

A PMO that improves decisions becomes difficult to replace.

A final thought

If you asked your most senior stakeholders to describe the PMO’s value in one sentence, what would they say?

Would they describe a function that helps them make better decisions?

Or one that produces reliable reports?

Both answers may be honest.

Only one sounds strategically indispensable.

And that is the real test.


If this sparked agreement, frustration, or a counterargument you have been making for years, post it. The most useful PMO conversations usually begin when someone is willing to challenge what everybody else has quietly accepted.
