Why Your Program Health Metrics Are Lying to You
A program showed green across the board for six months.
Every status meeting: green. Every dashboard review: green. Every stakeholder update: green. The TPM was confident. The EM was aligned. The timeline looked achievable. Then, in week 23, the security review that "should have been fine" flagged architectural issues that required a full rebuild. The program missed its launch date by seven weeks.
How did the dashboard miss this?
It didn't. The dashboard was accurately measuring what it was designed to measure — whether work was being completed on schedule, whether scope was being adhered to, whether milestones were being hit. What it wasn't measuring: whether the work being completed was the right work, whether the assumptions underlying the architecture were still valid, whether the team had stopped flagging concerns because they'd learned it didn't matter.
The dashboard wasn't lying. It was measuring the wrong thing. And by the time the gap became visible in the dashboard, there was no time left to do anything about it.
Why Dashboards Lie
The core problem with program health metrics: they measure activity and compliance, not actual program health.
RAG statuses, milestone trackers, traffic lights — these tools were designed to create shared visibility. In practice, they become political instruments.
The person coloring the dashboard has incentives. Red means accountability they may not want. Green means no escalations. Yellow is the "safe" choice that preserves optionality. The color you see is often the color the TPM or EM needed it to be, not the color reality demands.
This isn't necessarily conscious. Nobody sets out to lie on a dashboard. But when the pressure is on, when the timeline is immovable, when the stakeholder presentation is next week — it's easier to color it yellow and hope the problem resolves itself.
The result: dashboards are consistently wrong at the moments that matter most. A program can show green across the board on Tuesday and be in crisis by Thursday. The dashboard didn't suddenly break. It was never measuring health in the first place — only activity.
Activity Metrics vs. Health Metrics
Completion percentages, milestone hit rates, scope adherence — these tell you whether the plan is being executed, not whether the plan was right.
A program can be 100% on plan and heading for a cliff if the underlying assumptions were wrong. The work was completed. It just wasn't the right work.
This is the fundamental limitation of activity metrics: they're always measuring the past. They tell you what happened, not what's about to happen. They measure execution, not the conditions that determine whether execution will succeed.
Health metrics — actual program health — are different. They're about the conditions that determine whether the program will succeed going forward: Is the team engaged or burned out? Are stakeholders aligned or resigned? Are assumptions still valid or have they been invalidated by changes elsewhere?
These conditions don't show up in dashboards. They show up in behavior.
The Behavioral Signal Framework
Here's what nobody teaches TPMs about program health: the first signs of trouble are always behavioral, not numerical.
Before any metric turns red, something changes in how people communicate. An EM who used to push back on scope goes quiet. A stakeholder who was engaged stops coming to syncs. A team that used to surface blockers proactively starts finding workarounds without telling anyone.
These signals appear three to six weeks before the dashboard reflects a problem. They're the real leading indicators.
The problem: they require attention and interpretation. You can't automate "the EM who stopped arguing." You have to notice it, which means you have to be paying attention, which most TPMs aren't — because they're paying attention to the dashboard.
Signal 1: The Compliance Cascade
The first behavioral shift to watch: when the EM who used to push back on scope starts agreeing to everything.
On the surface, this looks like alignment. "Great news — the EM is finally on board." In reality, it might be the opposite.
When an EM stops arguing, one of two things has happened: they've genuinely become aligned — or they've stopped believing it matters. The EM who concluded that arguing gets overridden anyway, that their judgment isn't valued, that the decision will be made regardless — that EM stops pushing. And then their team follows their lead.
This is the compliance cascade: the EM's resignation becomes the team's resignation. The program looks green because nobody's fighting for the scope anymore. The TPM thinks things are going smoothly. The failure compounds invisibly.
How to verify: ask directly. "I've noticed less pushback from you lately — has your perspective on the scope changed, or is there something we should revisit?" This gives the EM an opening to surface concerns without feeling like they're breaking ranks.
Signal 2: The Engagement Drop
Stakeholder engagement is a proxy for trust.
When stakeholders start attending less frequently, when they stop asking questions, when they go from "wants updates" to "sends delegates" — pay attention.
This is the most consistently missed leading indicator in program health. The TPM who notices "the VP used to attend every review and now they send their director" is seeing a health signal, not a scheduling observation.
Why does engagement drop? Usually one of two reasons: either the stakeholder has become appropriately confident and trusts the TPM to run the program — or they've become resigned. They've decided that attending won't change anything, that their input won't matter, that the outcome is inevitable.
The difference between trust and resignation looks identical from a distance. The TPM who can't tell which one they're seeing needs to ask.
How to verify: ask about the last decision the stakeholder made on the program. If they describe it with ownership and can point to recent calls they've made, the hands-off stance probably reflects trust. If they can't name a recent decision, or they describe the program as something happening to them, resignation is more likely.
Signal 3: The Workaround Underground
When teams stop raising blockers and start quietly working around problems — this is worse than visible blockers.
Visible blockers can be managed. A blocker on the dashboard gets attention, resources, escalation if needed. The workaround underground gets none of this. The team just... works around it. The work continues. The dashboard stays green.
But the workaround has a cost: it's usually slower than the right path, it's usually more fragile, and it usually accumulates debt. The team is patching the symptom rather than fixing the problem, and nobody knows because nobody's asking.
The TPM who notices "this team seems to always find a way to keep moving" might be seeing resilience — or might be seeing the early stages of an accumulating failure mode that will surface when the workaround runs out.
How to verify: ask about workarounds directly. "What are you working around right now that you wish you didn't have to?" The answer — or the deflection — tells you something.
Signal 4: The Quiet Resignation
The stakeholder who stops asking questions.
Not "what's the status" questions — those happen in every status meeting. The questions that stop are the challenging ones: the pushback, the "have you considered," the "what about X."
When stakeholders stop pushing, the conventional reading is "they've become comfortable with the direction." But the more likely explanation is often "they've given up."
This is the quiet resignation: not the dramatic exit, but the slow withdrawal of engagement. The TPM who notices "they used to push back on this, now they just nod" is seeing the first structural crack in the program's political support.
How to verify: ask a direct question about the program's direction. "If we had to make a call on X right now, would you be comfortable with that?" If the answer is "sure" without engagement, that's resignation. If the answer comes with caveats and questions, that's appropriate caution.
The Verification Protocol
Behavioral signals are observations, not conclusions. The TPM who treats every shift as evidence of impending failure becomes the TPM who cries wolf.
The verification protocol: turn observations into questions before you turn them into actions.
For the quiet EM: "I've noticed less feedback from you on scope — is there anything you think we should revisit?"
For the disengaged stakeholder: "I saw you weren't at the last two syncs — is there something we should be covering that we're not?"
For the workaround underground: "What are you working around that you wish you didn't have to?"
For the quiet resignation: "If we had to make a hard call on this today, would you be comfortable?"
The answer — and the way it's given — tells you whether your observation was valid. Sometimes it wasn't. Sometimes the EM is legitimately aligned. Sometimes the stakeholder is appropriately hands-off. Verify before you act.
What Most TPMs Get Wrong
Treating the dashboard as the program. The dashboard measures the past. Your job is to read the present.
Color-coding as truth. The color in the dashboard reflects incentives as much as reality. When it looks too green, ask why.
Ignoring behavioral shifts because there's no metric for them. The EM who stopped arguing isn't a metric. It's a signal. Pay attention to things that don't show up in the dashboard.
Acting on observations without verifying them. The TPM who tells leadership "the EM is disengaged" without verification looks paranoid. The TPM who asks "I've noticed less pushback — is there something we should discuss?" looks like a leader.
The Question to Ask Before Every Status Meeting
Before you walk into any status meeting, ask yourself: if the program were actually going to be in trouble three weeks from now, what would the dashboard be showing today?
If the answer is "nothing — the dashboard would still look fine" — that's when you need the behavioral signals most. The program can be heading for failure while the dashboard shows green. Your job is to notice before the dashboard does.
The dashboard is a record of the past. Your job is to read the present.