Anish Patel

The Measurement Gap

Every metric sits somewhere on a chain: effort → output → outcome → impact.

Effort is hours worked, meetings held, code written. Output is features shipped, reports delivered, campaigns launched. Outcome is how customer behaviour changes. Impact is the value that flows back — revenue, retention, referrals.

The further along the chain, the more the metric matters. And the harder it is to measure.


The attribution trap

Impact is what you care about. But attributing it to any specific action is nearly impossible.

Did that feature drive retention, or was it the pricing change, or the market shift, or just luck? The honest answer is often: we don’t know. We believe the feature mattered, but we can’t prove the chain.

OKRs often become exercises in reverse-engineering attribution that doesn’t exist. Teams contort themselves to connect activity to impact, creating false precision. The quarterly review becomes a storytelling exercise rather than a learning one.

The discipline isn’t avoiding impact metrics — you need them. It’s being honest about confidence levels. Some metrics are proof. Some are signals. Some are beliefs. Know which you’re working with.


Know what you’re measuring

Most organisations don’t think consciously about where their metrics sit on the chain. They measure what’s available — whatever the CRM spits out, whatever finance has always tracked.

But each level comes with different risks:

Effort metrics (hours, activity, velocity) are easy to capture but create perverse incentives. Measure hours and you get hours. You don’t necessarily get results.

Output metrics (features shipped, deals closed) are better but still gameable. You can ship features nobody uses. You can close deals that churn.

Outcome metrics (behaviour change, adoption, satisfaction) are harder to attribute but closer to what matters. They’re worth the difficulty.

Impact metrics (revenue, profit, lifetime value) are what you actually care about — but the attribution problem means you often can’t tie them cleanly to specific actions.

The discipline is being explicit about which level you’re measuring, and what distortions come with it.


Attention follows measurement

Sales gets scrutinised because you can measure it. Pipeline, conversion, quota attainment — the numbers are visible and immediate.

Engineering, culture, capability building, long-term investments — these get less attention because the metrics are fuzzier or delayed. Not because they matter less.

This creates a systematic bias. Leadership attention flows to what’s measurable, which means the functions with clear metrics get disproportionate focus. The ones without get neglected — or measured badly, with proxies that create their own distortions.

You need to design against this. Deliberately allocate attention to the unmeasured. Accept that some things matter without proof. Build judgment about what's working even when the numbers can't tell you cleanly.


The gap

There’s always a gap between what you can measure and what matters.

Closing it completely is impossible. Pretending it doesn’t exist — or filling it with false precision — makes things worse. The discipline is working honestly within the gap: measuring what you can, being clear about what it tells you, and making space for judgment where measurement falls short.


Related: Number Sense · Four Questions · From Data to Information

#Numbers