Unknown and Unknowable
Robert McNamara measured everything. As Secretary of Defense during Vietnam, he brought systems analysis to war — body counts, kill ratios, hamlet evaluations. The numbers went up. The war was lost.
The problem wasn’t bad data. It was what the data didn’t capture. Enemy morale. Political will. The things that actually determined the outcome couldn’t be put on a chart, so they weren’t discussed. The measurable drove out the meaningful.
This is now called the McNamara Fallacy: measure what can be easily measured; disregard what can't; presume what can't be measured isn't important; conclude what can't be measured doesn't exist.
The people who knew best
Here’s the strange part. The loudest warnings about measurement came from the people who understood it best.
Deming spent his career in statistical process control. He taught Japan how to use data to transform manufacturing quality. And he kept saying: “The most important figures that one needs for management are unknown or unknowable.”
Goldratt trained as a physicist. The Theory of Constraints is rigorous, mathematical. Yet The Haystack Syndrome is largely about how data obscures rather than reveals. He drew a sharp line between data (any string of characters) and information (the answer to a question asked). Most measurement systems produce mountains of the former and almost none of the latter.
Wheeler, the statistician who extended Shewhart’s work on process control, emphasised that numbers without context are meaningless. Is 7.3 high or low? Compared to what? Is the variation signal or noise? The number alone can’t tell you. Judgment can.
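To make the question concrete, here is a minimal sketch of the process behaviour chart Wheeler championed, applied to an invented series of weekly scores. The 2.66 constant is the standard XmR scaling factor for individual values; points outside the natural process limits are signal, everything inside is routine variation.

```python
# Minimal XmR (individuals and moving range) sketch. The weekly_scores
# series is invented for illustration; 2.66 is the standard XmR
# scaling constant for charts of individual values.

def xmr_limits(values):
    """Natural process limits: (lower, centre, upper)."""
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    centre = sum(values) / len(values)
    spread = 2.66 * sum(moving_ranges) / len(moving_ranges)
    return centre - spread, centre, centre + spread

def signals(values):
    """Indices of points outside the natural process limits."""
    lower, _, upper = xmr_limits(values)
    return [i for i, v in enumerate(values) if v < lower or v > upper]

weekly_scores = [7.1, 7.4, 7.2, 7.3, 7.0, 7.5, 7.2, 5.0, 7.3, 7.1]
lower, centre, upper = xmr_limits(weekly_scores)
print(f"limits: {lower:.1f} to {upper:.1f} around {centre:.1f}")
print(signals(weekly_scores))  # [7]: only the 5.0 is signal; the rest is noise
```

Inside the limits, chasing an explanation for 7.0 versus 7.4 is chasing noise. Outside them, something real happened and is worth a conversation.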
These weren’t measurement sceptics. They built the methods. They just understood where the methods stop working.
Why rigour breeds humility
The less you understand statistics, the more you trust numbers. A figure in a spreadsheet feels solid, scientific, objective. The spreadsheet doesn’t lie.
The more you understand statistics, the more you see what’s underneath. Every number is the output of choices — what to count, how to aggregate, what to exclude, where to draw boundaries. Every estimate carries uncertainty that the decimal places hide. The act of measurement itself can change the phenomenon.
That’s why the quants worried. They saw the assumptions. They knew how much judgment was baked into every metric. They watched people treat outputs as facts and worried about what would happen when the assumptions broke.
The thing you can’t measure
Take trust. It matters enormously — whether people believe what you say, whether they’ll raise problems early, whether they’ll go the extra mile when things get hard.
You can survey for trust. But the survey signals distrust. (“Why are they asking? What went wrong?”) You can track proxies — retention, referrals, engagement scores. But proxies can be gamed, and they’re several steps removed from the thing itself.
The thing itself — whether someone believes you — isn’t a number. You can sense it. You can build it or destroy it. You can’t put it on a dashboard.
Morale is the same. Culture is the same. Judgment is the same. Potential is the same. The most consequential factors in any organisation resist quantification. Not because we haven’t found the right metric yet. Because the attempt to measure them changes or destroys them.
The inversion
“What gets measured gets managed” sounds like wisdom. It’s actually a warning.
If you only manage what’s measured, you stop managing what isn’t. The measurable crowds out the meaningful. Attention flows to the numbers, not to where the leverage is. The unmeasured doesn’t disappear — it just stops being discussed.
The statement is also backwards. What gets managed should determine what gets measured, not the reverse. Start with what matters. Ask whether measurement helps. If yes, measure. If no, manage it anyway — through conversation, observation, judgment. Don’t let the metric tail wag the management dog.
How to hold it
The answer isn’t to abandon numbers. It’s proportion.
Use them for what they’re good at. Tracking variation over time. Comparing like with like. Testing whether an intervention worked. Numbers are powerful when answering specific questions within stable systems.
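As one sketch of that third use, with invented before-and-after defect counts: a permutation test asks how often a gap this large would appear by pure chance if the intervention had done nothing.

```python
# A sketch of testing whether an intervention worked. The samples and
# effect size are invented for illustration.
import random

before = [12, 15, 11, 14, 13, 16, 12, 15]  # e.g. weekly defect counts
after  = [10, 11, 9, 12, 10, 11, 13, 9]    # same metric, post-change

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(before) - mean(after)
pooled = before + after
extreme = 0
trials = 10_000
for _ in range(trials):
    # If the intervention did nothing, the labels are arbitrary:
    # shuffle them and see how big the gap is by chance alone.
    random.shuffle(pooled)
    resampled = mean(pooled[:len(before)]) - mean(pooled[len(before):])
    if resampled >= observed:
        extreme += 1

print(f"observed drop: {observed:.2f}, p = {extreme / trials:.3f}")
```

A specific question, a stable metric, a clean comparison: this is the territory where numbers earn their keep.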
Don’t manufacture precision. “Customer satisfaction is 7.3” sounds more rigorous than “customers seem reasonably happy.” It isn’t. The decimal place is false confidence.
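To see how little that decimal means, here is a sketch with hypothetical ratings that average exactly 7.3: bootstrapping the mean shows the interval the point estimate hides.

```python
# Hypothetical survey ratings whose mean is 7.3; the bootstrap shows
# how little the decimal place means at this sample size.
import random

scores = [8, 7, 9, 6, 7, 8, 5, 9, 7, 7]  # invented 1-10 responses

def mean(xs):
    return sum(xs) / len(xs)

boot = sorted(
    mean(random.choices(scores, k=len(scores))) for _ in range(10_000)
)
low, high = boot[250], boot[9_750]  # central 95% of bootstrap means
print(f"mean {mean(scores):.1f}, plausibly anywhere from {low:.1f} to {high:.1f}")
```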
Name the qualitative. If you’re betting on judgment, say so. “We trust this team” is more honest than inventing a score. The decision is qualitative; dressing it up doesn’t improve it.
Ask what’s missing. In every review: what important things aren’t on this dashboard? What would we discuss if we couldn’t see any numbers? That’s often where the real risk lives.
Doing the unscalable
The practical answer is unstructured input. Things that don’t aggregate neatly but reveal what dashboards can’t.
Follow a single thread. One customer complaint, one support ticket, one piece of feedback — trace it all the way back. Not because it’s statistically representative. Because the specific reveals texture the aggregate hides. You learn more from one hour watching a user struggle than from a thousand NPS responses.
Use the meeting margins. The first minute or two before the agenda starts. The temperature check. Energy levels, hesitation, who’s making eye contact, who isn’t. This is where you sense mood before it shows up in attrition numbers six months later.
Watch for silent dissent. People self-censor. They don’t want to be negative, don’t want to slow things down, don’t feel senior enough to push back. The signal is often in what’s not being said. Who hasn’t spoken? Who’s nodding along but looks uncomfortable? Who stopped raising concerns three meetings ago?
Hear it from the horse's mouth. Regular unstructured check-ins with the team. Not status updates; those are data. Actual conversations about what's hard, what's frustrating, what's working. The colour that numbers can't capture.
None of this scales. That’s the point. Paul Graham’s advice to startups — do things that don’t scale — applies to information gathering too. The unscalable is where you find what the scalable systems miss.
Deming, Goldratt, Wheeler — they built quantitative methods. They spent their later careers warning people not to worship the outputs.
The numbers are tools. They’re not reality.
The test isn’t whether you can put a number on it. It’s whether you’re paying attention to it.
Related: From Data to Information · Accounting for Widgets · Monday Notes
Connects to Library: Goodhart’s Law · McNamara Fallacy · Deming’s 14 Points