Nothing to Update
Every decision contains a prediction. The plan assumes certain conditions will hold. The action assumes a mechanism will work. The investment assumes a response will follow. These predictions exist whether you acknowledge them or not.
Most go unexamined. The strategy implies a view of the market. The hire implies a belief about what capability is missing. The product bet implies a theory of what customers will value. But the predictions stay implicit — embedded in the decision, never stated clearly enough to be tested.
This matters because implicit predictions can’t teach you anything.
The usual framing is that updating beliefs is psychologically hard. Cognitive dissonance, sunk cost, ego protection. No doubt true. But the more mundane problem is that there’s often nothing specific to update. The prediction was never articulated. When reality diverges, there’s no clear moment of contradiction — just a vague sense that things aren’t working.
Explicit predictions are different. “We expect this campaign to generate 200 qualified leads at under £80 each.” Now you have something to check. When results arrive, you’re not asking “did it work?” in the abstract. You’re comparing outcome to expectation. The gap is where learning lives.
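A minimal sketch of what that looks like as a record rather than a sentence, using the campaign's own numbers (the `Prediction` structure, its field names, and the actual results are all hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    """An explicit, checkable expectation. All field names are illustrative."""
    claim: str
    expected_leads: int       # at least this many qualified leads
    max_cost_per_lead: float  # at no more than this cost each

    def check(self, actual_leads: int, actual_cost: float) -> dict:
        """Compare outcome to expectation; the gaps are the information."""
        return {
            "leads_gap": actual_leads - self.expected_leads,
            "cost_gap": actual_cost - self.max_cost_per_lead,
            "held": actual_leads >= self.expected_leads
                    and actual_cost <= self.max_cost_per_lead,
        }

campaign = Prediction(
    claim="Campaign generates 200 qualified leads at under £80 each",
    expected_leads=200,
    max_cost_per_lead=80.0,
)
print(campaign.check(actual_leads=140, actual_cost=95.0))
# {'leads_gap': -60, 'cost_gap': 15.0, 'held': False}
```

The point isn't the code; it's that a record like this forces the expectation to be specific enough to produce a gap.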
The immediate benefit is course correction. If the prediction doesn’t hold, you can adjust. But there’s a deeper benefit that compounds over time.
Each explicit prediction, checked against reality, sharpens your model of how things work. Not just for this decision — for future ones. You start to notice which of your assumptions tend to be right and which tend to be optimistic. You learn where your intuition is calibrated and where it drifts.
This is how good judgement develops. Not through experience alone, but through experience made legible. The leader who’s “seen it before” has an advantage only if they’ve extracted the pattern. Implicit predictions create experience. Explicit predictions create learning.
Organisations have the same property. When predictions stay implied, each decision references the prior one. The chain extends, each link assuming the previous was sound. This works until conditions shift — and then there’s no clear point of failure, just accumulated drift.
When predictions are explicit, the organisation builds a shared world model. Not just individual judgement, but collective calibration. People learn what the company tends to get right and where it tends to miss. The model sharpens across decisions and across people.
There’s another benefit: explicit predictions depersonalise the update. When a prediction lives only in someone’s head, divergence becomes “you were wrong.” Defences rise. The conversation becomes about credibility rather than reality. When the prediction is written down — separate from the person who made it — divergence becomes “that was wrong.” The artefact takes the hit. You can examine it together, ask what was missed, and update without anyone losing face.
The discipline isn’t radical. Before a significant decision, state what you expect to happen. Be specific enough that you’ll know if you were wrong. Write it down — not for bureaucracy, but so you can check later.
When results arrive, compare them to the prediction. Not “did it work?” but “did it work the way we expected, for the reasons we expected?”
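A sketch of what that review step might capture, again with hypothetical names and figures. Outcome and mechanism are recorded separately, because they can come apart:

```python
# Hypothetical review of the campaign prediction above. The two checks are
# deliberately separate: "did it work?" and "did it work the way we expected?"
# can diverge, and each combination updates the model differently.
review = {
    "prediction":        "200 qualified leads at under £80 each",
    "outcome":           "230 leads at £75 each",          # hypothetical result
    "outcome_held":      True,
    "assumed_mechanism": "outbound converts at ~2%",       # hypothetical reason
    "mechanism_held":    False,  # say the leads came from an unplanned referral spike
}
if review["outcome_held"] and not review["mechanism_held"]:
    print("Right for the wrong reasons: the decision worked, the model didn't.")
```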
The gap between prediction and outcome is information. But only if the prediction was explicit enough to create a gap.
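This is where the Bayesian Probability note linked below connects: the explicit prediction defines the event you can condition on, and without it there is no likelihood to update with. A minimal sketch of one such update, with hypothetical numbers:

```python
# Illustrative Bayes update: how much a single explicit miss should shift
# confidence in the model behind the decision. All probabilities are invented.
p_model_right = 0.7    # prior: our model of the channel is sound
p_miss_if_right = 0.2  # a sound model still misses sometimes
p_miss_if_wrong = 0.8  # an unsound model misses most of the time

p_miss = p_miss_if_right * p_model_right + p_miss_if_wrong * (1 - p_model_right)
posterior = p_miss_if_right * p_model_right / p_miss
print(round(posterior, 2))  # 0.37 — one explicit miss roughly halves confidence
```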
Related: How Decisions Compound · Four Questions · Fresh Eyes
Connects to Library: Bayesian Probability