Anish Patel

Learning Under Uncertainty

Experience doesn’t automatically become learning. Learning requires a system.


The learning gap

Every organisation accumulates experience. Few convert it into learning.

The difference is visible in how they handle being wrong. In one company, a missed forecast triggers post-mortems, updated models, and sharper predictions next time. In another, the same miss gets explained away — unusual circumstances, external factors, bad luck — and the forecasting process continues unchanged. Both companies have smart people. Both have years of experience. One is learning. The other is just getting older.

The gap matters because the world keeps changing. Markets shift, competitors move, customers evolve. The organisations that survive aren’t the ones that predicted the future correctly. They’re the ones that updated fastest when their predictions failed.


Why experience isn’t enough

Experience creates familiarity. Learning creates improvement.

A leader with twenty years in an industry has seen a lot. But if those twenty years consisted of implicit predictions that were never checked against outcomes, the experience hasn’t sharpened judgement — it’s just confirmed whatever biases were already there. The pattern-matching feels confident. The confidence is unfounded.

This is the core problem: implicit predictions can’t teach you anything. When the strategy implies a view of the market but never states it explicitly, there’s no moment of contradiction when reality diverges. When the hire implies a belief about what capability is missing but the belief stays unspoken, there’s no way to check whether you were right. The predictions exist — every decision contains one — but they’re embedded in the action rather than exposed to testing.

Explicit predictions are different. “We expect this campaign to generate 200 qualified leads at under £80 each.” Now you have something to check. When results arrive, you’re not asking “did it work?” in the abstract. You’re comparing outcome to expectation. The gap is where learning lives.


The machinery of updating

Learning under uncertainty has a structure. It’s not mysterious, but it is disciplined.

Start with a prior. Every forecast begins with a belief. The question is whether you admit it. When someone says “I’m just following the data,” they’re hiding their prior — the assumptions about what matters, the intuitions about what’s likely, the model of how the world works. The prior is doing most of the work. The data confirms or adjusts; it doesn’t create the conclusion from scratch.

Good forecasters make priors explicit. “My base rate says 15% of projects like this succeed. This one has three factors that argue for higher, two that argue for lower. I’m updating to 22%.” The precision isn’t the point. Visibility is. Every step is exposed. Someone disagreeing can say “I accept your base rate but think factor X deserves more weight” rather than “I just have a different feeling about this.”
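A minimal sketch of that update in code, assuming the 15% base rate above and likelihood ratios for the five factors that are invented purely for illustration (Bayes’ rule in odds form):

```python
def update_odds(prior_prob, likelihood_ratios):
    """Combine a base rate with evidence expressed as likelihood ratios.

    Each ratio says how much more likely the evidence is if the project
    succeeds than if it fails: above 1 argues for success, below 1 against.
    """
    odds = prior_prob / (1 - prior_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Base rate: 15% of projects like this succeed.
prior = 0.15
# Three factors arguing for higher, two arguing for lower (illustrative values).
factors = [1.4, 1.3, 1.2, 0.8, 0.9]

posterior = update_odds(prior, factors)
print(f"Updated estimate: {posterior:.0%}")   # ~22% with these numbers
```

The exact ratios are disposable. What matters is that each assumption is a named number someone can challenge.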

Update proportionally. Strong evidence should move you a lot, weak evidence a little, and evidence equally consistent with both hypotheses should move you not at all. This sounds obvious, but most organisations violate it constantly — overreacting to single data points, underreacting to consistent patterns, treating all information as equally significant.

The discipline is weighting evidence before deciding how much to update. A single churn spike is noise until proven otherwise. Three consecutive months of rising churn is signal. The wobble doesn’t justify slamming on the brakes. The trend does.
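To make that weighting concrete, here is a sketch that treats “churn is flat” as the null hypothesis and asks how surprising the evidence is under each view. The 85% figure for how often a genuine trend produces a month-on-month rise is an assumption for illustration:

```python
def evidence_strength(consecutive_rises: int) -> float:
    """Rough likelihood ratio: 'churn is trending up' vs 'churn is flat'.

    Null model: if churn is flat, each month is as likely to tick up as down,
    so k consecutive rises happen by chance with probability 0.5 ** k.
    Alternative (illustrative assumption): if churn really is trending up,
    a month-on-month rise shows up 85% of the time.
    """
    p_if_trending = 0.85 ** consecutive_rises
    p_if_flat = 0.5 ** consecutive_rises
    return p_if_trending / p_if_flat

for k in (1, 3):
    print(f"{k} rising month(s): likelihood ratio ~{evidence_strength(k):.1f}x")
# 1 rising month  -> ~1.7x: barely worth an update
# 3 rising months -> ~4.9x: a real update is warranted
```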

Know what would change your mind. Before you commit to a position, specify what would falsify it. “If we don’t see X by month six, we’ll revisit the approach.” This creates a decision point — a moment where you can distinguish a worse-before-better dip from genuine failure, patient conviction from stubborn denial.

Without this, you’re asking for indefinite faith. With it, you’ve built a tripwire that forces honest reckoning.
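A tripwire can be as small as a dated record with a threshold. The fields, figures, and dates below are hypothetical; the point is that the condition and the review date are fixed before results arrive:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Tripwire:
    """A pre-committed condition that forces a review if it isn't met."""
    claim: str        # what we believe
    metric: str       # what we will measure
    threshold: float  # the level we expect to see by the check date
    check_by: date    # when we look
    action: str       # what happens if the threshold isn't met

wire = Tripwire(
    claim="The new onboarding flow improves activation",
    metric="30-day activation rate",
    threshold=0.40,
    check_by=date(2026, 6, 30),
    action="Revisit the approach and force a go/no-go decision",
)

def check(wire: Tripwire, observed: float, today: date) -> str:
    if today < wire.check_by:
        return "Too early to judge; hold the course."
    if observed >= wire.threshold:
        return "Threshold met; the conviction was warranted."
    return f"Tripwire fired: {wire.action}"
```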


Why we stay wrong

Being wrong is inevitable. Staying wrong is a choice.

Cognitive dissonance is the mechanism. When evidence contradicts our beliefs, we experience discomfort — and we’re wired to resolve it the easiest way possible. Changing beliefs is hard. Reinterpreting evidence is easier.

So we explain away — “that was an edge case,” “the data wasn’t representative,” “external factors intervened.” Each individual explanation might be valid. But over time, a pattern emerges: contradicting evidence gets neutralised, confirming evidence gets amplified, and the belief never updates.

Smart people are often better at staying wrong — they’re better at generating sophisticated reasons why the contradicting evidence doesn’t count.

The fix is structural. Aviation figured this out: when a plane crashes, investigators recover the black box and reconstruct what happened. Findings are shared globally. Every airline learns from every failure. The result is extraordinary safety despite extraordinary complexity.

Business failures rarely come with a black box. Most are ambiguous setbacks. You can tell yourself a story that protects your model of the world. “We were right, but…” is the signature phrase of staying wrong. The ambiguity allows reinterpretation. Each failure becomes a one-off, a special case, a victim of circumstances. The system never updates.

The design principle: create red flags. Make failure stark enough that reinterpretation becomes difficult. Track predictions against outcomes. Write down what you expect to happen, and when. Check later whether it did. The prediction log creates a record you can’t argue with.


The consensus trap

Universal agreement should make you nervous.

When everyone in the room thinks the same thing, either you’ve found the obvious right answer — in which case there’s no advantage, because everyone else sees it too — or you’ve found a collective blind spot.

Consensus forms through predictable mechanisms. Recent experience gets overgeneralised. Narrative coherence makes stories that explain everything feel trustworthy. Social proof cascades until everyone believes something because everyone believes it. Nuance collapses in transmission until caveats evaporate and confidence calcifies.

The practical check: when you find yourself agreeing with everyone, ask what would make this wrong. If no one can articulate a realistic failure mode, you’ve found a blind spot. Ask who disagrees and why. Somewhere, someone holds the opposite view. If you can’t articulate their argument, you don’t understand your own position well enough.

Being contrarian isn’t the goal. Being right when consensus is wrong is. That requires understanding something the consensus has missed, not just preferring to feel clever.


The confidence problem

Learning cultures require something most organisations punish: admitting uncertainty.

When a leader says “I’m 60% confident this will work,” stakeholders hear weakness. Boards want certainty. Investors want conviction. The reward structure favours appearing confident over being calibrated. So leaders learn to hide their uncertainty, state predictions with false precision, and quietly explain away the misses later.

This creates a trap. The organisation can’t learn because the predictions were never honest. The predictions weren’t honest because the culture punishes uncertainty. The culture punishes uncertainty because no one has modelled what calibrated confidence looks like.

Breaking the trap requires top-cover. Someone senior has to demonstrate that admitting uncertainty is strength, not weakness. “I think there’s a 70% chance this works. Here’s what would move me to 90%, and here’s what would drop me to 40%.” When leadership models this, it becomes possible for everyone else.

The alternative is an organisation that looks confident and learns nothing.


Building the system

Learning under uncertainty isn’t a mindset. It’s a practice that requires specific mechanisms.

Prediction logs. Not every prediction — that’s unsustainable. But the ones that matter: forecasts you’re acting on, beliefs you’d be uncomfortable updating, predictions where you have genuine confidence. Write them down, check them later. Over time, you learn where your intuitions mislead you.
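A minimal sketch of such a log, assuming a plain CSV file at a hypothetical path called prediction_log.csv:

```python
import csv
import os
from datetime import date

LOG_PATH = "prediction_log.csv"   # hypothetical location
FIELDS = ["made_on", "prediction", "confidence", "resolve_by", "outcome"]

def log_prediction(prediction: str, confidence: float, resolve_by: date) -> None:
    """Append one prediction; 'outcome' stays blank until the resolve date."""
    new_file = not os.path.exists(LOG_PATH)
    with open(LOG_PATH, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "made_on": date.today().isoformat(),
            "prediction": prediction,
            "confidence": confidence,
            "resolve_by": resolve_by.isoformat(),
            "outcome": "",   # fill in "right" or "wrong" when you check
        })

# Example entry (confidence and date are illustrative):
log_prediction("Campaign generates 200 qualified leads at under £80 each",
               confidence=0.7, resolve_by=date(2026, 3, 31))
```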

Calibration tracking. When you say you’re 70% confident, are you right 70% of the time? Most people are overconfident. Their 90% predictions happen 70% of the time. The log shows you where. Some organisations run prediction markets internally. The mechanism matters less than the discipline.
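Checking calibration from a resolved log takes a few lines. The records below are placeholders standing in for entries read back from the prediction log:

```python
from collections import defaultdict

# Resolved predictions as (stated confidence, whether it came true).
resolved = [(0.9, True), (0.9, False), (0.7, True), (0.7, True),
            (0.7, False), (0.6, False), (0.6, True)]

buckets: dict[float, list[bool]] = defaultdict(list)
for confidence, came_true in resolved:
    buckets[confidence].append(came_true)

for confidence in sorted(buckets):
    outcomes = buckets[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"Said {confidence:.0%} -> happened {hit_rate:.0%} "
          f"({len(outcomes)} predictions)")
```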

Pre-mortems. Before committing to a plan, ask: “It’s a year from now and this failed. What happened?” This surfaces risks that optimism obscures. Imagining failure in hindsight is easier than admitting uncertainty in the present; the frame of “it already failed” makes specific causes easier to name.

Explicit decision criteria. When decisions repeat, codify the logic. Not to remove judgement, but to expose the model. Then you can check whether the model is working, update it when it isn’t, and make decisions without the original decision-maker in the room.
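One blunt way to codify a repeated decision is a weighted checklist. The criteria, weights, and threshold below are invented for illustration; the value is that the model is explicit enough to be checked against outcomes and revised:

```python
# Criteria, weights, and threshold are invented for illustration.
CRITERIA = {
    "fits_current_strategy": 3,
    "payback_under_12_months": 2,
    "team_has_capacity": 2,
    "reversible_if_wrong": 1,
}
THRESHOLD = 6   # proceed at or above this score; otherwise escalate

def score(candidate: dict[str, bool]) -> int:
    """Sum the weights of every criterion the candidate satisfies."""
    return sum(weight for name, weight in CRITERIA.items() if candidate.get(name))

proposal = {
    "fits_current_strategy": True,
    "payback_under_12_months": False,
    "team_has_capacity": True,
    "reversible_if_wrong": True,
}
total = score(proposal)
print(total, "-> proceed" if total >= THRESHOLD else "-> escalate for review")
```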

Psychological safety for being wrong. None of this works if people can’t admit mistakes. The question is simple: when was the last time someone in your organisation was rewarded for admitting they were significantly wrong? If you can’t remember, the system is broken. Data won’t flow to places that punish honesty.


Getting started

Start with yourself. Pick three predictions you’re making this quarter — forecasts you’re acting on, beliefs you’re confident about. Write them down with enough specificity that you’ll know if you were wrong. Check them in ninety days.
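For concreteness, a sketch of what “enough specificity” might look like for one quarter. The three claims, figures, and confidence levels are placeholders, not recommendations:

```python
from datetime import date, timedelta

# Three predictions for the quarter, specific enough to be checkable later.
check_on = date.today() + timedelta(days=90)

predictions = [
    ("New pricing page lifts trial signups by at least 10%", 0.70),
    ("Both open engineering roles are filled and onboarded", 0.80),
    ("Enterprise pipeline reaches £500k of qualified opportunities", 0.60),
]

for claim, confidence in predictions:
    print(f"[check on {check_on}] {confidence:.0%} confident: {claim}")
```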

That’s it. One person, three predictions, one quarter.

If you’re right 100% of the time, you weren’t predicting anything uncertain. If you’re wrong and surprised, you’ve learned something about your calibration. Either way, you’ve started building the muscle.

Once you’ve done it yourself, you can ask your team to do it. Model the update in public: “I predicted X, I got Y, here’s what I think I missed.” That’s how learning cultures start — not with systems, but with someone senior demonstrating that being wrong is survivable.


The compound effect

Learning compounds in the same way decisions do.

Each prediction, checked against reality, sharpens your model. Each sharpened model produces better hypotheses. Each better hypothesis generates more useful feedback. The organisation that runs this loop learns what actually predicts success, not what the industry assumes. It learns which of its intuitions are calibrated and which drift systematically. It learns faster than competitors who are merely accumulating experience.

This is the real competitive advantage. Not the perfect initial model — no one has that. The fastest rate of improvement. The willingness to be wrong and the discipline to notice.

The organisations that thrive under uncertainty aren’t the ones that predicted the future correctly. They’re the ones that built systems to learn from being wrong — and then actually used them.

Start with three predictions. Check them in ninety days. See what you learn.


This essay synthesises ideas from the Library: Bayesian Probability · Black Box Thinking · OODA Loop

See also: Reading Guide for the complete collection of Field Notes.

#prediction #synthesis