Anish Patel

How Decisions Compound

The organisations that win aren’t the ones that make perfect decisions. They’re the ones whose decisions compound into learning.


The invisible system

Every organisation makes decisions. Few build systems that make those decisions compound.

The difference is stark. In one company, the same debates recur quarter after quarter. Someone proposes a pricing change, the room argues, a decision gets made, and six months later the same arguments resurface as if nothing had been learned. In another company, each decision leaves a residue — updated thresholds, sharper mental models, clearer criteria for next time.

Both companies employ smart people. Both have good intentions. The gap is architectural. One has a learning loop. The other has a sequence of isolated judgement calls.


Four questions, continuously

The loop rests on four questions that keep recurring:

What matters here? — forming a belief about what will create value

What’s really happening? — seeing reality clearly enough to test that belief

What are we going to change? — acting in ways that could prove you wrong

What do we expect to happen next? — making prediction explicit so you can learn

These aren’t phases to complete. They’re questions to keep asking. The value comes from treating them as a continuous cycle rather than a linear process.


The first tight loop: belief meets reality

Strategy and numbers form a tight loop. You state a belief; you look at reality. The numbers either confirm the belief or challenge it.

Most companies skip this. They produce strategies that sound decisive — “we will compete on customer intimacy” — but never specify what would prove them wrong. They build dashboards that track everything but connect to nothing. Data piles up without becoming information.

The discipline is to make the strategy testable. A strategy is a belief about cause and effect: if we do this, we expect that. Framed this way, it becomes a hypothesis. Real strategic choices have rational opposites. If the opposite sounds absurd, you haven’t chosen anything.

Then comes measurement. Not “what data should we collect?” but “what decision are we trying to make?” Data doesn’t become information until it passes through a decision process. Start with the choice, work backwards to the evidence.

This loop runs fast — in a meeting, in a review, in an afternoon spent with the numbers. It’s the loop that keeps strategy honest.


The second tight loop: action meets feedback

Action and prediction form another tight loop. You act; you observe what happens; you compare it to what you expected.

The failure mode here is familiar: teams execute without stating what they expect to see. A marketing campaign launches. Three months later, leads are up. Was it the campaign or the market? Nobody knows, because nobody recorded what they predicted.

The discipline is to make predictions explicit before you act. Not “we hope this works” but “we expect this campaign to generate 200 qualified leads in the first month, with cost per lead under £80.” Now you have something to compare.

When outcomes diverge from prediction, the loop closes. Either you adjust the action or you update the belief that drove it. The gap between prediction and result is where learning lives — but only if the prediction was recorded.
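The discipline above can be sketched in a few lines of code. The metric names and numbers are illustrative, echoing the campaign example, not taken from any real campaign:

```python
# A minimal sketch of recording a prediction before acting, so the
# outcome has something to be compared against.

prediction = {"qualified_leads": 200, "cost_per_lead": 80.0}  # stated up front
actual     = {"qualified_leads": 163, "cost_per_lead": 94.5}  # observed later

for metric, expected in prediction.items():
    observed = actual[metric]
    gap = observed - expected  # the gap is where the learning lives
    print(f"{metric}: expected {expected}, got {observed} (gap {gap:+})")
```

The point isn’t the tooling — a spreadsheet works just as well — but that the prediction exists in writing before the result does, so the comparison is honest.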


How wobbles become updates

Both loops require discipline about which signals actually matter.

Consider two managers who see the same churn spike — 4% jumping to 6%.

The first slams on the brakes. She freezes hiring, commissions a redesign, calls an emergency review. A month later churn slides back to normal. The spike was a client merger, a one-off blip. The fixes cost more than the problem.

The second pauses. She checks whether the data source is reliable, whether this is a pattern or a one-off, whether timing fits seasonal behaviour. With no firm trend, she nudges her concern down, runs a couple of retention plays, and stays on course. Six weeks later churn stabilises — and so does her plan.

Same data, different disciplines. One treated every wobble as signal. The other weighted the evidence before deciding how much to update. A single data point moves you a notch. Multiple consistent signals justify a bigger shift.


The slower loop

Behind both tight loops sits a slower one: how the organisation itself gets smarter.

Over time, the accumulation of action and review changes what you believe matters in the first place. The strategy itself evolves. What seemed like noise becomes signal. What seemed fixed becomes variable.

Each decision that goes through the loop leaves something behind: an updated threshold, a refined rubric, a sharper mental model. Codify your decision logic, and the next decision happens faster. The person who made the call doesn’t need to be in the room.

This is also why preserving know-how matters. When someone leaves, what’s at risk isn’t just their output — it’s the accumulated learning embedded in how they made decisions. The slow loop only compounds if the learning stays in the system.


What gets in the way

If the loop is so powerful, why don’t more organisations run it?

Decision latency. The constraint isn’t execution speed — it’s how long decisions take. Weeks disappear waiting for someone, somewhere, to say yes. The tight loops can’t run if the decisions don’t get made.

Reaction theatre. When pressure builds, teams abandon the loop. They react to every wobble instead of weighting evidence. They start new initiatives before finishing the old ones. They mistake motion for progress.

Strategy that isn’t testable. If you can’t specify what would prove your strategy wrong, you can’t run the first loop. Most strategy documents fail this test. They state intentions (“we will be customer-centric”) rather than hypotheses (“if we reduce time-to-resolution by 30%, retention will improve by 5 points”).

Measurement without decision design. Dashboards that track everything but connect to nothing. The data exists; the decision process doesn’t.

There’s a failure mode in the other direction too. Loops can become bureaucracy — review meetings that exist for their own sake, prediction templates that nobody reads, learning processes that generate documentation but not insight. The loop should accelerate decisions, not add friction. If it’s slowing things down, you’ve over-systematised.


The compound effect

Decisions compound when each one leaves the next one better informed.

The organisation that runs the loop learns what customers actually value, not what they say they value. It learns which leading indicators actually lead, not which ones the industry tracks. It learns what its own capabilities actually are, not what the org chart implies.

This learning compounds. Each cycle sharpens the model. Each sharpened model produces better hypotheses. Each better hypothesis generates more useful feedback.

The organisations that win aren’t the ones with the best initial strategy. They’re the ones whose strategy improves fastest. Build the loop, protect it from both neglect and over-engineering, and the decisions take care of themselves.


This essay synthesises ideas from:

Connects to Library: Systems Thinking · OODA Loop

See also: Reading Guide for the complete collection of Field Notes.

#foundations #synthesis