Anish Patel

Four Questions

Pilots have checklists. Surgeons have checklists. These exist because high-stakes decisions deserve consistent discipline — a way to catch what you might otherwise miss under pressure.

Managers make consequential decisions constantly. Decisions that affect customers, employees, livelihoods. Yet most managers work without any equivalent discipline. Instinct, experience, pattern-matching — all valuable, but unsystematic.

Four questions matter for any significant decision. Not as a rigid formula, but as a discipline for thinking clearly when the stakes are real.


What should we do?

This is strategy. Not vision statements or aspirations, but the actual choice: what are we going to do, and what are we choosing not to do?

Good answers are specific enough to guide action. “Grow revenue” isn’t a strategy. “Win the mid-market segment by being the easiest to implement” is closer — it tells you what to prioritise and what to deprioritise.

Teams that can’t clearly articulate what they’re doing (and what they’re deliberately not doing) end up spreading effort across too many fronts. Everything becomes a priority, which means nothing is.


Can we prove it?

This is where numbers come in — not as an afterthought, but as a test of the strategic choice.

The question isn’t “what metrics do we have?” It’s “do we have the evidence that gives us confidence we’ve made the right call?” Sometimes that means building new instrumentation. Sometimes it means acknowledging you’re working on belief, not proof, and being honest about that.

The discipline is separating signal from noise, understanding what’s material, and being clear about confidence levels. A strategy built on shaky data is a bet. That’s fine — most strategies are bets — but you should know when you’re betting.

The numbers might confirm the strategy, or they might suggest it’s wrong. Either way, you need to look.


How do we get it done?

This is action — not urgency or busyness, but the deliberate design of a system for follow-through.

A strategy without an action plan is just an idea. The questions here are practical: Who owns what? What’s the sequence? What mechanisms keep things moving — the cadences, the forums, the feedback loops? And critically: do we have the capability and capacity to actually do this?

Plans that look good on paper often ignore reality. Teams already overloaded. Dependencies unacknowledged. Timelines that assume everything goes perfectly.

Good action planning is honest about constraints. Given what we actually have, what can we realistically achieve? And what mechanisms will catch problems early, not after months of drift?


Will it work?

This is prediction — and it’s the question that gets skipped most often.

Before executing, you should be able to articulate what you expect to happen. Not as a forecast to be held to, but as a testable hypothesis. If this works, what should we see? In what timeframe? What leading indicators would tell us we’re on track — or off it?

This is where you stress-test the plan. What could go wrong? What scenarios haven’t been considered? What would have to be true for this to fail?

The discipline isn’t about being right. It’s about creating conditions to learn. Without expectations articulated upfront, you can’t tell later whether you succeeded, failed, or just got lucky. Prediction turns action into a learning loop.


The questions scale

These four questions work at different altitudes. A small operational decision still benefits from asking “what are we trying to do, how confident are we, how will we execute, and what do we expect?” — just more quickly, with lighter-weight answers.

A major strategic bet — an acquisition, a market entry, a transformation — needs the same four questions, but with more underneath each one. Deeper analysis, more rigorous testing, more explicit scenarios.

The questions don’t change. The weight behind them does.


Related: Applied Scientific Thinking · Decision Architecture · Number Sense

#strategy #numbers #action #prediction