Confidence
A number without a confidence level is a guess dressed as a fact.
The illusion of precision
A business case shows 127% ROI. Not “roughly 100%” or “between 80% and 150%” — exactly 127%.
The precision is false. The model has five assumptions, each estimated only to within ±20%. The output inherits uncertainty from all of them. That 127% might really be anywhere from 60% to 200%, or worse.
But the spreadsheet shows 127%, so that’s what gets discussed. The confidence interval — how wide the range of plausible outcomes actually is — disappears.
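One way to see through the false precision is to simulate the model rather than read off the single cell. The sketch below is a stand-in for the business case: it assumes, purely for illustration, that the 127% is the product of five multiplicative drivers, each drawn uniformly within ±20% of its point estimate. The real model will be shaped differently; the width of the result is the point.

```python
import numpy as np

rng = np.random.default_rng(0)

POINT_ESTIMATE_ROI = 1.27   # the "exact" 127% the spreadsheet shows
N_ASSUMPTIONS = 5           # five inputs, each estimated to within ±20%
N_TRIALS = 100_000

# Each trial scales the point estimate by five independent factors in [0.8, 1.2].
factors = rng.uniform(0.8, 1.2, size=(N_TRIALS, N_ASSUMPTIONS))
roi = POINT_ESTIMATE_ROI * factors.prod(axis=1)

low, mid, high = np.percentile(roi, [5, 50, 95])
print(f"5th percentile:  {low:.0%}")
print(f"median:          {mid:.0%}")
print(f"95th percentile: {high:.0%}")
# Typical output: roughly 80% to 190% ROI. Nothing like "exactly 127%".
```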
Point estimates hide distributions
Every forecast is a distribution, not a point.
“Revenue will be £10m” really means “revenue will most likely be somewhere between £8m and £12m, could be as low as £6m or as high as £15m, with £10m being our best guess.”
Collapsing that distribution to a single number loses information:
It hides the downside. If £8m is the realistic floor and £6m is the disaster scenario, decision-makers should know that. The point estimate of £10m obscures the risk.
It hides the upside. If £15m is achievable with good execution, that option value matters. The point estimate doesn’t capture it.
It treats all £10m forecasts equally. A “tight” £10m (almost certainly between £9m and £11m) is fundamentally different from a “wide” £10m (could easily be £5m or £20m). Same point estimate, completely different risk profiles.
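To make "tight versus wide" concrete, here is a minimal sketch. It assumes, for illustration only, that both forecasts are roughly normal around £10m, one with a spread of about £0.6m and one of about £4m, and compares what the same point estimate implies about downside risk.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Two forecasts with the same £10m point estimate but very different spreads.
# The distributions and standard deviations are illustrative assumptions.
tight = rng.normal(loc=10.0, scale=0.6, size=N)   # "almost certainly £9m-£11m"
wide  = rng.normal(loc=10.0, scale=4.0, size=N)   # "could easily be £5m or £20m"

for name, sample in [("tight", tight), ("wide", wide)]:
    p_below_8 = (sample < 8.0).mean()
    lo, hi = np.percentile(sample, [5, 95])
    print(f"{name:5s}  mean £{sample.mean():.1f}m   "
          f"90% range £{lo:.1f}m-£{hi:.1f}m   P(revenue < £8m) = {p_below_8:.0%}")
# Same point estimate; only one of these forecasts carries real downside risk.
```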
How errors compound
Business cases stack assumptions. Each assumption has uncertainty. The uncertainties multiply.
Example: A product launch depends on:
- Development time (estimated 12 months, could be 9-18)
- Production ramp (estimated 6 months, could be 4-12)
- Sales conversion (estimated 40%, could be 25-55%)
- Price realisation (estimated £100, could be £80-120)
If each assumption carries ±30% uncertainty and they're independent, the combined uncertainty isn't ±30% — it's much wider. And if the assumptions are correlated (a development delay pushes out the ramp, which pushes out sales), the uncertainty is wider still.
The model shows a single ROI. Reality has a distribution of outcomes, and the tails are fatter than the point estimate suggests.
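A sketch of that compounding, using the ranges from the launch example. The value formula, the 36-month sales window, the lead volume and the triangular shapes are all invented for illustration; the second run adds a crude common shock so that a development slip also drags the ramp and conversion.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Ranges from the example above; the triangular shapes, the sales window and
# the simple value formula are illustrative assumptions.
dev   = rng.triangular(9, 12, 18, N)         # development time, months
ramp  = rng.triangular(4, 6, 12, N)          # production ramp, months
conv  = rng.triangular(0.25, 0.40, 0.55, N)  # sales conversion
price = rng.triangular(80, 100, 120, N)      # price realisation, £

WINDOW = 36                                  # assumed sales window, months
LEADS_PER_MONTH = 1_000                      # assumed, purely for scale

def value(dev, ramp, conv, price):
    selling_months = np.maximum(WINDOW - (dev + ramp), 0)
    return selling_months * LEADS_PER_MONTH * conv * price

point = value(12, 6, 0.40, 100)              # what the spreadsheet would show
independent = value(dev, ramp, conv, price)

# Crude correlation: when development slips, ramp and conversion suffer too.
slip = (dev - 12) / 6                        # +1 is roughly a six-month slip
correlated = value(dev,
                   ramp * (1 + 0.3 * np.maximum(slip, 0)),
                   conv * (1 - 0.1 * np.maximum(slip, 0)),
                   price)

for name, v in [("independent", independent), ("correlated", correlated)]:
    lo, hi = np.percentile(v / point, [5, 95])
    print(f"{name:12s} 90% of outcomes land between {lo:.0%} and {hi:.0%} "
          f"of the point-estimate value")
```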
This is why projects that look attractive on paper so often disappoint. The point estimates were the best case in each dimension. The probability of hitting every best case simultaneously is near zero. See: ROI and the Cost of Delay.
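The arithmetic behind that is worth making explicit. If, say, each point estimate has a 30% chance of being met or beaten (an illustrative figure, not one from the example), the chance of hitting all of them at once collapses quickly:

```python
# Illustrative: if each point estimate independently has a 30% chance of being
# met or beaten, the chance of hitting all of them at once shrinks fast.
p_single = 0.30
for n in (2, 3, 4, 5):
    print(f"{n} assumptions: {p_single ** n:.1%}")
# 2 -> 9.0%, 3 -> 2.7%, 4 -> 0.8%, 5 -> 0.2%
```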
Sensitivity: what breaks the model
Sensitivity analysis asks: which assumptions matter most?
Some inputs barely affect the output. If production cost varies by ±20% and only changes ROI by ±3%, don’t spend time refining that estimate.
Some inputs dominate. If development time varies by ±50% and swings ROI from +80% to -40%, that assumption deserves scrutiny. Get better information, plan for contingencies, or accept you’re making a bet.
The discipline is knowing which assumptions are load-bearing before you commit. A model that’s only viable if three assumptions hit their optimistic end is fragile. A model that works even with two assumptions at their pessimistic end is robust.
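A minimal one-at-a-time sensitivity sketch, reusing the illustrative launch model from above: hold everything at its point estimate, swing one input at a time across its range, and see which input moves the output most. Only the ranges come from the example; the model itself is assumed.

```python
# One-at-a-time sensitivity: which assumption is load-bearing?
# The value model below is an illustrative assumption, reusing the launch
# example: value falls as time-to-market eats into a fixed sales window.
WINDOW, LEADS_PER_MONTH = 36, 1_000           # assumed, purely for scale

def value(dev=12, ramp=6, conv=0.40, price=100):
    return max(WINDOW - (dev + ramp), 0) * LEADS_PER_MONTH * conv * price

base = value()
ranges = {
    "dev":   (9, 18),
    "ramp":  (4, 12),
    "conv":  (0.25, 0.55),
    "price": (80, 120),
}

swings = []
for name, (lo, hi) in ranges.items():
    v_lo = value(**{name: lo})
    v_hi = value(**{name: hi})
    swings.append((abs(v_hi - v_lo) / base, name, v_lo, v_hi))

# Largest swing first: these are the assumptions worth refining.
for swing, name, v_lo, v_hi in sorted(swings, reverse=True):
    print(f"{name:5s} moves value between {v_lo / base:.0%} and "
          f"{v_hi / base:.0%} of the base case (a {swing:.0%} swing)")
```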
Expressing uncertainty
Ranges, not points. “Revenue between £8m and £12m” is more honest than “Revenue: £10m”. It reminds everyone — including you — that you’re estimating, not predicting.
Confidence levels. “80% confident revenue will be between £7m and £13m” is more informative still. It makes explicit how often you expect to be wrong.
Scenarios, not sensitivities. Instead of varying one assumption at a time, model coherent scenarios: base case, optimistic case, pessimistic case. Assumptions often move together — a good scenario has strong demand and fast ramp and high conversion.
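A sketch of the scenario approach on the same illustrative model. The scenario values are invented; the point is that each scenario is an internally consistent world, not a single dial turned on its own.

```python
# Coherent scenarios: assumptions move together, not one at a time.
# The value model and the scenario values are illustrative assumptions.
WINDOW, LEADS_PER_MONTH = 36, 1_000

def value(dev, ramp, conv, price):
    return max(WINDOW - (dev + ramp), 0) * LEADS_PER_MONTH * conv * price

scenarios = {
    # A good world has fast development AND a quick ramp AND strong conversion.
    "optimistic":  dict(dev=10, ramp=5,  conv=0.50, price=110),
    "base":        dict(dev=12, ramp=6,  conv=0.40, price=100),
    # A bad world tends to go wrong everywhere at once.
    "pessimistic": dict(dev=17, ramp=10, conv=0.28, price=85),
}

base = value(**scenarios["base"])
for name, assumptions in scenarios.items():
    v = value(**assumptions)
    print(f"{name:11s} £{v:,.0f}  ({v / base:.0%} of base case)")
```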
What would change your mind? State the assumptions that would invalidate the model. If customer acquisition cost turns out to be 50% higher than assumed, the project is underwater. Making that explicit upfront is more useful than pretending the estimate is reliable.
The practice
Ask for ranges, not points. When someone gives you a forecast, ask: “What’s the range? What would make it higher or lower?” If they can’t answer, the estimate is less informed than it appears.
Identify the key sensitivities. For any model, ask: which two or three assumptions most affect the output? Focus diligence there. The rest is noise.
Stress-test the fragile cases. If the model requires multiple assumptions to hit their targets simultaneously, ask: what’s the probability of that? What happens if one or two miss?
Update as you learn. The estimate that justified the investment isn’t sacred. As reality unfolds, narrow or widen your confidence intervals. Sometimes the right response is to stop.
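One way to make "update as you learn" mechanical is a simple Bayesian update. The sketch below uses a normal prior updated with a normal observation; every number in it is invented for illustration.

```python
# A minimal Bayesian update on a revenue forecast (normal prior, normal
# observation); all numbers are illustrative assumptions.
prior_mean, prior_sd = 10.0, 2.0   # £m: the original forecast and its spread
obs_mean, obs_sd = 8.5, 1.5        # £m: what the first quarter's run-rate implies

prior_prec = 1 / prior_sd**2
obs_prec = 1 / obs_sd**2

post_prec = prior_prec + obs_prec
post_mean = (prior_mean * prior_prec + obs_mean * obs_prec) / post_prec
post_sd = post_prec ** -0.5

print(f"prior:     £{prior_mean:.1f}m ± {prior_sd:.1f}")
print(f"posterior: £{post_mean:.1f}m ± {post_sd:.1f}")
# The estimate moves toward the evidence and the interval narrows; noisy or
# conflicting evidence should widen it instead.
```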
Confidence isn’t about certainty. It’s about knowing how certain you are — and making decisions that are robust to being wrong.
Connects to Library: Bayesian Probability · Lognormal Distribution