Anish Patel

ROI and the Cost of Delay

ROI calculations present false precision. The assumptions stack, the errors correlate, and timing kills more projects than economics.


The formula

ROI = (Return − Investment) ÷ Investment

Simple enough. A £1m investment returning £1.5m has 50% ROI. The variants — NPV, IRR, payback period — dress it up with discount rates and time value, but the core question is the same: does the return justify the investment?
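
In code, the same calculation is one line (figures from the example above):

```python
def roi(investment, total_return):
    """ROI as defined above: (return - investment) / investment."""
    return (total_return - investment) / investment

print(roi(1_000_000, 1_500_000))  # 0.5, i.e. 50%
```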

The problem isn’t the formula. It’s the inputs.


An example

A product team proposes a new line:

| Assumption | Value |
| --- | --- |
| Investment | £1m |
| Annual return once live | £500k |
| Development time | 12 months |
| Time to full revenue | Immediate at launch |

The business case over four years:

Revenue from month 12: three years at £500k, £1.5m in total. Net return: £500k. ROI: 50%.

The project gets approved.


What actually happens

Three assumptions slip. Each is modest. Each gets explained away in status updates.

Development takes 18 months, not 12. The team got the product right. Quality matters. Fifty percent over, but defensible.

Production ramp takes 12 months. New manufacturing lines always need tuning. Normal for this type of product.

Sales cycle adds 6 months. Enterprise customers take time. Pipeline is building. Patience required.

The revised timeline:

| Period | Status | Revenue |
| --- | --- | --- |
| Months 0-18 | Development | £0 |
| Months 18-30 | Production ramping | £100k |
| Months 30-42 | Sales converting | £300k |
| Months 42-48 | Full run rate | £250k |

Four-year total: £650k on £1m invested. ROI: -35%.

Same investment. Same eventual unit economics. Timing turned a 50% ROI into a loss.
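
As a minimal sketch, both sums over the same four-year window, taking the figures straight from the tables above:

```python
def roi(investment, revenues):
    """ROI over a window: (total revenue - investment) / investment."""
    return (sum(revenues) - investment) / investment

investment = 1_000_000

# Plan: 12 months of development, then £500k a year for the remaining three years
plan = [500_000, 500_000, 500_000]

# What happened: the period totals from the revised timeline
actual = [0, 100_000, 300_000, 250_000]

print(roi(investment, plan))    #  0.5   ->  50%
print(roi(investment, actual))  # -0.35  -> -35%
```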


Why the errors correlate

The business case treated each assumption independently. Reality doesn’t work that way.

When development slips, ramp usually slips too — the team that was supposed to prepare manufacturing was waiting for final specs. When ramp slips, sales has less runway to build pipeline. When sales cycle extends, revenue lands later in the evaluation window.

The errors compound in the same direction. Projects rarely beat optimistic schedules and exceed revenue forecasts. The distribution has a long tail on the downside, not the upside.

This is Don Reinertsen’s insight on the cost of delay: delay isn’t linear. A six-month delay doesn’t cost half of a twelve-month delay. It pushes everything back — production prep, sales hiring, market timing. Each month of delay costs more than the last because the dependencies cascade.


What ROI hides

Single-point estimates hide distributions. A 75% ROI might mean “almost certainly between 60% and 90%” or “could be anywhere from -20% to 150%.” The number doesn’t tell you which. Most business cases don’t model the variance.
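
A toy way to make that visible: assume two triangular distributions with the same 75% midpoint and compare their tails. The distributions are illustrative, not from any real business case.

```python
import random

random.seed(0)

def pctile(sample, p):
    """Crude percentile: p-th value of the sorted sample."""
    ordered = sorted(sample)
    return ordered[int(p * (len(ordered) - 1))]

# Two projects that would both be reported as "75% ROI"
tight = [random.triangular(0.60, 0.90, 0.75) for _ in range(10_000)]
wide  = [random.triangular(-0.20, 1.50, 0.75) for _ in range(10_000)]

# 5th and 95th percentile of each: same headline, very different risk
for name, sample in (("tight", tight), ("wide", wide)):
    print(name, round(pctile(sample, 0.05), 2), round(pctile(sample, 0.95), 2))
```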

Timing assumptions get less scrutiny than revenue assumptions. Every business case stress-tests the revenue forecast. Few stress-test the development timeline with the same rigour, even though timeline slippage is more common than revenue miss.

Sunk cost psychology kicks in. Once a project is underway, delays get absorbed rather than triggering re-evaluation. “We’ve already spent £600k, we can’t stop now.” The ROI calculation that justified the project isn’t updated when assumptions break.

Base cases become anchors. The original ROI becomes the reference point. A project delivering 20% ROI feels like failure if it was approved at 100%, even though 20% might be acceptable on its own terms. The anchoring distorts decision-making.


Where it breaks down

Short evaluation windows. ROI over three or four years penalises projects with long development cycles, even if lifetime economics are strong. A project that loses money in year four but generates returns for a decade might be the right investment — but it won’t survive a four-year ROI filter.

Discount rate sensitivity. NPV calculations are highly sensitive to discount rate assumptions. A project that looks attractive at 8% might look marginal at 12%. The “right” discount rate is itself an assumption with significant uncertainty.
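
A short sketch with assumed cashflows, two years of development and then five years of returns, shows how quickly the verdict moves with the rate:

```python
def npv(rate, cashflows):
    """Net present value of yearly cashflows, first entry at year 0."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cashflows))

# Assumed: £1m out today, nothing for two years of development,
# then £350k a year for five years
cashflows = [-1_000_000, 0, 0, 350_000, 350_000, 350_000, 350_000, 350_000]

print(round(npv(0.08, cashflows)))  # roughly +198,000: attractive
print(round(npv(0.12, cashflows)))  # roughly +6,000: marginal
```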

Comparison across project types. ROI treats a low-risk 20% return the same as a high-risk 20% return. A sure thing and a lottery ticket show the same number. Risk-adjusted returns are harder to calculate but more meaningful.

Incremental versus transformational. ROI favours incremental projects with predictable returns over transformational bets with uncertain but potentially larger payoffs. The metric biases toward small, safe investments.


The decision it enables

ROI should inform decisions, not make them. The number matters less than understanding what drives it.

Stress-test timing explicitly. Run the model with development at 1.5x plan. Run it with sales cycle at 1.5x assumption. See what breaks. If modest timing slippage kills the ROI, the project is fragile.
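
One rough way to run that test: a crude monthly model with phase fractions borrowed from the worked example (nothing during development, about 20% of run rate while ramping, about 60% while the pipeline converts). The model shape and fractions are assumptions, not a standard formula.

```python
def four_year_roi(dev, ramp, sales, investment=1_000_000,
                  run_rate=500_000, window=48):
    """ROI over a fixed window, all durations in months. Assumes no revenue
    during development, 20% of run rate while production ramps, 60% while the
    sales pipeline converts, then full run rate."""
    monthly = run_rate / 12
    phases = [(dev, 0.0), (ramp, 0.2), (sales, 0.6), (window, 1.0)]
    revenue, month = 0.0, 0
    for length, fraction in phases:
        months = max(0, min(length, window - month))
        revenue += months * fraction * monthly
        month += months
    return (revenue - investment) / investment

print(round(four_year_roi(12, 0, 0), 2))    #  0.50  the plan: full revenue at launch
print(round(four_year_roi(18, 0, 0), 2))    #  0.25  development alone at 1.5x
print(round(four_year_roi(18, 12, 12), 2))  # -0.35  the slip cascades through ramp and sales
```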

Model scenarios, not points. Base, bear, and bull cases on both economics and timing. What’s the ROI if everything goes right? What if two assumptions slip by 30%? The range tells you more than the midpoint.
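
A sketch of the same idea, with hand-built totals for each case. The bull and base figures are assumed; the bear case is the revised timeline above.

```python
# Assumed total revenue over the four-year window for three hand-built cases
scenarios = {
    "bull": 1_750_000,   # launches on time, ramps quickly
    "base": 1_200_000,   # a year of slippage, slower ramp
    "bear":   650_000,   # the revised timeline above
}

investment = 1_000_000
for name, revenue in scenarios.items():
    print(name, f"{(revenue - investment) / investment:+.0%}")
```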

Watch for correlated assumptions. If the project requires development on time AND ramp on schedule AND sales on plan to work, the probability of hitting all three is lower than hitting any one. Three assumptions that are each 80% likely multiply out to roughly even odds of hitting all of them. Independence is usually an illusion.

Update as you learn. The ROI that justified approval isn’t sacred. When assumptions break, recalculate. Sometimes the right answer is to stop.

The discipline isn’t hitting the original ROI. It’s knowing whether the investment still makes sense given what you’ve learned.


The metric series: Part of a series on metrics that reveal what headline numbers hide.

#numbers #metrics