Unstable by Design
Some systems are inherently unstable. That’s sometimes a valid choice — but it demands control that responds faster than the errors compound.
Engineers talk about unstable systems — ones where small errors don’t self-correct but compound. A ball balanced on a hill, not settled in a valley. Left alone, things get worse.
The Sopwith Camel was deliberately designed this way. Its instability made it exceptionally manoeuvrable in dogfights. But it also meant constant correction was required just to fly straight. More Camel pilots died in training accidents than in combat. The aircraft’s instability exceeded what many pilots could handle.
The lesson generalises: you can choose instability for its advantages, but your ability to respond must be faster than your system’s tendency to diverge.
The matching problem
Every unstable system has a characteristic speed — how fast errors compound. Your response loop (sense → decide → act) must operate faster than that speed. If it doesn’t, you’re not controlling the system. You’re watching it oscillate.
This creates a hard constraint: the more unstable your system, the faster your feedback loop must be.
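The mismatch can be sketched with a toy simulation (all numbers here are illustrative, not drawn from any real system): an error that compounds by a fixed factor each step, corrected by a controller that can only act on a measurement taken `delay` steps ago.

```python
def simulate(growth, delay, steps=40, x0=1.0):
    """Error trajectory x[t+1] = growth * x[t] - (growth - 1) * x[t - delay].

    With no delay, the correction exactly offsets the compounding.
    The longer the delay, the staler the reading the correction is
    sized to — and the less it matches the error that now exists.
    """
    history = [x0]
    for t in range(steps):
        stale = history[t - delay] if t >= delay else 0.0  # no reading yet
        x = growth * history[-1] - (growth - 1) * stale
        history.append(x)
    return history

fast = simulate(growth=1.5, delay=1)   # loop faster than the divergence
slow = simulate(growth=1.5, delay=6)   # loop slower than the divergence
print(f"delay=1: final error {fast[-1]:.1f}")
print(f"delay=6: final error {slow[-1]:.1f}")
```

With a one-step delay the error settles near a constant; with a six-step delay the identical correction rule, applied to stale readings, lets the error run away. Same controller, same system — the only difference is loop speed.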
Business instability
Some business models have this same unstable quality:
High leverage. Debt amplifies returns in both directions. Small revenue drops become existential. Miss a payment, creditworthiness drops, financing costs rise, cash tightens further. The loop compounds.
Aggressive growth targets. Growth begets growth expectations. Miss a quarter, stock drops, talent leaves, growth slows further. The system amplifies deviation from plan.
Operational complexity at scale. Interconnected systems where failures cascade. One supplier problem becomes a production problem becomes a customer problem becomes a reputation problem.
Network effect businesses before critical mass. Below the threshold, every churned user makes the product less valuable, which causes more churn. Above the threshold, the loop reverses. But getting there means riding an unstable system.
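The pre-critical-mass loop can be made concrete with a toy model (the threshold, rates, and caps below are invented for illustration): per-user net growth is negative below a critical mass, where churn beats word of mouth, and positive above it.

```python
def month(users, threshold=10_000):
    """One month of a hypothetical network-effect business.

    Per-user net monthly growth is negative below `threshold` and
    positive above it (an assumed shape, capped at +/-5% a month).
    """
    rate = max(-0.05, min(0.05, 0.10 * (users / threshold - 1)))
    return users * (1 + rate)

def run(users, months=36):
    for _ in range(months):
        users = month(users)
    return round(users)

print(run(9_000))    # below critical mass: the decline compounds
print(run(11_000))   # above it: the same loop works in your favour
```

Start 10% below the threshold and the user base shrinks at an accelerating rate; start 10% above and the identical dynamics compound upward. The fixed point in the middle is exactly the ball-on-a-hill from the opening.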
None of these are inherently wrong choices. But each demands faster response than a stable alternative would.
What faster response looks like
Faster sensing. You can’t respond to what you don’t see. Weekly financials won’t catch a cash crisis that compounds daily. Real-time visibility matters more when the system is unstable.
Faster decision authority. If every response requires escalation through three levels of approval, your effective response speed is limited by your slowest decision-maker. Unstable systems need pre-authorised responses and distributed decision rights.
Faster action. Knowing what to do is worthless if you can’t execute in time. The gap between decision and implementation is part of your loop.
Shorter delays. Every delay — reporting lag, meeting cadence, approval cycles — adds up. An organisation with monthly reviews cannot manage a system that diverges weekly.
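A back-of-envelope version of this accounting (with illustrative numbers): total the sensing, decision, and action delays, and compare the sum to how fast the system doubles its deviation.

```python
from math import log

def loop_vs_divergence(sense_days, decide_days, act_days, growth_per_day):
    """Return (total loop delay, doubling time) in days.

    You want the first well under the second; otherwise each correction
    lands on a problem that has already more than doubled.
    """
    loop = sense_days + decide_days + act_days
    doubling = log(2) / log(growth_per_day)
    return loop, doubling

# A hypothetical cash position deteriorating 3% a day, managed with
# monthly reporting, a two-day approval cycle, and three days to execute:
loop, doubling = loop_vs_divergence(30, 2, 3, growth_per_day=1.03)
print(f"loop: {loop} days, deviation doubles every {doubling:.0f} days")
```

In this sketch the loop takes 35 days while the deviation doubles in about 23 — the organisation is structurally behind, and no amount of effort inside the loop fixes a loop that is too long.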
The trade-off
Here’s the harder truth: you can’t optimise everywhere.
The resources you spend on fast response are resources you’re not spending elsewhere. An organisation structured for rapid crisis response may be worse at long-term planning. A team that can pivot instantly may lack the stability to execute sustained initiatives.
This isn’t a failure of management. It’s a fundamental trade-off. Choosing instability means accepting worse performance in some dimensions to gain advantages in others.
The real question
Before you design an unstable system — or inherit one — ask: do we have the response capability to match?
If your strategy requires aggressive growth, do you have the information systems, decision rights, and execution speed to correct faster than errors compound?
If you’re adding leverage, do you have the cash visibility and covenant headroom to respond before the feedback loop takes over?
If your operations are tightly coupled, do you have the monitoring and response capabilities to catch cascades early?
The Sopwith Camel wasn’t a bad aircraft. It was a lethal one in the hands of pilots who could match its demands. The ones who couldn’t didn’t survive training.
Connects to Library: Systems Thinking · OODA Loop