Anish Patel

Hidden Priors

Every forecast starts with a belief. The question is whether you admit it.


When someone says “I’m just following the data,” they’re hiding their priors. The data didn’t interpret itself. It was filtered through assumptions about what matters, weighted by intuitions about what’s likely, and fed into a model shaped by beliefs about how the world works. The prior beliefs are doing most of the work. The data confirms or adjusts — it doesn’t create the conclusion from scratch.


The Bayesian reality

Bayesian thinking makes the structure explicit. You start with what you believed before seeing any evidence. Then you update that belief in proportion to how strongly the evidence favours one hypothesis over the other. Strong evidence moves you a lot; weak evidence moves you a little; evidence equally consistent with both hypotheses moves you not at all.
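A minimal sketch of that update rule makes the proportionality concrete. The prior and the evidence probabilities here are invented for illustration:

```python
def update(prior, p_if_true, p_if_false):
    # Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
    numerator = prior * p_if_true
    return numerator / (numerator + (1 - prior) * p_if_false)

prior = 0.30  # what you believed before seeing any evidence

# Strong evidence: five times likelier if the hypothesis is true.
print(update(prior, 0.50, 0.10))  # ~0.68 -- moves you a lot

# Weak evidence: only slightly likelier if true.
print(update(prior, 0.50, 0.40))  # ~0.35 -- moves you a little

# Evidence equally consistent with both hypotheses.
print(update(prior, 0.50, 0.50))  # 0.30 -- moves you not at all
```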

This isn’t a technique. It’s a description of how reasoning actually works. Everyone has prior beliefs. The difference is whether you name them.

The forecast that claims objectivity (“the data clearly shows…”) is the one to distrust. The prior is there — it’s just been buried. Maybe it’s an assumption about the reference class. Maybe it’s a model of how customers behave. Maybe it’s a belief about which variables matter. Whatever it is, it’s shaping every conclusion, and you can’t challenge what you can’t see.


Why we hide them

Naming your priors feels like confessing weakness. “I believe this because of my prior assumption that…” sounds less confident than “the analysis shows…” We’re trained to present conclusions as emerging inevitably from the evidence, not as beliefs that have been updated by evidence.

There’s also a political function. Unstated priors are hard to attack. If you don’t say what you assumed, no one can challenge the assumption. The analysis looks rigorous because the scaffolding is invisible.

But hidden priors create hidden risks. If your prior is wrong and no one knows it exists, no one will check it. You’ll be wrong in predictable ways you can’t see coming.


What good forecasters do

Superforecasters — the people who consistently out-predict experts with access to classified information — operate differently. They make their priors explicit. “My base rate says 15% of projects like this succeed. This one has three factors that argue for higher, and two that argue for lower. I’m updating to 22%.”

The precision isn’t the point. What matters is visibility. Every step is exposed. The base rate is stated. The adjustments are named. Someone disagreeing can say “I accept your base rate but think factor X deserves more weight” rather than “I just have a different feeling about this.”
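One common way to lay those steps out, sketched here rather than claimed as any particular forecaster’s method, is to work in log-odds: state the base rate, add a weight for each named factor, and convert back to a probability. The factor names and weights below are invented for illustration:

```python
import math

def to_log_odds(p):
    return math.log(p / (1 - p))

def to_probability(log_odds):
    return 1 / (1 + math.exp(-log_odds))

base_rate = 0.15  # stated prior: 15% of projects like this succeed

# Named adjustments -- hypothetical factors and weights.
factors = {
    "experienced team": +0.40,
    "existing customer demand": +0.30,
    "proven technology": +0.20,
    "aggressive timeline": -0.25,
    "new market": -0.20,
}

log_odds = to_log_odds(base_rate)
for name, weight in factors.items():
    log_odds += weight
    print(f"{name:>24}  {weight:+.2f}  ->  {to_probability(log_odds):.0%}")

print(f"Final forecast: {to_probability(log_odds):.0%}")  # ~22%, as in the example
```

Each line is now separately attackable: accept the base rate, contest the weight on a single factor, and the final number moves accordingly.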

This is harder. It requires admitting you had assumptions before looking at the data. It requires exposing your reasoning to criticism. It requires being willing to update — really update, not just defensively reinterpret — when evidence contradicts you.


The calibration test

Calibration is the discipline of checking whether your confidence matches reality. When you say you’re 70% confident, are you right 70% of the time?

Most people are overconfident. Their 90% predictions happen 70% of the time. Their 50% predictions happen 40% of the time. They feel certain more often than they should.

The fix isn’t to feel less certain. It’s to track your predictions against outcomes and adjust. This requires stated priors — you can’t calibrate what you never committed to.
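A minimal sketch of that tracking discipline, using an invented forecast log and only the standard library:

```python
from collections import defaultdict

# Each entry: (stated probability, whether the event happened). Invented log.
log = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, True), (0.9, False),
    (0.7, True), (0.7, False), (0.7, True),
    (0.5, False), (0.5, True), (0.5, False), (0.5, False),
]

# Group forecasts by the probability that was committed to.
buckets = defaultdict(list)
for stated, happened in log:
    buckets[stated].append(happened)

# Compare stated confidence to the actual hit rate in each bucket.
for stated in sorted(buckets, reverse=True):
    outcomes = buckets[stated]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {stated:.0%} -> actual {hit_rate:.0%} ({len(outcomes)} forecasts)")

# Brier score: mean squared error of the probabilities (lower is better).
brier = sum((p - happened) ** 2 for p, happened in log) / len(log)
print(f"Brier score: {brier:.3f}")
```

In this made-up log the 90% forecasts came true only 60% of the time: the overconfidence pattern from above, now visible as a number rather than a feeling.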

Some organisations run prediction markets internally. Others use forecast tracking systems. The mechanism matters less than the discipline: state what you believe, commit to a probability, and check it against reality. Over time, your forecasting improves because you learn where your intuitions mislead you.


The practical test

When you’re in a planning meeting or strategy review, try this. Before the analysis, ask everyone to state their priors. “Before we look at the data, what do you believe? What probability would you assign? What would have to be true for you to update?”

You’ll get resistance. It feels awkward. It exposes disagreement that polite data-driven discussion usually papers over. But it also reveals where people are actually starting from — which matters far more than where the analysis ends up.

The disagreement is usually in the priors, not the data. Once you see that, you can have the real argument.


Related: When Numbers Twitch · Order of Magnitude

Connects to Library: Bayesian Probability · Base Rates

#prediction