Anish Patel

Small Samples

At n=20, the qualitative signal is often more reliable than the quantitative noise.


A product team runs user interviews. Twenty people. Twelve prefer option A, eight prefer option B. The readout: “60% preferred A.”

That number looks precise. It isn’t. With twenty people, that 60/40 split could easily be 50/50 with a different sample. The confidence interval is enormous. You’ve learned almost nothing quantitatively — but the number creates false confidence because it feels objective.
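
How big is "enormous"? Two quick checks make it concrete. This is a sketch in Python; the 12-of-20 figures come from the example above.

```python
# pip install scipy statsmodels
from scipy.stats import binom
from statsmodels.stats.proportion import proportion_confint

# If the true population split were exactly 50/50, how often would
# 20 interviews still show 12 or more people preferring option A?
p_fluke = binom.sf(11, n=20, p=0.5)  # P(X >= 12) under Binomial(20, 0.5)

# 95% Wilson interval for the share preferring A, given 12 of 20
lo, hi = proportion_confint(count=12, nobs=20, alpha=0.05, method="wilson")

print(f"Chance of a split at least as lopsided as 12/8 under a true 50/50: {p_fluke:.0%}")
print(f"95% interval for the share preferring A: {lo:.0%} to {hi:.0%}")
```

A split at least that lopsided turns up about 25% of the time even when the population is divided exactly down the middle, and the 95% interval runs from roughly 39% to 78%. That is what "learned almost nothing quantitatively" looks like in numbers.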

This happens constantly. Customer surveys, usability tests, pilot programmes, pricing research. Teams gather small samples, convert responses to percentages, and treat the output as data. “73% said they’d pay more.” “65% found the new flow easier.” The percentages suggest precision that doesn’t exist.


The rule of thumb

To detect a meaningful difference, you need more samples than most product research provides.

For conversion rate changes: to reliably detect a 10% relative improvement (say, 5% → 5.5%) at the conventional 80% power and 5% significance level, you need roughly 31,000 observations per variant. For a 20% improvement, about 8,000. For a 50% improvement, around 1,500.

The maths is unforgiving. Small effects require large samples. Most A/B tests, surveys, and user studies don’t come close.
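
Those figures fall out of a standard two-proportion power calculation. Here's a minimal sketch with statsmodels, assuming the conventional 80% power and a 5% two-sided significance level:

```python
# pip install statsmodels
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

def n_per_variant(baseline, relative_lift, alpha=0.05, power=0.80):
    """Observations needed per variant to detect a relative lift
    in a conversion rate (two-sided z-test on two proportions)."""
    h = proportion_effectsize(baseline * (1 + relative_lift), baseline)
    return NormalIndPower().solve_power(effect_size=h, alpha=alpha, power=power)

for lift in (0.10, 0.20, 0.50):
    print(f"{lift:.0%} lift at a 5% baseline: ~{n_per_variant(0.05, lift):,.0f} per variant")
# Prints roughly 31,000, 8,100 and 1,500 respectively.
```

Note the shape of the curve: required sample size scales with the inverse square of the effect, so halving the effect you care about roughly quadruples the traffic you need.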

Jason Cohen puts it bluntly: if you’re trying to measure a 10% difference with a small sample, “you literally can’t measure it.” The test will either never reach significance or — worse — report a false positive that you’ll trust because it confirms what you wanted to believe.


What survives small samples

But while the quantitative signal drowns in noise at small n, the qualitative signal often survives.

Twenty interviews won’t tell you what percentage of users prefer option A. But they might tell you:

The words people use. When someone says “this feels like it was designed by engineers” or “I’d use this every day,” that language carries information regardless of sample size.

The pattern of reasoning. If twelve people independently mention the same friction point, the convergence matters even if you can’t extrapolate to a population percentage.

The intensity of reaction. Lukewarm preference from 60% tells you less than passionate enthusiasm from 30%. Small samples can reveal intensity; they can’t reliably measure share.

The surprises. Unexpected responses — objections you didn’t anticipate, use cases you didn’t imagine — are just as informative at n=5 as n=500.

The mistake is treating small-sample research as quantitative when its value is qualitative. You’re not measuring; you’re learning.


The practice

Before gathering data, know what sample size you'd need. To detect a 15-point preference gap (say, 57.5% versus 42.5%) at 80% power, you need roughly 175 responses per option, about 350 in total. If you're running a study with 30 people, accept that you're doing qualitative research dressed in quantitative clothing.
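
The same machinery covers preference questions. A sketch, assuming the 15-point gap is framed as 57.5% versus 42.5%:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# 15-percentage-point preference gap, 80% power, 5% two-sided significance
h = proportion_effectsize(0.575, 0.425)
print(NormalIndPower().solve_power(effect_size=h, alpha=0.05, power=0.80))
# roughly 173 responses per option
```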

Report uncertainty, not false precision. “Between 8 and 16 of our 20 users preferred A” is more honest than “60% preferred A.” The range reminds everyone — including you — what you actually learned.
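
One way to make that habit stick is to let the tooling phrase it for you. A small helper (a sketch, using the Wilson interval from statsmodels; the function name is mine):

```python
from statsmodels.stats.proportion import proportion_confint

def honest_range(count, nobs, alpha=0.05):
    """Turn a raw count into a plausible range instead of a point percentage."""
    lo, hi = proportion_confint(count, nobs, alpha=alpha, method="wilson")
    return f"between {round(lo * nobs)} and {round(hi * nobs)} of {nobs}"

print(honest_range(12, 20))  # between 8 and 16 of 20
```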

Trust the words over the numbers. At small n, the specific feedback is the data. The percentages are decoration. If you find yourself debating whether 60% vs 55% is meaningful with n=20, you’ve lost the plot.

Use small samples to generate hypotheses, not test them. Twenty interviews might surface a pattern worth investigating. They can’t confirm the pattern exists at scale. That’s a different study, requiring different sample sizes.

Know when quant is impossible. Some decisions can’t wait for statistical significance. You’ll never get thousands of enterprise buyers into a pricing study. In those contexts, stop pretending you’re doing quantitative research. Do qualitative research well instead — and make the decision with appropriate humility about what you don’t know.


The appeal of numbers is that they feel objective. But objectivity requires sample sizes that most product research never achieves. At n=20, you’re not measuring. You’re listening. Listen well, and stop pretending the percentages mean anything.


Connects to Library: Base Rates · Bayesian Probability

#numbers #action