
The counties with the highest rates of kidney cancer in the United States are mostly rural, sparsely populated, and located in the Midwest, South, and West. What's causing it? Now consider: the counties with the lowest rates of kidney cancer are also mostly rural, sparsely populated, and located in the Midwest, South, and West. Both facts have the same explanation — and it has nothing to do with lifestyle, environment, or genetics.

The Framework

The law of small numbers is System 1's failure to account for sample size when evaluating statistics. Small samples produce extreme outcomes — both high and low — simply because they have fewer observations to average out randomness. A county of 100 people where 3 develop kidney cancer has a rate 30× the national average. A neighboring county of 100 people where 0 develop it has a rate of zero. Neither rate is meaningful — both are artifacts of tiny denominators. But System 1 sees "highest cancer rate" and immediately constructs a causal story (rural diet, industrial pollution, lack of healthcare).
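This effect is easy to reproduce with a quick simulation (all numbers here are illustrative assumptions, not real epidemiology: a made-up incidence rate, arbitrary county sizes, and a fixed seed). Every simulated county shares the exact same true rate, yet the small counties dominate both extremes:

```python
import random

random.seed(42)

RATE = 0.001  # assumed illustrative incidence -- NOT the real kidney-cancer rate

def county_rate(population):
    """Simulate one county: each resident independently develops the disease."""
    cases = sum(1 for _ in range(population) if random.random() < RATE)
    return cases / population

# 1,000 small counties (pop 1,000) vs 200 large counties (pop 20,000)
small = [county_rate(1_000) for _ in range(1_000)]
large = [county_rate(20_000) for _ in range(200)]

# Small counties produce both the highest observed rates AND the most
# zero rates, even though every county has the identical true rate.
print("max rate, small vs large:", max(small), max(large))
print("counties with rate 0:   ", small.count(0.0), "vs", large.count(0.0))
```

The small counties top both the "highest rate" and "lowest rate" lists for the same reason the real county map does: tiny denominators amplify noise in both directions.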

Kahneman's Chapter 10 demonstrates that even professional researchers fall for this: they design underpowered studies (too few participants), over-interpret results from small samples, and dramatically underestimate the role of chance in producing extreme outcomes. The "hot hand" in basketball — the belief that a player who has made several shots in a row is more likely to make the next one — is largely a small-sample illusion: the observed streak lengths are statistically indistinguishable from what random sequences produce.
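The streak claim can be sanity-checked with a simulation (the parameters are assumptions for illustration: a 50% shooter, 20 shots per game, an arbitrary seed):

```python
import random

random.seed(0)

def longest_streak(shots):
    """Length of the longest run of consecutive identical outcomes."""
    best = cur = 0
    prev = None
    for s in shots:
        cur = cur + 1 if s == prev else 1
        prev = s
        best = max(best, cur)
    return best

# A pure 50% shooter taking 20 shots per game, over 1,000 games:
streaks = [longest_streak([random.random() < 0.5 for _ in range(20)])
           for _ in range(1_000)]

# Streaks of 4+ consecutive makes or misses show up in well over half
# of all games purely by chance -- no hot hand required.
share = sum(s >= 4 for s in streaks) / len(streaks)
print(round(share, 2))
```

If a "hot" streak appears in most random games, observing one tells you almost nothing about the shooter.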

Where It Comes From

Tversky and Kahneman's 1971 paper "Belief in the Law of Small Numbers" showed that researchers (who should know better) intuitively applied the law of large numbers (large samples converge on true values) to small samples — expecting small samples to be representative of the population. Chapter 10 of Thinking, Fast and Slow extends this to everyday reasoning and organizational decision-making.

> "We pay more attention to the content of messages than to information about their reliability, and as a result end up with a view of the world around us that is simpler and more coherent than the data justify." — Thinking, Fast and Slow, Ch 10

Cross-Library Connections

Hormozi's emphasis in $100M Leads on spending enough to generate statistically meaningful data before drawing conclusions is a small-numbers corrective: a single ad that generates 3 leads from 100 impressions tells you almost nothing, but the entrepreneur's System 1 constructs a confident narrative about the ad's quality.

Dib's marketing metrics guidance in Lean Marketing implicitly addresses small numbers: making strategic decisions based on a week of data rather than a quarter of data is the marketing equivalent of drawing conclusions from the kidney-cancer county map.

The Implementation Playbook

A/B Testing: Never declare a winner based on small samples. A variant that shows 60% conversion after 20 visitors is not meaningfully different from one showing 40% — the difference is well within the range of random noise. Use proper statistical significance calculators and resist the urge to call results early.
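A permutation test makes the 60%-vs-40% claim concrete (assumed counts: 12/20 conversions for variant A, 8/20 for variant B, matching the percentages above; the seed is arbitrary). Under the null hypothesis that the variants are identical, we shuffle the same 40 visitors between groups and ask how often chance alone produces a gap at least this large:

```python
import random

random.seed(1)

# Observed: variant A converts 12/20 (60%), variant B converts 8/20 (40%).
outcomes = [1] * 20 + [0] * 20   # 20 total conversions among 40 visitors

extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(outcomes)
    a = sum(outcomes[:20])        # conversions landing in group A
    b = 20 - a                    # remaining conversions in group B
    if abs(a - b) >= 4:           # a 12-vs-8 split or more extreme
        extreme += 1

p = extreme / trials
print(p)   # far above the conventional 0.05 threshold
```

Roughly a third of random splits of identical visitors produce a gap this large, so 60% vs 40% at n=20 per arm is indistinguishable from noise.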

Hiring: Don't generalize from one or two employees. "The last two developers we hired from that university were great, so we should recruit more from there" is small-number reasoning. Two is not a pattern — it's a coincidence.

Performance Evaluation: A salesperson who had a great month may have been lucky, not skilled. A salesperson who had a terrible month may have been unlucky, not incompetent. Evaluate performance over quarters and years, not weeks and months. The smaller the sample, the more extreme and unreliable the result.
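A sketch of why longer windows matter (assumed setup: ten reps with *identical* skill, each monthly result the sum of 20 fifty-fifty deals; seed arbitrary). The gap between "best" and "worst" performer shrinks sharply as the evaluation window grows:

```python
import random

random.seed(3)

def month_sales():
    """One month: 20 deal attempts, each a coin flip, for every rep alike."""
    return sum(random.random() < 0.5 for _ in range(20))

reps = 10
one_month = [month_sales() for _ in range(reps)]
one_year = [sum(month_sales() for _ in range(12)) for _ in range(reps)]

# Relative spread between the "top" and "bottom" performer:
spread_month = (max(one_month) - min(one_month)) / (sum(one_month) / reps)
spread_year = (max(one_year) - min(one_year)) / (sum(one_year) / reps)
print(round(spread_month, 2), round(spread_year, 2))
```

Every rep here has exactly the same skill, yet a single month reliably manufactures a "star" and a "laggard"; a year of data mostly washes that noise out.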

Market Research: Customer surveys with 20-50 responses produce extreme results that feel meaningful but aren't. "73% of customers said they want feature X" sounds definitive when n=30 — but the confidence interval is enormous. Invest in larger samples or acknowledge the uncertainty.
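The size of that confidence interval is simple arithmetic (assumed counts: 22 of 30 respondents, which is approximately the 73% in the example; the Wald normal-approximation interval is used here for simplicity):

```python
import math

n, k = 30, 22          # assumed survey: 22 of 30 respondents want feature X
p_hat = k / n          # ~73%

# 95% confidence interval via the normal approximation (Wald interval)
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
low, high = p_hat - margin, p_hat + margin
print(f"{p_hat:.0%} +/- {margin:.0%}  ->  [{low:.0%}, {high:.0%}]")
```

The true proportion could plausibly be anywhere from under 60% to nearly 90% — a range far too wide to bet a roadmap on. (A Wilson interval would be slightly tighter and better behaved at this sample size, but the conclusion is the same.)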

Investment: The most dangerous small-number illusion in investing is the "hot fund manager" — a manager who outperformed for 2-3 years, attracting billions in new investment, only to regress to the mean. Two good years in a row proves nothing about skill; four to five consecutive years of outperformance starts to be meaningful.
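A toy simulation shows how many "hot" managers pure luck manufactures (assumed setup: 1,000 managers with zero skill, each with a 50% chance of beating the market in any given year; seed arbitrary):

```python
import random

random.seed(7)

managers = 1_000
years = 3

# Count managers who beat the market every year for 3 straight years
# despite having NO skill at all.
hot = sum(all(random.random() < 0.5 for _ in range(years))
          for _ in range(managers))

# Expected by chance alone: 1000 * 0.5**3 = 125 "hot" managers.
print(hot)
```

With a large enough pool, a multi-year streak is nearly guaranteed to exist somewhere, which is exactly why a 2-3 year track record attracts billions without demonstrating skill.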

Key Takeaway

The law of small numbers means that extreme outcomes are usually noise, not signal — and the smaller the sample, the more extreme and meaningless the outcomes. System 1 sees every extreme result as meaningful because it can't help constructing a causal story. The correction is always the same: ask "how big is the sample?" before asking "what does this mean?"

Continue Exploring

[[Regression to the Mean]] — The companion principle: extreme outcomes revert because they contain luck

[[WYSIATI]] — The mechanism by which extreme results become confident narratives

[[Narrative Fallacy]] — System 1 constructs causal stories from random patterns


📚 From Thinking, Fast and Slow by Daniel Kahneman