A flight instructor noticed that cadets who were praised for a smooth landing performed worse next time, while those who were screamed at for a rough landing improved. Conclusion: punishment works better than praise. The actual explanation: regression to the mean. The instructor was rewarding randomness.

The Framework

Regression to the mean is the statistical reality that extreme performances — good or bad — tend to be followed by performances closer to the average. This happens not because of any causal mechanism (praise doesn't hurt; criticism doesn't help) but because extreme performances contain a large component of luck, and luck doesn't persist. A golfer who shoots 8-under on Thursday will almost certainly shoot closer to their average on Friday — not because success breeds complacency, but because the luck that contributed to 8-under won't repeat.
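The skill-plus-luck logic above can be sketched with a toy simulation. The model here (persistent skill plus independent per-round luck, with made-up spreads) is illustrative, not from Kahneman's text; the only assumption regression needs is that skill carries over between rounds while luck does not:

```python
import random

random.seed(0)

def simulate(n_golfers=100_000):
    """Each golfer's round = fixed skill + fresh, independent luck."""
    results = []
    for _ in range(n_golfers):
        skill = random.gauss(0, 2)         # persistent ability (strokes vs. par)
        day1 = skill + random.gauss(0, 3)  # luck spread chosen larger than skill spread
        day2 = skill + random.gauss(0, 3)  # new, independent luck
        results.append((day1, day2))
    return results

results = simulate()

# Select only golfers with an extreme first round (roughly "8-under").
extreme = [(d1, d2) for d1, d2 in results if d1 <= -8]
avg_day1 = sum(d1 for d1, _ in extreme) / len(extreme)
avg_day2 = sum(d2 for _, d2 in extreme) / len(extreme)

print(f"Day 1 (selected for extremes): {avg_day1:.1f}")
print(f"Day 2 (same golfers):          {avg_day2:.1f}")  # much closer to average
```

Nothing causal happens between the two rounds; the day-two average moves back toward zero purely because the luck that earned selection on day one does not repeat.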

The danger is that System 1 cannot leave regression unexplained. It demands a causal story. The flight instructor sees improvement after criticism and sees decline after praise, and constructs a causal theory: criticism works, praise backfires. The theory is completely wrong — the changes would have happened regardless of the instructor's response — but it feels absolutely right. Kahneman calls this "one of the most significant and best-documented" cognitive errors in human reasoning.

Where It Comes From

Francis Galton discovered regression to the mean in the 1880s while studying the heights of parents and children. Kahneman devotes Chapter 17 of Thinking, Fast and Slow to it, calling it "one of the most remarkable and satisfying achievements" of statistics. The flight instructor story is his most memorable illustration: it demonstrates how regression creates an illusion of effective punishment and ineffective reward, producing a systematic bias toward punitive management styles. The "Sports Illustrated jinx" (athletes who appear on the cover subsequently perform worse) is the same phenomenon — they appeared because of an extreme performance, and regression brought them back to average.

> "I had stumbled onto a significant fact of the human condition: the quality of feedback we receive from life is perversely related to our responses." — Thinking, Fast and Slow, Ch 17

Cross-Library Connections

In $100M Leads, Hormozi's approach to evaluating advertising performance requires regression awareness: a campaign that performs exceptionally well in its first week will almost certainly perform less well subsequently. Scaling based on peak performance without accounting for regression leads to overspending. Hormozi's emphasis on tracking over longer periods ("30 days minimum") is a practical regression-to-the-mean correction.


Wickman's quarterly Rock review system in The EOS Life builds in regression awareness: by evaluating performance over 90-day cycles rather than reacting to individual data points, the system smooths out the regression noise and reveals true trends.

The Implementation Playbook

Management: Praise and criticism should be based on the process, not the outcome. An employee who follows good procedures and gets a bad result doesn't need criticism — they need recognition that the process was sound and the outcome was regression-bound noise. An employee who follows bad procedures and gets a good result needs coaching, not celebration. Rewarding outcomes trains people to attribute luck to skill.

Investment: A fund that outperformed last year will, on average, regress toward the mean next year. Chasing last year's winner is chasing regression noise. The correct strategy: evaluate the fund's process (strategy, risk management, fee structure) rather than its recent performance (which contains both skill and luck).
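The size of the expected regression can be sketched with a shrinkage forecast in the spirit of Kahneman's correction: shrink the observed result toward the mean in proportion to how much of performance persists from period to period (the year-to-year correlation). The numbers below are illustrative assumptions, not real fund data:

```python
def regression_forecast(observed, population_mean, period_correlation):
    """Shrink an extreme observation toward the mean.

    The correlation between consecutive periods estimates the share of
    performance that is persistent (skill) rather than luck. A correlation
    of 1 means no regression; 0 means full regression to the mean.
    """
    return population_mean + period_correlation * (observed - population_mean)

# A fund that returned 30% in a market averaging 8%, with an assumed
# (purely illustrative) year-to-year correlation of 0.2:
print(round(regression_forecast(30.0, 8.0, 0.2), 2))  # → 12.4
```

The forecast lands far closer to the market average than to last year's headline number, which is exactly why chasing last year's winner disappoints.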

Product and Marketing Testing: An A/B test variant that dramatically outperforms in week one should not be immediately scaled to 100% — the result likely contains regression-bound luck. Run tests for multiple periods, track performance over time, and scale based on consistent performance, not peak performance.
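The selection effect in week-one A/B results can be shown directly: if you test many variants with identical true conversion rates, the "winner" is by construction the luckiest, and its second week regresses. Rates and sample sizes below are made up for illustration:

```python
import random

random.seed(1)

TRUE_RATE = 0.05   # every variant converts at the same true rate
VISITORS = 1_000

def weekly_rate():
    """Observed conversion rate for one variant over one week."""
    return sum(random.random() < TRUE_RATE for _ in range(VISITORS)) / VISITORS

# Week one: run 20 identical variants and crown the best performer.
week1 = [weekly_rate() for _ in range(20)]
winner = max(range(20), key=lambda i: week1[i])

# Week two: the "winning" variant gets a fresh, independent week.
week2_winner = weekly_rate()

print(f"Winner's week 1: {week1[winner]:.3f}")  # inflated by selection
print(f"Winner's week 2: {week2_winner:.3f}")   # drifts back toward the true 5%
```

Since all 20 variants are identical by construction, the winner's week-one edge is pure selection of noise, which is why scaling on a single period's peak systematically overestimates future performance.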

Coaching and Teaching: Understand that students who perform exceptionally well on one test will likely perform closer to their average on the next — and vice versa. The temptation is to attribute the decline to your teaching and the improvement to your criticism. Both attributions are wrong. Teach to the process; accept that outcomes will fluctuate.

Self-Evaluation: After an extraordinarily successful month, expect a less successful one. After a terrible quarter, expect improvement. This isn't pessimism or optimism — it's statistics. The practical implication: don't radically change your strategy based on one extreme data point in either direction. Wait for multiple data points before concluding that something has fundamentally changed.

Key Takeaway

Regression to the mean is invisible because System 1 insists on explaining every pattern causally. The flight instructor, the sports fan, and the portfolio manager all see the same thing — extreme performance followed by average performance — and all construct the wrong story. The only protection is statistical literacy: the explicit knowledge that extreme performance contains luck, luck doesn't persist, and the inevitable return to average is not caused by anything you or anyone else did. Regression is the most boring explanation for any change in performance — and it's almost always the correct one.

Continue Exploring

[[Planning Fallacy]] — Another case where base rates (outside view) beat stories (inside view)

[[Narrative Fallacy]] — System 1's compulsion to explain regression as causation

[[Four-Step Regression Correction]] — Kahneman's method for making predictions that account for regression


📚 From Thinking, Fast and Slow by Daniel Kahneman