Take a handful of plausible predictors, score each on a simple scale, weight them equally, and add them up. This crude method, built with no training data, no optimization, and no expert judgment, often matches or beats optimally weighted regression models across a wide range of prediction tasks. The finding disturbed Kahneman when he first encountered it. It should disturb you too.
The Framework
Equal-weighting formulas are prediction models that assign identical weight to every predictor variable, so they require no training data and no statistical optimization. Choose several predictors that plausibly relate to the outcome, score each on a standardized scale (e.g., 1-5), and add the scores; the total is your prediction. In Chapter 21 of Thinking, Fast and Slow, Kahneman reports that such formulas perform about as well as optimal regression models, and dramatically better than human expert judgment.
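To make the procedure concrete, here is a minimal sketch in Python; the predictor names and scores are hypothetical.

```python
def equal_weight_score(scores: dict[str, int]) -> int:
    """Sum of equally weighted predictor scores, each on a 1-5 scale."""
    assert all(1 <= s <= 5 for s in scores.values()), "scores must be on a 1-5 scale"
    return sum(scores.values())

# Hypothetical case scored on four plausible predictors.
case = {"predictor_a": 4, "predictor_b": 2, "predictor_c": 5, "predictor_d": 3}
print(equal_weight_score(case))  # 14
```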
The reason is counterintuitive: optimal regression weights are estimated from training data, and in the small samples typical of real-world prediction tasks, the estimation error in those weights often exceeds the benefit of optimizing them. Equal weights avoid estimation error entirely; they sacrifice some fit to the training sample, but they cannot overfit its noise, so they often hold up better on new cases. The formula's advantage over human experts is larger still, because humans are inconsistent (they give different judgments for the same case on different occasions) while the formula always produces the same output.
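A toy simulation can illustrate the point. The setup below is an assumption for demonstration only: outcomes are a noisy linear combination of six predictors whose true weights are unequal but all point the same way. Weights fitted by least squares on a small training sample are compared with equal weights on a large holdout sample; the equal-weight composite typically comes close to, and sometimes exceeds, the fitted weights' out-of-sample validity.

```python
import numpy as np

rng = np.random.default_rng(0)
n_predictors, n_train, n_test = 6, 40, 10_000

# Assumed ground truth: unequal but same-signed weights on each predictor.
true_w = rng.uniform(0.5, 1.5, n_predictors)

def simulate(n):
    X = rng.normal(size=(n, n_predictors))
    y = X @ true_w + rng.normal(scale=2.0, size=n)  # noisy outcome
    return X, y

X_train, y_train = simulate(n_train)
X_test, y_test = simulate(n_test)

# "Optimal" weights estimated from the small training sample via least squares.
fitted_w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

def validity(w):
    # Out-of-sample correlation between the composite and the outcome.
    return np.corrcoef(X_test @ w, y_test)[0, 1]

print("fitted weights:", round(validity(fitted_w), 3))
print("equal weights: ", round(validity(np.ones(n_predictors)), 3))
```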
Where It Comes From
Kahneman presents equal-weighting formulas in Chapter 21 of Thinking, Fast and Slow as the most surprising implication of Meehl's algorithms-vs-experts research. The concept draws on Robyn Dawes's 1979 paper "The Robust Beauty of Improper Linear Models," which demonstrated that equal-weighted composites of valid predictors consistently performed well across diverse prediction tasks. Even Kahneman admits he was shocked.
> "Multiple regression may in fact be unnecessary for many prediction tasks." — Thinking, Fast and Slow, Ch 21
The Implementation Playbook
Hiring Score: Select 5-6 job-relevant dimensions (technical skill, communication, culture fit, problem-solving, initiative, reliability). Score each 1-5 during a structured interview. Sum the scores. Rank candidates by total. This crude formula will outperform your unstructured gut feeling — because the formula is consistent and the gut isn't.
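A minimal sketch of that scorecard, with hypothetical candidates and scores:

```python
# Hypothetical structured-interview scorecard: each candidate scored 1-5
# on the same job-relevant dimensions, then ranked by the equal-weight sum.
candidates = {
    "candidate_a": {"technical": 4, "communication": 3, "culture_fit": 4,
                    "problem_solving": 5, "initiative": 3, "reliability": 4},
    "candidate_b": {"technical": 5, "communication": 2, "culture_fit": 3,
                    "problem_solving": 4, "initiative": 4, "reliability": 3},
    "candidate_c": {"technical": 3, "communication": 5, "culture_fit": 4,
                    "problem_solving": 3, "initiative": 5, "reliability": 5},
}

ranking = sorted(candidates, key=lambda c: sum(candidates[c].values()), reverse=True)
for name in ranking:
    print(name, sum(candidates[name].values()))
```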
Investment Screening: Choose 5-6 factors that plausibly predict investment success (market size, team quality, unit economics, competitive moat, growth rate, valuation). Score each 1-5. Sum. The equal-weighted total provides a better ranking than your "overall impression" — and it's reproducible.
Vendor Selection: Define 5-6 evaluation criteria. Score each vendor on each criterion. Sum the scores. The equal-weighted ranking resists the anchoring, halo, and presentation-quality biases that contaminate holistic vendor evaluations.
Personal Decisions: Choosing between apartments, job offers, or even romantic partners? List the dimensions that matter, score each option on each dimension, weight them equally, and sum. The result won't feel as satisfying as your gut, but it will be more consistent, and consistency is a large part of what drives predictive accuracy.
Key Takeaway
Equal-weighting formulas are the minimum viable prediction model, and they're shockingly effective. They require no training data, no statistical expertise, and no optimization. They beat human judgment because they eliminate the inconsistency that is a major source of prediction error. The lesson: when facing any multi-dimensional evaluation, create a formula before consulting your gut. The formula doesn't need optimal weights; it needs plausibly valid predictors and consistent application.
Continue Exploring
[[Algorithms vs. Experts]] — The broader finding that simple formulas typically match or beat expert judgment
[[Structured Interview Protocol]] — The hiring implementation of equal-weighting principles
[[Apgar Score]] — The most famous real-world equal-weighting formula
📚 From Thinking, Fast and Slow by Daniel Kahneman