
Robyn Dawes showed that a formula using equal weights for all predictors — no regression, no optimization, no training data — often performs as well as optimally weighted regression models. You don't need to know which variables matter most. You just need to know which variables matter at all.

The Framework

Equal-weighting formulas assign identical importance to every predictor variable. Instead of running a regression to find optimal weights (which requires large training samples and overfits in small ones), you simply standardize each variable to a z-score and add the results. The prediction is z₁ + z₂ + z₃ + ... + zₙ; dividing by n to take the mean changes nothing, since it preserves the ranking of cases. Dawes showed that this crude approach matches or approaches optimal regression in most real-world prediction tasks — because the gains from optimal weighting are typically small and fragile, while the gains from consistency (always using the same formula) are large and robust.

The implication is startling: you don't need sophisticated statistics to beat expert judgment. You need a list of relevant variables, a way to measure each, and the discipline to use the formula consistently.
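The formula above can be sketched in a few lines. This is a minimal illustration with invented data (the applicant records and variable names are hypothetical, not from Dawes): standardize each predictor to a z-score across the cases, then sum, with no fitted weights anywhere.

```python
import statistics

def equal_weight_scores(rows, variables):
    """Dawes-style equal weighting: z-score each variable, then sum.
    No regression, no optimization, no training data."""
    # Mean and (sample) standard deviation per variable, across all cases.
    stats = {}
    for v in variables:
        values = [row[v] for row in rows]
        stats[v] = (statistics.mean(values), statistics.stdev(values))
    # Each case's prediction is the plain sum of its z-scores.
    return [
        sum((row[v] - stats[v][0]) / stats[v][1] for v in variables)
        for row in rows
    ]

# Hypothetical loan applicants: income (k$), years of tenure, on-time payment rate.
applicants = [
    {"income": 55, "tenure": 3, "on_time_rate": 0.92},
    {"income": 80, "tenure": 7, "on_time_rate": 0.99},
    {"income": 40, "tenure": 1, "on_time_rate": 0.80},
]
scores = equal_weight_scores(applicants, ["income", "tenure", "on_time_rate"])
best = max(range(len(scores)), key=lambda i: scores[i])  # index of top applicant
```

Because each variable's z-scores sum to zero across the cases, the composite scores also sum to zero; only the ranking carries information, which is all a selection decision needs.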

Where It Comes From

Kahneman presents Dawes's equal-weighting finding in Chapter 21 of Thinking, Fast and Slow as the most extreme version of the Meehl principle. If even crude formulas beat experts, the problem isn't that experts lack the right formula — it's that experts lack consistency. A formula, even a bad one, applies the same weights to the same variables every time. An expert applies different weights depending on mood, fatigue, order of information, and whatever anchor happened to present itself.

> "A formula that combines these predictors with equal weights is likely to be just as accurate in predicting new cases as the multiple-regression formula." — Thinking, Fast and Slow, Ch 21

Cross-Library Connections

Hughes's behavioral profiling in Six-Minute X-Ray uses roughly equal weighting across quadrant indicators, needs hierarchy signals, and behavioral markers. The system works not because any single variable is perfectly weighted but because the combination of multiple variables, applied consistently, produces reliable assessments.

The Implementation Playbook

Hiring Scorecard: Define 6 traits. Score each 1-5. Sum the scores. Use the total for ranking candidates. Don't try to weight "technical skill" at 3× "communication" — equal weighting performs just as well and is far simpler to implement and maintain.

Investment Screening: Define 5-8 factors (market size, team, unit economics, moat, growth rate). Score each 1-5. Equal-weight the sum. The composite score will typically outperform your intuitive ranking, and it takes about 5 minutes per deal.

Vendor Selection: Define evaluation criteria. Score each equally. Sum. Select the vendor with the highest score. Resist the temptation to weight criteria differently unless you have strong empirical evidence for the weights.

Customer Scoring: Lead quality = sum of equally weighted signals (company size, engagement level, budget authority, timeline urgency, need intensity). In Dawes's research, equal-weighted composites like this consistently matched or beat intuitive case-by-case judgment, so expect the score to outperform sales reps' gut qualification in most cases.
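Unlike the 1-5 scorecards above, raw lead signals arrive on wildly different scales (headcount in the thousands, engagement between 0 and 1), so "equal weighting" requires standardizing first; otherwise the largest-scale signal silently dominates. A minimal sketch with invented lead data:

```python
import statistics

# Hypothetical leads; the signal names and values are illustrative only.
leads = [
    {"headcount": 1200, "engagement": 0.30, "budget": 1, "urgency": 2, "need": 3},
    {"headcount":   45, "engagement": 0.90, "budget": 1, "urgency": 5, "need": 5},
    {"headcount":  300, "engagement": 0.10, "budget": 0, "urgency": 1, "need": 2},
]
SIGNALS = ["headcount", "engagement", "budget", "urgency", "need"]

def z(value, values):
    # Standardize so no signal can swamp the composite by sheer scale.
    return (value - statistics.mean(values)) / statistics.stdev(values)

def lead_score(lead):
    # Equal weighting after standardization: a plain sum of z-scores.
    return sum(z(lead[s], [l[s] for l in leads]) for s in SIGNALS)

ranked = sorted(leads, key=lead_score, reverse=True)
```

Note that the small company with high engagement, urgency, and need ranks first: the composite rewards breadth across signals, which is exactly the behavior equal weighting is designed to produce.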

Key Takeaway

Equal-weighting formulas are the ultimate "good enough" decision tool. They sacrifice theoretical optimality for practical robustness — and in the real world, robustness wins. If you know which variables matter, you already have everything you need. Weight them equally, apply the formula consistently, and you'll outperform expert judgment without a single regression analysis.

Continue Exploring

[[Algorithms vs. Experts]] — The broader finding that structured methods outperform holistic judgment

[[Apgar Score]] — The most famous equal-weighting formula in medicine

[[Structured Interview Protocol]] — Kahneman's hiring protocol, which uses roughly equal-weighted traits


📚 From Thinking, Fast and Slow by Daniel Kahneman