The “Seersucker” Paradox
As the 2010 election season fades into the background, political pundits have moved from predicting the midterm winners and losers to forecasting the resulting legislative impact. And regardless of the topic, or of their track record of success, these soothsayers exude confidence in their prognostications.
It is the conviction behind forecasts such as these that compelled Philip Tetlock, a psychologist at the University of California, Berkeley, to conduct a two-decade-long study of expert predictions1. He recruited 284 people whose professions included “commenting or offering advice on political or economic trends” and asked them to forecast the probability of various events and outcomes, on topics both within and outside their particular areas of expertise. By 2003, he had accumulated a database of 82,361 forecasts against which to evaluate the experts’ accuracy.
What did he find? In short, the experts performed worse than random chance. Though 96% of the “experts” had post-graduate training, they were still unable to predict the future accurately. There is no escaping the fact that human beings cannot consistently call stock market moves, foretell political or geopolitical events, or successfully forecast changes in interest rates or commodity prices – the world is simply too complex.
While Tetlock’s results are not surprising, the appetite for such forecasts remains strong. As J. Scott Armstrong’s “seersucker theory” holds: no matter how much evidence exists that seers do not exist, suckers will pay for the existence of seers. And, as Tetlock points out, media attention gravitates to those whose forecasts are most dramatic – whether optimistic or pessimistic. In the business of forecasting, it pays to stand out from the crowd.
Inevitably, some predictions will ring true – the mistake lies in attributing those successes to prescience rather than chance. As legendary economist Milton Friedman concluded, “If you’re going to predict, predict often.”
1The results of Tetlock’s work are detailed in his book “Expert Political Judgment: How Good Is It? How Can We Know?”