Prediction Farce: Why Expert Forecasters Are No Better Than Random and What Traders Should Do Instead

This PDF book review outlines the case against “prediction”. An excerpt:

“Prediction is one of the pleasures of life. Conversation would wither without it. ‘It won’t last. She’ll dump him in a month.’ If you’re wrong, no one will call you on it, because being right or wrong isn’t really the point. The point is that you think he’s not worthy of her, and the prediction is just a way of enhancing your judgment with a pleasant prevision of doom. Unless you’re putting money on it, nothing is at stake except your reputation for wisdom in matters of the heart. If a month goes by and they’re still together, the deadline can be extended without penalty. ‘She’ll leave him, trust me. It’s only a matter of time.’ They get married: ‘Funny things happen. You never know.’ You still weren’t wrong. Either the marriage is a bad one – you erred in the right direction – or you got beaten by a low-probability outcome. It is the somewhat gratifying lesson of Philip Tetlock’s new book, “Expert Political Judgment: How Good Is It? How Can We Know?”, that people who make prediction their business – people who appear as experts on television, get quoted in newspaper articles, advise governments and businesses, and participate in punditry roundtables – are no better than the rest of us. When they’re wrong, they’re rarely held accountable, and they rarely admit it, either. They insist that they were just off on timing, or blindsided by an improbable event, or almost right, or wrong for the right reasons. They have the same repertoire of self-justifications that everyone has, and are no more inclined than anyone else to revise their beliefs about the way the world works, or ought to work, just because they made a mistake. No one is paying you for your gratuitous opinions about other people, but the experts are being paid, and Tetlock claims that the better known and more frequently quoted they are, the less reliable their guesses about the future are likely to be.
The accuracy of an expert’s predictions actually has an inverse relationship to his or her self-confidence, renown, and, beyond a certain point, depth of knowledge. People who follow current events by reading the papers and newsmagazines regularly can guess what is likely to happen about as accurately as the specialists whom the papers quote. Our system of expertise is completely inside out: it rewards bad judgments over good ones.”

What Tetlock’s Research Means for Traders

Tetlock’s finding that the accuracy of expert predictions has an inverse relationship to the expert’s renown and self-confidence is the empirical version of what TurtleTrader has been documenting in the financial markets for years. The most prominent market forecasters, those with the most media appearances, the most confident tone, and the deepest institutional credibility, are the ones whose predictions are least reliable. The system rewards the appearance of expertise over its substance.

The social dynamics Tetlock describes in the opening paragraph are identical to those that sustain financial market forecasters. The analyst who called for the market to rise and was wrong does not admit it: they were off on timing, the event they didn’t anticipate was improbable, they were almost right, the market will eventually prove them correct. The same repertoire of self-justifications that shields social predictions from accountability shields professional financial predictions as well. Nobody systematically tracks the record, and the forecaster simply moves on to the next prediction.

This is precisely the environment that creates the opportunity for trend following. If predictions are systematically unreliable, and if the system rewards confident prediction over accurate prediction, then the market is populated with participants making decisions based on forecasts that are no better than random. Those participants are the counterparty trend following needs: people who buy based on bullish forecasts and sell based on bearish forecasts, regardless of what price is actually doing. When the forecast proves wrong and price moves against them, they either hold because the self-justification says the market will eventually prove them right, or they exit at a loss. Both behaviors produce the price movements that systematic trend following captures.

Trend following’s explicit rejection of prediction is not a philosophical position. It is the practical consequence of Tetlock’s finding. If expert predictions are no better than random, and if self-confidence and renown make them worse rather than better, then building a trading approach on market predictions is building on a foundation that research has demonstrated to be unreliable. The alternative, reading what price is actually doing and following it with defined rules, requires no prediction and therefore is not subject to the failure mode Tetlock documents.
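To make “reading what price is actually doing and following it with defined rules” concrete, here is a minimal sketch of one such rule: a Donchian-style channel breakout. This is an illustration only, not TurtleTrader’s actual system; the function name and the lookback parameters are assumptions chosen for clarity.

```python
# Minimal sketch of a non-predictive, price-following rule (illustrative only).
# Lookback lengths are arbitrary examples, not any published system's settings.

def trend_signal(prices, entry_lookback=20, exit_lookback=10):
    """Return a list of positions (1 = long, 0 = flat), one per price.

    Go long when price exceeds the highest price of the prior
    `entry_lookback` periods; go flat when it falls below the lowest
    price of the prior `exit_lookback` periods. No forecast is made:
    the rule only reacts to what price has already done.
    """
    position = 0
    signals = []
    for i, price in enumerate(prices):
        if position == 0 and i >= entry_lookback:
            if price > max(prices[i - entry_lookback:i]):
                position = 1  # breakout above recent highs: follow it
        elif position == 1 and i >= exit_lookback:
            if price < min(prices[i - exit_lookback:i]):
                position = 0  # breakdown below recent lows: exit
        signals.append(position)
    return signals

# A steady uptrend triggers an entry once the breakout level is crossed,
# and the position is held for as long as the trend persists.
uptrend = list(range(100, 140))
print(trend_signal(uptrend, entry_lookback=5, exit_lookback=3)[-1])  # 1 (long)
```

The point of the sketch is structural: every decision is conditioned on observed price, so there is nothing to be “almost right” about and no timing excuse to fall back on when a trade loses.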

Frequently Asked Questions

What did Philip Tetlock find about expert predictions?

In his research published in Expert Political Judgment, Tetlock found that professional experts who make predictions for a living are no more accurate than casual observers, and that the most prominent, confident, and frequently quoted experts are actually less accurate than their less prominent peers. The accuracy of expert predictions has an inverse relationship to the expert’s self-confidence and renown. The system that rewards prediction expertise does so based on the quality of presentation rather than the quality of the predictions themselves.

Why are forecasters rarely held accountable for wrong predictions?

Because the prediction culture provides a ready supply of self-justifications that convert wrong predictions into almost-right predictions or unlucky predictions rather than simply wrong ones. The expert was off on timing, blindsided by a low-probability event, wrong for the right reasons, or will eventually be proven correct. This repertoire of excuses prevents the accountability mechanism that would require predictions to be evaluated on their actual accuracy rather than on the confidence with which they were delivered.

How does Tetlock’s research support the trend following approach?

By establishing empirically that market and economic predictions by experts are unreliable, and that more prominent and self-confident experts are less accurate. If this is true of expert predictions, building a trading approach on market forecasts is building on an unreliable foundation. Trend following’s explicit refusal to predict, reading price and following it with rules, is the practical response to this documented unreliability. It requires no prediction and therefore is not subject to the accountability gap that Tetlock identifies.

Trend Following Systems
Want to learn more and start trading trend following systems? Start here.