The Signal and the Noise: Why So Many Predictions Fail — but Some Don’t, by Nate Silver.

Nate Silver is now an election blogger for the New York Times, but started out doing baseball statistics in his spare time while working for KPMG. That became PECOTA, a system for forecasting the performance and career development of Major League Baseball players. In the meantime, he also amused himself by analysing poll statistics and predicting US election results. I wrote about his electoral statistics here.

Despite his very public success at prediction, his book is a cautionary tale about the failure of many predictions.

Data-driven predictions can succeed – and they can fail. It is when we deny our role in the process that the odds of failure rise. Before we demand more of our data, we need to demand more of ourselves.

He points out that one of the ways we humans have evolved is to find patterns – something that mostly serves us well.

We are wired to detect patterns and respond to opportunities and threats without much hesitation… Our biological instincts are not always very well adapted to the information-rich modern world. Unless we work actively to become aware of the biases we introduce, the returns to additional information may be minimal – or diminishing.

So Silver’s book has a two-part structure. He sets out the problems – the kinds of predictions that work and those that don’t – and then sets out his solution: how predictive skills are massively enhanced by thinking rigorously, particularly by using the concepts embodied in Bayesian statistics.

Silver first of all describes the many ways in which predictions fail, and what we can learn from that experience. A few examples of failure include:

  • predicting terrorist events (such as September 11),
  • predicting earthquakes – either the timing of the earthquake, or the size of the biggest possible earthquake – and
  • medical research, where recent analysis suggests that most published research findings are false.

Silver talks about how important it is to test your predictions and learn from mistakes. He gives the positive example of weather forecasting, which has got significantly better in the last 20 years as scientists build complex models and test their predictions against outcomes to improve the models. His less positive example is pundit economists who are asked to predict various economic variables; when tested, they rarely do better than chance – and they spend far less time reviewing the success of their predictions.

He also talks about the importance of understanding how good your prediction is, giving the example of a town where the predicted maximum flood height was 49 feet, so the retaining wall was built to 50 feet. Sadly, the margin of error of the estimate (based on the accuracy of previous flood predictions) was nine feet, so the chances were around 30% that the retaining wall wouldn’t be high enough.
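The arithmetic behind this kind of exceedance risk is easy to sketch. The snippet below is my own illustration, not Silver’s calculation: it assumes the forecast error is roughly normally distributed and (as one possible reading) treats the nine-foot margin as the standard deviation of that error. The exact probability depends entirely on how you interpret the margin of error, which is rather the point – a one-foot buffer over the central forecast is far thinner protection than it sounds.

```python
from math import erf, sqrt

def exceedance_probability(forecast: float, threshold: float, sigma: float) -> float:
    """P(actual > threshold), assuming the actual value is normally
    distributed around the forecast with standard deviation sigma.
    (Hypothetical model: the book does not specify the distribution.)"""
    z = (threshold - forecast) / sigma
    # Normal tail probability via the error function.
    return 0.5 * (1 - erf(z / sqrt(2)))

# Forecast crest 49 ft, wall at 50 ft, nine-foot margin treated as one sigma.
p = exceedance_probability(forecast=49.0, threshold=50.0, sigma=9.0)
```

Under this (assumed) reading the wall fails well over a third of the time; a stricter interpretation of the nine feet gives a smaller but still alarming number.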

Apart from rigorous testing and understanding the margin of error, his other main prescription for forecasters is to use the Bayesian concept of prior probabilities. This is also something Daniel Kahneman spends a bit of time on. In making a prediction based on new information, you should start with an initial view, then ask how much your new information improves the prediction (is the information all that relevant and/or reliable?), and then update your view based on the relative usefulness of the two pieces of information. But many people disregard the prior information (the weight of all previous opinion polls) in favour of the new and exciting information (such as a single new opinion poll), even if that new information is quite unreliable.
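The updating step above is just Bayes’ rule, and a small sketch shows why a noisy new poll shouldn’t swamp the prior. The numbers here are hypothetical, chosen only for illustration: suppose the weight of previous polls gives a candidate an 80% chance of leading, and a new but unreliable poll shows the opponent ahead.

```python
def bayes_update(prior: float, p_evidence_if_true: float,
                 p_evidence_if_false: float) -> float:
    """Posterior P(hypothesis | evidence) via Bayes' rule.

    prior               -- initial P(hypothesis)
    p_evidence_if_true  -- P(seeing this evidence | hypothesis true)
    p_evidence_if_false -- P(seeing this evidence | hypothesis false)
    """
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Hypothetical figures: prior 80% that candidate A leads; a noisy poll
# showing A behind would occur 30% of the time if A truly leads, and
# 70% of the time if A doesn't.
posterior = bayes_update(prior=0.8,
                         p_evidence_if_true=0.3,
                         p_evidence_if_false=0.7)
# posterior ≈ 0.63
```

One unreliable contrary poll moves the estimate from 80% down to about 63% – a real update, but nowhere near flipping the forecast, which is exactly the balance between prior and new evidence that Silver argues for.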

Silver illustrates this section with a number of examples from poker games and sports betting, with the tantalising view of those who managed to make big money from a good understanding of statistics (as well as a lot of self-discipline to keep to the statistics and away from the emotion).

Although, in theory, those who spend their lives in modelling and prediction, as many actuaries do, should already know most of what is in this book, it is an important read. It provides some useful reminders of the many different errors professional prognosticators can make; it would be surprising if one or two actuaries hadn’t fallen victim to them over the years. It also has some great accessible examples to use when explaining the kind of forecasting we do to non-actuaries, particularly when trying to strike the right balance between paying attention to the latest data and making a long-term projection.

Highly recommended.