Thursday, December 18, 2014

The Signal and the Noise by Nate Silver (Book Review #120 of 2014)


The Signal and the Noise: Why So Many Predictions Fail — but Some Don't
Tim Geithner cited this book in his memoir, so I know Silver's writing has been influential on at least one policymaker. I read this book partially out of curiosity over how smart Silver really is when it comes to economics and statistics. This book came after Silver's FiveThirtyEight.com blog successfully predicted Presidential and Senate races and publishers wanted to capitalize on nerdy books like Michael Lewis' Moneyball. While Silver's election forecasting has been lauded, I never found it much more than novel-- he explains in the book that he simply took an average of others' forecasts, weighted by their past accuracy. The digging through data was more impressive to me than the results. I strongly recommend this book to anyone interested in forecasting, especially as applied to economics and policy making in areas such as climate change and financial regulation. It's also a good read for those starting a business or for CEOs looking to push back on their internal forecasters. Prerequisites before reading it are Michael Lewis' Moneyball and The Big Short; Robert Shiller's Irrational Exuberance; and Daniel Kahneman's Thinking, Fast and Slow. I would also recommend Benoit Mandelbrot's The (Mis)Behavior of Markets and the climate change chapters in Dubner & Levitt's SuperFreakonomics.
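
To make that weighted-average idea concrete, here's a minimal sketch in Python-- my own illustration of the general technique, not Silver's actual FiveThirtyEight model; the pollster names and error figures are invented:

```python
# Accuracy-weighted forecast averaging -- an illustration of the general
# idea, NOT Silver's actual model. All numbers here are made up.

forecasts = {"Pollster A": 0.52, "Pollster B": 0.48, "Pollster C": 0.55}
past_abs_error = {"Pollster A": 0.02, "Pollster B": 0.05, "Pollster C": 0.03}

# Weight each forecaster by the inverse of their historical error,
# then normalize the weights to sum to 1.
raw = {name: 1.0 / err for name, err in past_abs_error.items()}
total = sum(raw.values())
weights = {name: w / total for name, w in raw.items()}

combined = sum(weights[name] * value for name, value in forecasts.items())
print(f"Accuracy-weighted forecast: {combined:.3f}")
```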

Silver intends the book to be an investigation of various data-driven predictions. He is also proselytizing in the name of Bayesian analysis, with the goal of leading the reader to think more probabilistically. Silver writes that we can all improve our predictions by adjusting them when new information arises. This may seem like common sense, but I forecast for a budget office that has to project quarterly tax revenue two years in advance and doesn't have the luxury of regularly updating the published forecast when new information comes in (a real problem when the average retail price of gasoline comes in over $1/gallon below what anyone was forecasting even a year ago). It takes both courage and humility to be Bayesian when our media culture often hammers people for "flip-flopping" on issues. Bayesian thinking uses prior estimates as a starting point and changes them as you encounter new information.
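
The mechanics behind this are just Bayes' theorem. A toy update in Python, with invented probabilities (my illustration, not an example from the book):

```python
# A toy Bayesian update -- all probabilities here are invented.
prior = 0.30                # P(hypothesis): e.g., revenue beats forecast
p_data_if_true = 0.60       # P(new data | hypothesis)
p_data_if_false = 0.20      # P(new data | not hypothesis)

# Bayes' theorem: posterior = P(data|H) * P(H) / P(data)
numerator = p_data_if_true * prior
posterior = numerator / (numerator + p_data_if_false * (1 - prior))
print(f"Updated probability: {posterior:.2f}")  # 0.18 / 0.32 = 0.56
```

Each new piece of data moves the estimate; the prior keeps one data point from moving it too far.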

Perhaps what I like most about this book is the interviews Silver conducts with people ranging from NASA scientists to economists to Donald Rumsfeld. He converses with Justin Wolfers about Wolfers' critiques of Silver's predictions at FiveThirtyEight.com. He talks with forecasters about their forecasts, theories, problems, etc., even though he already knows a lot about the field. I work as an economic forecaster for state government, and I see the best practices, the most common mistakes, and the heuristic biases that Silver describes in detail.

Silver begins with a seemingly odd-fitting hypothesis: as Gutenberg's printing press made books and knowledge more widespread, conflict increased as people felt they had more control over their own destinies. As we gain more information and data, we know less about what to do with it. We pick and choose which data we prefer and become more tribal, more hostile to other tribes who focus on a different set of data.

The terms forecasting and prediction are currently used interchangeably but had subtly different meanings, with theological implications, in the Middle Ages. Even today, seismologists say earthquakes cannot be predicted, as "predict" means a set time and date. But they can be forecasted: a forecast is a probability of an event, usually over a range of time. Forecasts are made in uncertainty. The U.S. has a "prediction addiction" and a prediction problem. Predictions for seemingly important series-- like GDP, inflation, and unemployment-- have been wildly inaccurate; the economic variables most frequently forecast tend to be consistently wrong. Silver recounts the housing bubble and 2007 financial crisis, when CDOs were being rated AAA by ratings agencies who should have known better. Some economists acknowledged the housing bubble but did not accurately predict the consequences of its bursting. In this lengthy section, Silver cites Shiller, Rogoff & Reinhart, Larry Summers, Dean Baker, Paul Krugman, and others. Silver chalks up the ratings agencies' errors to the common forecasting error of not having a large enough sample size, which makes later observations appear much more improbable than they should be. Many of Wall Street's forecasters' models only went back to the 1980s, and so missed the simple fact that real housing prices did not appreciate very much over the long haul, not to mention several recessions in American history.

Silver performed an amusing survey of The McLaughlin Group's weekly forecasts and found them to be no better than flipping a coin. He looks at how experts in various fields tend to be inaccurate in their forecasts. There are "foxes," who know something about a lot of things, and "hedgehogs," who know one big thing. Hedgehogs make good TV guests but, studies have shown, are not as good at predicting as foxes. (The most recent example of this I've seen was a finding that various Ivy League experts' predictions of Russian aggression toward Crimea were less accurate than those of less-credentialed experts or experts in other fields.) Silver remarks that good forecasts are not purely data-driven; more and better data help, but sometimes not all that much. In politics, an incumbent running in a district solid for his party might suddenly be trounced at the news of infidelity or corruption, something a purely statistical model wouldn't predict.

Silver cut his forecasting teeth on Major League Baseball, designing a system (PECOTA) to forecast draft picks' and minor leaguers' potential output. PECOTA did okay against scouts but not fabulously, and Silver sold the system to Baseball Prospectus while he went on to publish books and start FiveThirtyEight.com. It's easy to conclude that a little bit of computer know-how can give you a huge advantage, but Silver states that's not what he intends to say. Better models may help you at the margin, but, like any business, forecasting is competitive and people will adjust and take away your advantages.

What is needed is a good harmony between man and machine (Tyler Cowen picked up this theme in his recent book Average is Over). Algorithms cannot replace humans at forecasting completely, at least not anytime soon. Silver gives the example of weather forecasting, stating that humans add about 25% accuracy to computer models simply by using their eyes to identify outliers on the weather map, faster than computers or t-tests can. He evaluates the forecasts of NOAA and The Weather Channel, noting that since the U.S. government nicely provides weather data for free, for-profit forecasters compete in terms of accuracy. But the perception of accuracy is what matters most-- the incentive is ratings, not accuracy, after all.

The government also publishes free economic data, but it is messy, noisy, and constantly subject to revision. Economic forecasters don't publish confidence intervals for their forecasts because they are "embarrassed." As an economic forecaster, I've always wondered why we don't publish such intervals, and Silver explains the history here. Silver also explains the problem of "overfitting" in forecasts-- putting in too many independent variables to fit the curve, or too often being "fooled by randomness" (an oddly sly allusion to Nassim Taleb, whom Silver leaves out of the book... there must be some history between them).
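
Overfitting is easy to demonstrate in a few lines. In this sketch (my example, not one from the book), a high-degree polynomial fits a small noisy sample better in-sample than a straight line does, then predicts worse out of sample:

```python
# A minimal demonstration of overfitting with synthetic data.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 12)
y = 2 * x + 1 + rng.normal(0, 3, size=x.size)  # true relationship is linear

linear = np.polyfit(x, y, deg=1)   # two parameters: captures the signal
wiggly = np.polyfit(x, y, deg=9)   # ten parameters: mostly chases the noise

x_out = 10.5                       # a point just outside the sample
print("linear fit predicts:    ", np.polyval(linear, x_out))
print("9th-degree fit predicts:", np.polyval(wiggly, x_out))
# The 9th-degree model's in-sample fit looks great; its out-of-sample
# prediction is typically far worse than the simple line's.
```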

Silver writes of a successful gambler on NBA games who has a statistical model but also watches most of the games and makes personal judgments about how the team is communicating with one another, the effort they're visibly putting out, etc. This theme leads to a long exposition of poker and how gamblers have to quickly calculate the odds of opponents' hands given what they hold and what their opponent has done. A computer would be good at this, but not at the bluffing aspects, as anyone who has watched Star Trek: TNG knows.
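
The odds side of this is mechanical. Here's the kind of back-of-the-envelope arithmetic a player does for a flush draw after the flop-- my illustration, not an example from the book:

```python
# Odds of completing a flush draw after the flop: 9 of the 47 unseen
# cards complete the flush, with two cards (turn and river) to come.
outs, unseen = 9, 47
p_miss_turn = (unseen - outs) / unseen               # 38/47
p_miss_river = (unseen - 1 - outs) / (unseen - 1)    # 37/46
p_hit = 1 - p_miss_turn * p_miss_river
print(f"Chance of completing the flush by the river: {p_hit:.1%}")  # ~35%
```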

There is a detailed look at Kasparov vs. IBM's Deep Blue. Chances are that your chess game will result in a position that has never before been played or recorded. Machines programmed with millions of pre-played game data run out of history after a few moves and have to formulate a strategic analysis of the game. Silver learns in an interview that an undiscovered bug in Deep Blue's program threw off Kasparov's estimation of the computer's ability and strategy when he was analyzing the match results afterward; that bug ultimately shook Kasparov. Kasparov thinks more like a poker player at times, trying to determine if Deep Blue has a "tell" or is bluffing.

Silver writes that the average forecaster is still probably good relative to the average guy on the street, and he makes this point looking at poker-player data. He played online poker as a slightly above-average player, making money. When later looking at the data, he realized that the bottom 10% of players were so bad that they were subsidizing the average players. When the bottom 10% dwindled, previously average players like Silver became the bottom 10% and lost. Apparently, 52% of online players have a bachelor's degree, and they are smarter than the average citizen who just buys a lottery ticket. This leads to overconfidence and a sense of entitlement, which Silver admitted to while playing. Poker takes more skill than roulette, but is still heavily dependent on luck. This segues into a comparison with stock traders, who also suffer from hubris and the belief that they are "above average."

Silver gives a good summary of Eugene Fama's efficient markets hypothesis and Richard Thaler's critique-- the "no free lunch" aspect versus the "price is always right" aspect. Silver channels Kahneman to describe how heuristics and biases affect buyers' and sellers' forecasts. We should all be aware of our biases and work against them (Silver recommends Robin Hanson's blog for help); purporting to have none shows we have many. Never trust a forecaster or scientist who states he has no biases.

From here, Silver looks at the enormously controversial yet important forecasting of climate change. While there is wide agreement among climatologists about the underlying theory, the warming trend, and its causes, there is wide disagreement about the models used to forecast. This is important because the forecasts are often 30-100 years out and the margin for error is quite high. There is contentious disagreement about the use of computer models: scientists are dismissive of forecasters and models, while climate-skeptic forecasters are dismissive of the science. Silver cautions that one should never trust a forecaster who is dismissive or ignorant of the underlying science behind the data he is forecasting, and never trust a scientist who is dismissive or ignorant of statistics and forecasting.

One problem with measuring climate change over time, and with betting on various models, is that you can easily cherry-pick your start and end dates to get a different result (are temperatures trending higher or lower?). Silver examines some of the forecasts and finds the IPCC's model (problematic for reasons he describes) fairly accurate since the 1990s. Nonetheless, Bayesian analysis would suggest that people are correct to increase their skepticism about the warming trend in recent years, since the earth's temperatures did not warm from 2004-2011. Each new data point should cause an adjustment of your forecast. Silver laments that we could have been having a debate about the uncertainties of the forecasts all these years, rather than a debate about whether the problem really exists.
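
The cherry-picking problem is easy to see with synthetic data-- my illustration, not actual temperature records: the same noisy upward series can show nearly any trend over a short enough window.

```python
# Start/end-date cherry-picking on a noisy upward trend (synthetic data).
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(1980, 2012)
# True signal: +0.02 deg/yr, buried in year-to-year noise.
temps = 0.02 * (years - 1980) + rng.normal(0, 0.12, size=years.size)

def trend(start, end):
    """OLS slope (deg/yr) between two years, inclusive."""
    mask = (years >= start) & (years <= end)
    return np.polyfit(years[mask], temps[mask], 1)[0]

print(f"1980-2011 slope: {trend(1980, 2011):+.4f} deg/yr")  # near +0.02
print(f"2004-2011 slope: {trend(2004, 2011):+.4f} deg/yr")  # noise-dominated
# Short windows are dominated by noise: pick your window, pick your story.
```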

Silver's faith in Bayesian analysis, and his failure to think through its logical conclusions, is perhaps a weakness of the book. Bayesians like Silver say that our technological progress suggests further advancement is inevitable, and that we're converging on a point where we will seemingly be correct about everything; that we're evolving and will eventually achieve a progressive utopia. I'm reminded of Chris Hedges' arguments against such thinking: quantum mechanics demonstrates that some things will always be unknowable, and world history shows no progress toward a utopia. There will always be randomness; there will always be noise mistaken for signal. Silver admits that the political polarization in America suggests our technological advancement is not inevitable. (He also touches on chaos theory throughout the book.)

An example of climatology frustration is that some simple ideas-- like putting sulfur into the atmosphere-- would seem to be something we can at least experiment with. Volcanoes give evidence that putting a small amount of sulfur into the atmosphere would likely reduce the greenhouse effect, but environmentalists clash with climatologists on the issue. Again, even if our technological progress suggests further advancement is inevitable, the political disputes and cognitive biases suggest otherwise.

Silver closes the book with a look at hindsight bias (although I don't think he uses that term). In hindsight, people wonder how the Pearl Harbor attack could have been a surprise; the silence in radio transmissions from Japan's carrier fleet should have been the signal in the noise. One definition of "noise" is not randomness but multiple-- too many-- signals, which is the problem with SIGINT. The FBI and NSA are constantly following up on leads that turn out to be false signals.

The conclusion of the book: Think probabilistically. Move from simplifications and approximations to more precise forecasts and statements as more data is collected. Go from "investors cannot beat the market" to "most investors cannot beat the stock market relative to their risk and transaction costs; it is hard to tell if any can, due to noise in the data." Work to reduce your biases: to say that you have none shows that you have many. "Try and err": make a lot of forecasts and evaluate them. "Distinguishing the signal from the noise requires both scientific knowledge and self-knowledge: the serenity to accept the things we cannot predict, the courage to predict the things we can, and the wisdom to know the difference."
   
I give the book 4.5 stars out of 5.
