Theory says Nate Silver is full of crap

10/28/2014 08:54:00 AM
...not just Nate Silver.

I keep seeing these Senate forecasts all over, and can't make heads or tails of them:
"Probability" of GOP winning control of Congress, according to several experts.
Now I'm a fairly competent forecaster, and I don't know what that means. You can't observe a probability of something, and therefore you can't forecast the probability of something. Matt Yglesias thought he was waxing philosophical when he pointed this out, but it's actually a rigorous statistical critique--the "probabilities" we compute in statistical procedures are nothing more than artifacts of the procedure used, not real-world objects that can be forecast.

The Bayesians have all finally gone crazy. A forecast consists of a range of outcomes at a pre-determined probability level. For example, "we are 95% confident that the GOP will win between 45 and 55 Senate seats" is a forecast. This claim is falsifiable--if the GOP wins fewer than 45 seats or more than 55, our forecast is incorrect. This claim has a margin of error. This claim is scientific.
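
To make the distinction concrete, here is a minimal sketch of how an interval forecast like that could be produced: simulate many elections from assumed per-race win probabilities and report the central 95% range of total GOP seats. The safe-seat count and race probabilities below are purely illustrative, not anyone's actual model.

```python
# Minimal sketch: an interval forecast from a toy Monte Carlo model.
# SAFE_GOP_SEATS and race_probs are hypothetical, illustrative numbers.
import random

random.seed(0)

SAFE_GOP_SEATS = 42                      # seats not up for election or considered safe (made up)
race_probs = [0.9, 0.8, 0.7, 0.6, 0.55,  # assumed GOP win probabilities in the
              0.5, 0.45, 0.4, 0.3, 0.2]  # ten competitive races (made up)

def simulate_total_seats():
    """Draw one simulated election and return the total GOP seat count."""
    return SAFE_GOP_SEATS + sum(random.random() < p for p in race_probs)

draws = sorted(simulate_total_seats() for _ in range(100_000))
lo, hi = draws[int(0.025 * len(draws))], draws[int(0.975 * len(draws))]
print(f"95% forecast interval: {lo} to {hi} GOP seats")
```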

The pollsters, by contrast, have gotten it exactly backwards: they pick a pre-determined outcome--the GOP wins at least 51 Senate seats--and give us a probability. How do we validate such a claim? We can't even falsify it, because if Republicans don't take control of Congress, these pollsters will simply say their probabilities were correct and that something improbable happened. We aren't even given a margin of error, which is all the more troubling given the huge range--from 55% to 93%--of forecast probabilities!
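
To see why a single "probability of control" can't be falsified by one election, consider the conventional 5% rule: an outcome only contradicts a forecast if the forecast assigned it less than a 5% chance. A quick sketch (the expert labels are hypothetical) shows that none of the reported numbers come anywhere near that for either possible outcome.

```python
# None of the reported probabilities can be falsified by a single outcome
# at the conventional 5% level. Expert labels are hypothetical.
reported = {"Expert A": 0.55, "Expert B": 0.70, "Expert C": 0.93}

for name, p_gop in reported.items():
    for outcome, p_outcome in [("GOP control", p_gop), ("no GOP control", 1 - p_gop)]:
        verdict = "falsified" if p_outcome < 0.05 else "consistent with the forecast"
        print(f"{name}: if {outcome}, the claim is {verdict} (it assigned {p_outcome:.0%})")
```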

At the same time, it's obvious why they are reporting probabilities instead of forecasts: with the possible exception of the Washington Post, none of these forecasts are statistically significant. Science typically requires 95% confidence (or, in the Bayesian world, "probability") in a prediction before calling it significant. None of these models produces a statistically significant prediction of who will control Congress.

A recent paper in the AER explains why "experts" would make such a vacuous prediction. The authors present a model of an "expert" who makes forecasts about a sequence of future events, and a client who tests those forecasts against the data. Even though the client chooses how to test the expert's predictions, the expert can usually construct forecasts that pass the test even when he has no expertise whatsoever. In our case, the implicit "test" of these forecasts is whether the actual outcome was likely given the forecast, which is pretty darn easy to pass without looking at a single poll: the conventional scientific threshold for an "unlikely" event is a probability below 5%, and every one of these models leaves a non-GOP Congress well above that threshold. They can't be falsified. And even if we changed the significance threshold, the pollsters could simply adjust their forecasts so that they still couldn't be falsified.
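
Here is a toy version of that argument, assuming a test that rejects an "expert" only when something he called very unlikely (under 5%) actually happens. An expert who knows nothing and hedges every forecast toward even odds can never be rejected, no matter what the data do. The 0.6 forecast and the simulated events below are made up for illustration.

```python
# Toy version of the testing game: the test rejects only if some realized
# outcome was assigned probability below alpha. An ignorant expert who
# hedges toward even odds passes no matter what happens. All numbers made up.
import random

random.seed(1)

def test_passes(forecasts, outcomes, alpha=0.05):
    """Pass unless the expert assigned < alpha to something that occurred."""
    return all((p if happened else 1 - p) >= alpha
               for p, happened in zip(forecasts, outcomes))

outcomes = [random.random() < 0.8 for _ in range(50)]  # true process, unknown to the "expert"
ignorant_forecasts = [0.6] * len(outcomes)             # same hedged guess every period, no polls read

print(test_passes(ignorant_forecasts, outcomes))       # True: this expert can never fail the test
```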

The alternative to this is claim validation. The AER paper shows that if you really want to know who is full of BS, make them validate their claims: make them give an actual prediction of how many GOP congressmen and senators there will be, and if the actual number falls outside their predicted range, call them out for being wrong. Although this is not strictly logical--one can falsify but never truly validate--the authors show that, because forecasts are endogenous to the tests we apply to them, only a standard of validation can actually force forecasters to produce truly falsifiable claims.
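
In code, validation is about as simple as it sounds: make each forecaster publish a seat range up front and check whether the realized count lands inside it. The forecaster names and ranges below are hypothetical.

```python
# Validation: publish a seat range up front, then check it against reality.
# Forecaster names and ranges are hypothetical.
predictions = {
    "Expert A": (45, 55),
    "Expert B": (50, 53),
    "Expert C": (47, 51),
}

def validate(predicted_ranges, actual_seats):
    """Return which forecasters' ranges contained the actual seat count."""
    return {name: lo <= actual_seats <= hi
            for name, (lo, hi) in predicted_ranges.items()}

print(validate(predictions, actual_seats=54))
# e.g. {'Expert A': True, 'Expert B': False, 'Expert C': False}
```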

Until then, I'm calling them out. All the pollsters are full of crap. That isn't how you do statistics.