In the wake of Obama's re-election, people are going to spend a lot of time first crowing over the success of Nate Silver's election forecasting at FiveThirtyEight.com, then telling us all why he didn't do that good a job. The point is not that Nate Silver is a genius. The point is that these methodologies can be tested. We can see how they perform. Then we can tweak them and see if they perform better. As a whole, they are not going to get worse. And these statistical methodologies are slowly creeping into the public view.
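What does "testing" a probabilistic forecast even look like? One standard way is a proper scoring rule like the Brier score: the mean squared error between the probabilities you published and the 0/1 outcomes that actually happened. Here is a minimal sketch; the forecasts and outcomes are made-up illustrative numbers, not Silver's actual 2012 predictions.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and actual
    0/1 outcomes; lower is better. 0.25 is what a coin-flip forecast
    (always saying 0.5) earns."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical state-level win probabilities from two imaginary models.
model_a = [0.92, 0.80, 0.55, 0.30]   # a sharper, more confident model
model_b = [0.60, 0.60, 0.50, 0.45]   # a hedging, vaguer model
results = [1, 1, 1, 0]               # 1 = the predicted candidate won

print(brier_score(model_a, results))  # the sharper model scores lower (better)
print(brier_score(model_b, results))
```

Because the score is a single number computed from public predictions and public outcomes, anyone can recompute it, compare models, and check whether a tweak actually helped. A gut feeling produces no such number.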
The pundits don't want this to happen. They make a killing saying things that can't be checked. They don't have to update their methods. Accountability is anathema to pundits. One's "gut" is not amenable to validation.
We can see this in baseball. Everyone who cares about baseball knows that the "Moneyball," or sabermetrics, approach is more effective than traditional methods of evaluating talent, which in turn are more effective than the random citing of statistics by play-by-play analysts.
Nate Silver’s forecasting was not the only coherent system for analyzing the election, nor was it the most accurate. The publicity afforded by his association with The New York Times made his predictions the test case for legitimate math and reason. Math won. It always wins.
*Paul Raeburn made essentially the same point first, here.