Only partly to toot our own horn, our Polltracker final predictions were nearly identical to his. He got Florida, which we barely missed. But we got a couple of Senate races right that he missed. After the election the Obama campaign did a post-mortem on their own secret, internal poll analysis and predictions, and they found that they beat all the polling aggregation and prediction services. They gave the results to BusinessWeek, which you can see here.
The average error of each was as follows ...
Now, it's a point of no little pride that we came in a very, very close second to Nate and were clearly more accurate than our two big competitors in the space (this was not an accident - a point I'll return to). But again, in the big picture, the various poll aggregating systems ended up with pretty similar results. As Matt Yglesias notes here, Nate's 'secret sauce' is pretty much just what you get when you methodically watch the polls and turn off the volume on the pundit chatter.
I'll give you an example. Back in 1998, it was treated as a foregone conclusion that Bill Clinton might have been able to hold on to his office, but his party was going to pay the price for his indiscretions in the November mid-term. Newt Gingrich was out with predictions that Republicans were going to pick up something like 40 more seats on top of their existing majority. Every pundit thought the Democrats were going to get crushed. When you pointed to polls that didn't really support this assumption, you got the pat answer that, well, sure the polls don't show it, but those evangelicals and Clinton haters are just so pissed by this that the polls are not picking up their 'enthusiasm' and 'intensity'. We know what happened. Democrats managed to pick up a few seats and, in part as a result, Newt Gingrich was toppled as Speaker.
As it happens, that was my first election cycle as a working political journalist. I kept saying the Democrats would do okay. And I was vindicated. But only because I was looking solely at the polls. And my ability to watch only the polls was probably heavily influenced by the fact that in this case I wanted them to be true.
But the whole experience left me thinking of polls through the prism of flight and instrument controls. Pilots, once they're certified to fly on instruments only, are taught to disregard all their sensory impressions and focus solely on their instrument panel. In bad weather or other low-visibility conditions your senses give you too many false impressions. They frequently tell you the exact opposite of what is true, sometimes with fatal results. A lot of punditry is analogous to all the wrong information a pilot's senses are giving her in bad weather or some other arduous situation.
But let me come back to Silver. I don't mean to diminish his feat. I just think people are focused on the wrong part. The fact that Silver's numbers were so good at the very end is not that big a thing. Others came up with pretty much the same stuff. But as Silver would say himself, as his models converge on election day they give greater and greater weight to the actual polls and less and less to economic data, historical data and whatever else he figures into his system. So the fact that his model pretty much called it on election day isn't that big a thing to me; the fact that he pretty much called it six or nine months before, based on a system factoring in a lot besides polls, is a much bigger one.