Those Internal Polls

Mark Blumenthal’s got a very interesting piece up at Huffpo talking specifically about how off the polls were in Nevada, but also making a broader point about the quality of public polling. Remember, the Nevada race was heavily polled. And the results showed the two candidates tied, then Angle opening a consistent lead of 2 or 3 percentage points. Reid didn’t just win. He won by 5 points. Not even close, really.

Most people have assumed that Reid just had such an amazing field operation and shoe-leathered it out, managing to dramatically outperform the polls. In other words, somehow Reid’s folks just managed to grab a bunch of non-likely voters, got them out of bed and hauled them to the polls. And to be clear, I’d include myself among the folks thinking this way.

But apparently that’s not what happened.

During a campaign we tend to think of released internal polls as crap and treat the public polls as the real thing. But of course, it’s only the released internals that deserve the skepticism. The internal polling itself is probably quite good, because no one has a bigger interest in accuracy than the campaign itself. And the campaign’s pollsters get to work with data provided by the campaign. So they should know much more about the contours of the race than some national pollster who drops into a state every few weeks while conducting polls in all sorts of other places.

In any case, here’s the story. Neither side’s internal polls in Nevada were showing what the public polls were showing. And they weren’t far off: neck and neck but with at least some advantage to Reid. Both sides’ internal polling apparently showed results that were pretty consistent with the final result.

So why the difference? The difference seems to be much more aggressive modeling, call-back persistence, cell phone calling and a lot of other stuff that’s really just old-school phone-survey due diligence. Who does or doesn’t answer the phone obviously throws a huge wild card into the data. And the only way to adjust for that, or at least the key way, is to have really good demographic models of the electorate. That allows you to ‘control’, at least to a degree, for variable response rates as well as just random noise.

During the campaign you’ll frequently see people dig into the internals of a poll and come up with something like, “Wow, this Nevada Senate race has Hispanic voters splitting 50/50 between Reid and Angle. No way that’s true. There’s something wrong with that poll.”

Often in cases like that you’re talking about a relatively small poll, say 500 voters. And the demographic group is a much smaller subgroup. So you’re getting down to a sample size where the margin of error is really, really big. You can’t necessarily read that much into it. In other cases, we figure that with so much polling these variations sort of come out in the wash. And if all the polls show the same big picture, you can’t ignore that.
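To put a rough number on that: the margin of error for a proportion scales with one over the square root of the sample size, so a subgroup a fifth the size of the full poll has a margin more than twice as wide. Here’s a quick back-of-the-envelope sketch, using a hypothetical 500-voter poll and a 100-respondent Hispanic subsample:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a simple random sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# Full poll vs. a demographic subsample (both sizes are hypothetical):
print(f"n=500 overall:  +/-{margin_of_error(500):.1%}")  # ~ +/-4.4 points
print(f"n=100 subgroup: +/-{margin_of_error(100):.1%}")  # ~ +/-9.8 points
```

At roughly plus or minus 10 points, a 50/50 subgroup split is statistically consistent with anything from 40/60 to 60/40. That’s why a weird-looking crosstab, on its own, doesn’t tell you much.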

But in the case of Nevada, at least, it looks like the public pollsters really were missing younger voters, Hispanic voters and, for a variety of reasons, voters in general who were much harder to reach. And those voters heavily favored Harry Reid. In theory, a pollster can still overcome some of that sample bias by controlling and ‘weighting’ for the different underlying demographic groups. But it looks like a lot of the public pollsters just didn’t do that as consistently or as aggressively as they should have.
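To make the weighting point concrete, here’s a minimal post-stratification sketch. Every number in it is invented for illustration, not actual Nevada data; the point is just that the same raw interviews produce a different topline depending on what share of the electorate you assume each group makes up:

```python
# Hypothetical post-stratification sketch; all figures are invented
# for illustration, not actual Nevada polling data.
raw_share    = {"hispanic": 0.08, "under_30": 0.06, "other": 0.86}  # share of respondents reached
model_share  = {"hispanic": 0.15, "under_30": 0.12, "other": 0.73}  # modeled share of the electorate
reid_support = {"hispanic": 0.68, "under_30": 0.60, "other": 0.47}  # Reid support within each group

# Topline if you take the raw sample at face value:
unweighted = sum(raw_share[g] * reid_support[g] for g in raw_share)
# Topline after weighting each group up or down to its modeled share:
weighted = sum(model_share[g] * reid_support[g] for g in model_share)

print(f"unweighted: {unweighted:.1%}")  # ~49.5%, looks like a toss-up
print(f"weighted:   {weighted:.1%}")    # ~51.7%, a clear Reid edge
```

The interviews don’t change; only the assumed shape of the electorate does. And that alone can move the topline a couple of points, which is roughly the gap between ‘tied’ and the race the internals were apparently seeing.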

I haven’t had a chance yet to look systematically at the race results against the polling data across the country. There were at least a few cases of polls significantly underestimating Democratic strength. The Pennsylvania Senate race is the one that comes to mind for me.

If the Reid result had been replicated across the country, it would be a devastating verdict on the state of public polling today. But it wasn’t. By and large, looking across the entire country, the verdict has to be that the public polls got it pretty much right: they underestimated Democratic strength a bit on the Senate side and overestimated it a bit on the House side. Overall, what the polls told us to expect is what we got. But perhaps the Nevada result points to growing problems cropping up on the margins, with all the different forces conspiring against accurate public opinion data: pervasive cell phone use, discount robopolling and all the rest of it.
