As you’ve no doubt noticed, the big Clinton leads of early and mid-August have faded to a significantly tighter race at the beginning of September. That point was (for Clinton supporters) unpleasantly punctuated yesterday morning by a CNN/ORC poll which showed Trump leading Clinton by 1 point in a direct head-to-head match-up. There were actually four polls yesterday: Trump +1, Clinton +2, Clinton +3 and Clinton +6. We don’t know yet whether the CNN poll is an outlier or the trend. (Another poll out this morning has Clinton +2.) But there’s another point I want to flag about the instability of the polls, specifically the big multi-state surveys from Ipsos/Reuters and WaPo/SurveyMonkey.
Quite apart from the tightening of the race, our ability to make sense of it is being pretty seriously affected by a fissure over polling methodology itself. I’ve mentioned a few times that in recent weeks there’s been a pretty marked difference between prestige national phone/cell phone polls and the increasingly ubiquitous online polls. Until the CNN/ORC poll, the former had shown a fairly stable Clinton lead whereas the online polls showed a dramatically closer race. But there’s been an additional development in the last couple of weeks. Both Ipsos/Reuters and WaPo/SurveyMonkey have released 50-state polls giving a measure of the horse race in most or all states. These appear to be fairly similar in methodology to many online polls: non-probability/opt-in samples, collected over a long period of time, which are then weighted to produce samples that match the demography of a given state.
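To make the mechanics concrete, here’s a minimal sketch of the kind of demographic weighting these surveys describe. The numbers, categories, and the single weighting variable are all invented for illustration; real pollsters rake across many variables at once (age, race, education, region, and so on).

```python
# Illustrative only: bare-bones demographic weighting of an opt-in sample.
# A respondent's weight = (target population share of their group) /
# (that group's share of the raw sample).
from collections import Counter

# Hypothetical census-style targets for one state: shares of adults
# by education. (Invented numbers.)
targets = {"college": 0.30, "no_college": 0.70}

# Hypothetical opt-in sample: online panels tend to over-recruit
# college graduates, so the raw sample is skewed the other way.
respondents = ["college"] * 60 + ["no_college"] * 40

counts = Counter(respondents)
n = len(respondents)

# Weight for each group so the weighted sample matches the targets.
weights = {group: targets[group] / (counts[group] / n) for group in targets}

print(weights)
# The under-sampled group ends up with a weight well above 1, so each of
# its respondents counts for extra; heavily up-weighted small groups are
# one classic way such polls become noisy and erratic.
```

The point of the sketch is the failure mode: if the opt-in panel badly under-recruits some group, the few respondents it does reach get large weights, and their idiosyncrasies swing the topline.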
In principle, this sounds like a reasonable approach, and it’s similar to many online polls which by and large have had a pretty good record in recent cycles. (I’ve tended to be a big supporter of online polls.) But a lot of these results seem quite simply off. Take Michigan. Ipsos/Reuters says Trump and Clinton are tied at 42%. But during the same period Emerson, which released a very Trump-leaning set of polls based on landlines only, gave Clinton a 5-point lead. That may just be standard variance between any two polls; 5 points isn’t that different. But the Ipsos state polls all seem more Trump-leaning than most phone polls, indeed than virtually any other polls conducted during the same period of time.
The weirdness is even more extreme with the WaPo/SurveyMonkey sample. This poll shows Clinton leading by 2 points in Michigan and by 1 point in Texas. You could speculate that Michigan has a large post-industrial white working class where Trump is surprisingly strong, and that Texas has a large Hispanic community which is making Hillary competitive. Maybe. But it’s really hard to credit either of those numbers, especially a competitive Texas, based on any history or recent poll numbers. As we noted with Ipsos/Reuters, there are other polls showing Michigan close. So on its own that’s plausible. But Texas too? I doubt it. What’s more, is it really possible that Texas is essentially tied while Trump is ahead or tied nationally? That’s pretty hard to believe.
We don’t have many polls of Texas this cycle since it’s such a red state. But even during Clinton’s biggest leads in early August traditional polls showed Trump with a healthy lead in the state.
Consider some other WaPo/SurveyMonkey numbers. Trump is only ahead by two in Mississippi. But he’s ahead by 21 points in Alabama.
It’s extremely difficult to believe that Mississippi number is right, and equally hard, for me at least, to believe that the politically and demographically similar state of Alabama next door is that different. If you look at the particular results in all the states, you see lots of numbers that seem a bit odd but definitely possible. But I’m focusing on ones that frankly seem implausible enough to make me doubt the methodology itself. Are Texas, Mississippi, Wisconsin, Michigan and Colorado all essentially tied? That simply doesn’t sound right. It’s a little incredible in both directions. But it seems especially off in Clinton’s favor.
Just so we’re clear: this isn’t to say the polls are unfair to Clinton. Looking at these two methodologically similar 50-state surveys together, they seem pretty off in both directions. Not ‘biased’ but erratic, with a significant number of results that just don’t seem plausible, which is what you’d expect from a methodology that was off in some basic way. Just eyeballing it, they also seem to show a surprisingly large number of close calls or near ties. Why this is, I don’t know. Again, online polls in the last couple of cycles have racked up a pretty solid record. My best guess is that these massive, number-trawling polling efforts have methodological kinks, in how they weight their samples, that just haven’t been worked out yet. I’m not experienced enough with the methodology to say why, but too many of the results just don’t add up, not by history or by contemporaneous or near-contemporaneous polls.