The "approval" dataset is based on polls from numerous pollsters, each of which asks some version of "Do you approve or disapprove of how Candidate X is doing his or her job?" The wording can be slightly different, but we only include polls that match that general format. In some cases, one pollster will ask one standard "approval" question and another that is similar but different. In that case, we track the more standard of the two. The key is consistency over time with each pollster and consistency of questions among different pollsters.
Pew has a standard approval question they ask every month. The wording is ...
Q. Do you approve or disapprove of the way Barack Obama is handling his job as president?
Pew has also recently begun conducting a separate poll for National Journal, which asks this question ...
Now I'd like to ask your impression of some groups and individuals. Would you say [INSERT ITEM; RANDOMIZE] is doing an excellent, good, only fair, or poor job?
We've included the first in our approval dataset for years. The latter we decided -- for the reasons I note above -- not to include when it was first published in May. Pollster.com and RealClearPolitics, the two other operations that keep an Obama Approval average, also do not.
The nickel version of the story is that it's comparing apples and oranges. And the whole point of the exercise is getting an apples-to-apples comparison.
Now, because the question is worded differently, it yields a dramatically different answer. Pew's last approval number came out on June 24th and showed 48% of Americans approving and 43% disapproving, a +5 spread. The other question had 40% saying Obama was doing an "excellent" or "good" job and 56% saying he was doing an "only fair" or "poor" job. If you jam that into the approve/disapprove dichotomy, it's a -16 spread.
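For the sake of illustration, the spread arithmetic is just approve minus disapprove, with the four-point "excellent/good/only fair/poor" scale collapsed into two buckets first. A minimal sketch (the function name and structure are my own; the percentages are the ones quoted above):

```python
def spread(approve_pct, disapprove_pct):
    """Approval spread: percent approving minus percent disapproving."""
    return approve_pct - disapprove_pct

# Pew's standard approval question (June 24 release):
# 48% approve, 43% disapprove
pew_spread = spread(48, 43)  # +5

# National Journal question, collapsed into the same dichotomy:
# "excellent" + "good" = 40%, "only fair" + "poor" = 56%
nj_spread = spread(40, 56)  # -16

print(pew_spread, nj_spread)
```

The same respondents, asked in two different formats, land 21 points apart -- which is why mixing the two question types in one average muddies the comparison.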
In other words, big difference.
So how did it get in? Simple. It was a mistake. And as soon as we found it -- about an hour or so later -- we removed it.