Here is the final version of the trends and house effects model that I have been estimating over the course of the past month. It shows a one-time drop of about 3.2 percentage points in support for the President after the first debate in Denver. That compares to the model’s estimate that Mr. Obama would have had a 3.7-point lead in the polls tomorrow had the President continued along the same path he was following prior to the debate. With the debate effect included, the model puts the President’s national lead among likely voters tomorrow at a mere 0.5 points.
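For readers who want a concrete picture of what a one-time level shift looks like in a model of this kind, here is a deliberately simplified sketch in Python. It is not the model estimated here (that one is laid out in the technical post): it treats the trend as linear and omits house effects, and the file and column names (`polls_2012.csv`, `date`, `obama_margin`) are hypothetical stand-ins for the actual data layout.

```python
# Simplified stand-in for the trend model: a linear time trend plus a one-time
# level shift after the October 3rd Denver debate. File and column names are
# hypothetical; the real model estimates a smooth trend and house effects jointly.
import pandas as pd
import statsmodels.formula.api as smf

polls = pd.read_csv("polls_2012.csv", parse_dates=["date"])          # hypothetical data file
polls["days"] = (polls["date"] - polls["date"].min()).dt.days        # time trend in days
polls["post_debate"] = (polls["date"] >= "2012-10-03").astype(int)   # 1 for polls after the Denver debate

fit = smf.ols("obama_margin ~ days + post_debate", data=polls).fit()
print(fit.params["post_debate"])   # the one-time debate shift (about -3.2 points in the post)
```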
Notice, though, that the most recent polls appear slightly more favorable to the President. The model suggests he may have doubled that margin after Hurricane Sandy hit the Northeast: polls whose fieldwork began on or after October 31st appear to give the President another half point on average, though the estimate falls short of conventional levels of statistical significance.
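In the terms of the simplified sketch above, testing for a Sandy effect amounts to adding an indicator for polls whose fieldwork began on or after October 31st and checking whether its coefficient clears a conventional significance threshold. The `field_start` column is an assumption about how the fieldwork dates are stored; the block continues from the first sketch, reusing `polls`, `pd`, and `smf`.

```python
# Continuing the simplified sketch above (reusing `polls`, `pd`, and `smf`):
# an indicator for fieldwork beginning on or after October 31st probes a post-Sandy shift.
polls["field_start"] = pd.to_datetime(polls["field_start"])               # assumed column name
polls["post_sandy"] = (polls["field_start"] >= "2012-10-31").astype(int)

fit_sandy = smf.ols("obama_margin ~ days + post_debate + post_sandy", data=polls).fit()
print(fit_sandy.params["post_sandy"], fit_sandy.pvalues["post_sandy"])    # ~ +0.5, p above 0.05
```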
We now have sufficient data to obtain reasonably stable estimates of partisan house effects. For this report, I have applied more stringent criteria, described in the technical post, for identifying these effects. Those criteria leave four pollsters whose estimates clearly favored one party or the other relative to the consensus: Gallup, Rasmussen, and ARG all leaned Republican, while DemocracyCorps tilted Democratic.
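In the simplified framework sketched earlier, house effects can be thought of as pollster-level offsets from the common trend, coded to sum to zero so that each coefficient measures a firm's lean relative to the consensus. A rough version, again reusing the hypothetical `polls` data frame and assuming a `pollster` column:

```python
# Rough sketch of house effects (reusing `polls` and `smf` from the first sketch):
# sum-to-zero contrasts make each pollster's coefficient a deviation from the consensus.
fit_house = smf.ols("obama_margin ~ days + post_debate + C(pollster, Sum)", data=polls).fit()

house_effects = fit_house.params.filter(like="pollster")
print(house_effects.sort_values())   # Republican-leaning houses come out negative, Democratic-leaning positive
```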
I invested more effort than in earlier installments in determining whether any of the three common interviewing methods (live-interviewer telephone, automated telephone, and Internet) advantaged one party or the other. Remember that pollsters who rely on automated systems cannot call cell phones, so the difference between the two telephone methods should cast some light on concerns about excluding cell-phone users from polls.
At first glance, if only the polling methods are included and no house effects are estimated, automated methods appear to lean Republican by somewhat more than two percentage points. However, one of the pollsters relying on automated methods is Rasmussen. Once I estimate a separate house effect for that firm, I find no statistically significant effect for automated methods in either direction.
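The same check can be mimicked in the simplified setup by fitting the model with a survey-mode term alone and then again with a separate Rasmussen indicator, and comparing the automated-mode coefficient across the two fits. The `mode` column and the "Rasmussen" label are assumptions about how the data are coded; the block continues from the earlier sketch.

```python
# Mode effects with and without a separate Rasmussen house effect
# (reusing `polls` and `smf` from the first sketch; column values are assumed).
polls["rasmussen"] = (polls["pollster"] == "Rasmussen").astype(int)

mode_only  = smf.ols("obama_margin ~ days + post_debate + C(mode)", data=polls).fit()
mode_house = smf.ols("obama_margin ~ days + post_debate + C(mode) + rasmussen", data=polls).fit()

# The automated-mode coefficient shrinks toward zero once Rasmussen has its own house effect.
print(mode_only.params.filter(like="mode"))
print(mode_house.params.filter(like="mode"))
```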
With the exception of one poll by Gravis Marketing, all the other automated polls were conducted by Public Policy Polling for one of two clients. The finding of no significant deviation for automated methods is thus really a tribute to PPP, which appears to have developed methods that produce unbiased results even though the firm relies on calling landlines.
By implication, there is no evidence that calling only landlines necessarily under-represents the Democratic-leaning younger and minority voters who rely on cell phones more than older whites do. People who rely on cell phones appear to hold political opinions similar to those of their demographic peers who rely on landlines, and appropriate demographic weighting appears sufficient to compensate for the exclusion of cell-phone users.
Internet polling is a different matter. Three organizations conduct Internet polls: Ipsos/Reuters, JZAnalytics, and YouGov/Economist. I took a route similar to my analysis of automated methods, including or excluding these pollsters one at a time to see whether any single firm accounted for the Internet effect. Unlike the Rasmussen case, estimating unique house effects for these pollsters revealed little difference among them: all three Internet firms show an average pro-Democratic tilt of about one percentage point.
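A leave-one-out version of that check, in the sketch's terms: drop each Internet firm in turn and re-estimate the Internet coefficient. The firm labels and the `mode` coding below are assumptions about the data, and the block again continues from the first sketch.

```python
# Leave-one-out check on the Internet tilt (reusing `polls` and `smf` from the first sketch).
polls["internet"] = polls["mode"].eq("internet").astype(int)          # mode label assumed

for firm in ["Ipsos/Reuters", "JZAnalytics", "YouGov/Economist"]:
    subset = polls[polls["pollster"] != firm]
    fit_net = smf.ols("obama_margin ~ days + post_debate + internet", data=subset).fit()
    print(firm, round(fit_net.params["internet"], 2))  # stays near +1 whichever firm is dropped
```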