Some Observations on Biden’s Margin in Presidential Polling

A simple trend model predicts Joe Biden will hold a lead in the polls of between 9.8 and 13 points on Election Day. Biden has increased his lead since the first of the year by a point every fifty days. Were Biden to win by the estimated eleven points, he would carry the Electoral College 390-148.

Over the weekend I downloaded the complete set of presidential general election polls archived by FiveThirtyEight. For this post I’m going to concentrate my attention on national polls matching up Donald Trump and Joe Biden in head-to-head mock general elections.

Anyone paying attention to contemporary politics knows that Biden has led Trump in recent polling, but the extent of Biden’s margin is impressive when all the polls are taken together. Here are the 265 polls pitting Biden against Trump conducted since January 1st, 2020:

The average lead is about 6.5 points, though the most common result has Biden leading by seven or eight points.

Let’s turn now to my standard model for polling data, which I have used dating back to 2008.  This simple model combines the number of days left before the election with various characteristics like the population sampled, the polling method used, and measures of individual “house effects.”

The most significant results from these regressions are the constant term and the coefficient on days before the election.  First, the constant term predicts the size of Biden’s polling lead on Election Day, when the days-before-the-election variable equals zero. If current trends continued until the election, Joe Biden would have an eleven-point edge in national polling.  The standard error for this estimate of the constant is 0.81, meaning the likely range of margins for Biden falls between 9.8 and 13 points.

The negative coefficient on days before the election means that, statistically, Joe Biden has been slowly gaining ground on Donald Trump since the first of the year. With a coefficient of just -0.02, however, it takes fifty days for Biden to gain a full point on Trump.  At that rate he would add just under three more points by Election Day.
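As a quick check on these two headline numbers (treating the constant as roughly 11.4, which rounds to the “eleven” quoted above, and using the conventional 1.96 standard errors for a 95-percent interval — both the 11.4 and the arithmetic are my reconstruction):

```python
# 95% interval for the Election Day margin: estimate +/- 1.96 standard errors.
constant, se = 11.4, 0.81        # assumed point estimate; SE as reported
low, high = constant - 1.96 * se, constant + 1.96 * se
print(f"{low:.1f} to {high:.1f}")   # roughly 9.8 to 13.0

# A trend coefficient of -0.02 points per day implies one point per 50 days.
days_per_point = 1 / 0.02
print(round(days_per_point))        # 50
```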

In model (1), live phone polls show a small bias in favor of Biden. Some might read this as evidence of “shy” Trump voters who are unwilling to tell live interviewers their true preference for Trump but have no trouble doing so in some form of automated polling. As it turns out, the effect for live interviews goes away once we account for individual pollsters’ “house effects.”  The same is true for the small pro-Trump effect seen in polls of registered voters; it too vanishes when we account for differences among pollsters.  All of the pollsters for which I find significant effects report figures more favorable to Trump than the consensus of all pollsters.

It’s hard to overstate how big an eleven-point lead would be. The implied two-party vote division of 55.5-45.5 percent would be the largest Democratic victory since Lyndon Johnson’s landslide over Barry Goldwater in 1964.  Given the historical relationship between the popular and Electoral College votes, a 55.5 percent share of the popular vote translates to about 72 percent of the Electoral College, a margin of 390-148 electoral votes.
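The Electoral College arithmetic can be sketched directly; the 72.5 percent figure below is simply what a 390-148 split works out to (the text rounds it to 72 percent), not the full historical popular-to-electoral-vote model:

```python
# Convert an Electoral College share into a split of the 538 electors.
TOTAL_EV = 538
dem_share = 0.725                  # implied by the 390-148 margin quoted above
dem_ev = round(dem_share * TOTAL_EV)
print(dem_ev, TOTAL_EV - dem_ev)   # 390 148
```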

How Well Do Generic-Ballot Polls Predict House Elections?

I have compiled the results of generic-ballot polls taken near an election and compared them to the actual division of the Congressional vote.  The table below presents the margin between support for the President’s party and support for the opposition.  For each election I have used about half a dozen polls from different agencies taken just before voting day.  Averaging the differences between these two quantities shows that the polls have fared rather well since 2002.  The average deviation in the four midterm elections is 0.6 percent; in Presidential years it falls to 0.1 percent.

Still, these averages hide some fairly wide fluctuations.  In four of the eight elections the difference between the polls and the election results exceeds two percent.  The error was especially egregious in 2006, when the polls predicted nearly a fourteen-point Democratic margin compared to the 8.6 percent margin in the election itself.
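As a sketch of the underlying computation (the helper and its name are mine; only the 2006 figures appear in the text, with margins signed as the President’s party minus the opposition):

```python
# Signed deviation of the final polling margin from the actual vote margin,
# averaged across elections.
def mean_signed_deviation(poll_margins, actual_margins):
    diffs = [p - a for p, a in zip(poll_margins, actual_margins)]
    return sum(diffs) / len(diffs)

# 2006: polls showed roughly a 14-point Democratic margin vs. 8.6 in the vote
# (negative here because the President's party was the GOP).
print(mean_signed_deviation([-14.0], [-8.6]))   # about -5.4
```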

In the most recent election, 2016, the polls predicted a slight swing in the Democrats’ favor, but the outcome went slightly in the opposite direction.  All the cases where the polls erred in picking the winner occurred in Presidential years, usually when the polling margin was close.  The polls taken in the four midterm years all predicted the correct winner, though the size of the victory was off by more than three points in two of those elections.


The “Generic” Congressional Ballot Question

Democrats would lead by nearly fourteen points in “generic” Congressional ballot polls next November if the trends seen since Trump took office continue.

I have written earlier about how methodological differences among pollsters can lead to significantly different results.  In my analyses of Presidential approval I showed how Donald Trump’s approval ratings varied depending on the choice of sample to interview and the interviewing method chosen.  In this piece I apply the same approach to the so-called “generic” ballot question, typically “If the elections for Congress were being held today, which party’s candidate would you vote for in your Congressional district?”  Some pollsters name the Democrats and Republicans in this question; others leave it more open-ended, like the example I just gave.

I have focused on the net difference in support for generic Democratic and Republican candidates.  This ranges from a value of -4 (Republican support being four points greater than Democratic support) to a high of +18 in the Democrats’ direction.  Here is a simple time plot showing how support for the Democrats on this question has grown while Trump has held office.

The Democrats held a small lead of just under four points on the day Trump took office.  Since then the Democrats’ lead has slowly increased to an average of eight points.

What’s surprising about these data is that they do not show the usual methodological differences we see in the presidential series.  Here are a few regression experiments using my standard array of predictors.

Choice of polling method has no systematic relationship with Democratic support on the generic-ballot question. In contrast, Trump’s job-approval ratings run one to two points higher in polls taken over the Internet.  Another striking difference is the greater level of support found for Democrats in polls of registered or “likely” voters.  Again, the job-approval polls show the opposite effect, with polls of voters displaying greater levels of support for Trump than polls that include all adults.  I have also included separate effect measures for the two most common pollsters in this sample, Politico/Morning Consult and YouGov/Economist.  Job-approval polls taken by the former show a pro-Trump “bias” of about three percent; on the generic ballot their polls place Republican support about five points higher than other polls.  YouGov/Economist polls also have a Republican tilt on this question, though they show a slight anti-Trump bias in job-approval polls.

If we extrapolate these results to the fall election on November 6th (655 days after the Inauguration) and include the effect for registered voters, the model predicts the Democrats’ lead in generic-ballot polls would reach nearly fourteen percent (= 4.07 + 2.62 + 0.011 × 655).  A margin that large would easily overwhelm the built-in advantage Republicans hold based on partisan self-selection and gerrymandering.  Even if the Politico figure is correct, adding in that pro-Republican factor still leaves the Democrats about nine points ahead on election day, a Democratic/Republican split of roughly 54-45.  That 54 percent figure exceeds the 53 percent minimum I estimated earlier would result in Democratic control of the House of Representatives.
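The parenthetical formula unpacks as follows; the variable names are mine, and the five-point Politico adjustment is the house effect quoted above:

```python
# Extrapolating the generic-ballot trend to November 6, 2018.
constant  = 4.07     # Democratic lead at the Inauguration
rv_effect = 2.62     # registered-voter universe effect
trend     = 0.011    # Democratic gain per day
days      = 655      # Inauguration Day through November 6th

lead = constant + rv_effect + trend * days
print(round(lead, 1))        # 13.9 -- "nearly fourteen percent"

politico_house_effect = 5.0  # pro-Republican tilt of Politico/Morning Consult
print(round(lead - politico_house_effect, 1))   # 8.9 -- about nine points
```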

Using the model for the relationship between seat and vote divisions presented earlier, a 57 percent share of the national Congressional vote would translate into the Democrats winning 55 percent of House seats, a margin of 239-196.
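In seats, that works out to (a sketch of the arithmetic only, not the seats-votes model itself):

```python
# 55 percent of the House's 435 seats.
HOUSE_SEATS = 435
dem_seats = round(0.55 * HOUSE_SEATS)
print(dem_seats, HOUSE_SEATS - dem_seats)   # 239 196
```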

Some Observations on the Polling in Alabama

Alabama’s citizens will head to the polls to vote in the most competitive Senate election that state has seen for decades.  Democrat Doug Jones is trying to wrest the state into his party’s column while Republicans rally behind Roy Moore.  Most anyone reading this blog knows about the issues in this race, so I’m going to focus solely on the polls as archived at RealClearPolitics.  Since RCP does not publish data on polling methods, I examined the individual reports for each poll and, when the method used was unclear, contacted the polling agencies directly.

Polling results in this race have shown little convergence as we approach election day.  Republican dominance of Alabama’s elections has meant that few national polling agencies have paid much attention to the state in recent elections, so few national organizations have much experience surveying Alabama’s voters.  That has changed a bit as the election achieved national prominence, but the vast majority of Alabama polls still come from organizations with limited track records.  Over at FiveThirtyEight, Harry Enten observes that “Alabama polls have been volatile, this is an off-cycle special election with difficult-to-predict turnout, and there haven’t been many top-quality pollsters surveying the Alabama race.”

“Top-quality” pollsters rely on live interviewers making calls to both landline phones and cell phones.  FiveThirtyEight adds the additional criterion that the polling agency be involved with national organizations like the American Association for Public Opinion Research.  I will limit my analysis to just whether calls were made to a sample of cell phone owners.  As it turns out, this factor alone has a profound effect on a poll’s estimated margin between the candidates.

Here is a list of the available polls based on their method of interview.

Only one poll among those that included interviews with respondents via cell phone shows a lead for Moore; in contrast, only one of the landline-only polls puts Jones ahead.  The “swing” is quite substantial, about an eight-point differential based on the method used.

This difference arises from the much higher usage of cell phones among younger respondents, who prefer Jones in most polls.  For instance, in today’s poll from Fox News, likely voters under 45 years of age preferred Jones 59% to 28%, while voters above that age preferred Jones by only a one-point margin, 45% to 44%.

I also modeled the difference in support between Jones and Moore using my standard predictors: time left before the election and dummy variables for polling methods.  I added a further dummy, coded one beginning on November 9th, when the Washington Post published the story about Moore’s alleged molestation of young girls.  The variable measuring proximity to the election proved statistically insignificant, leaving just three dummies: whether the pollster made calls to cell phones, whether live interviewers were used, and whether the Post story had broken.

In polls taken before the publication of the molestation story, Jones trailed Moore by an average of eleven points.  Since then Jones has seen an average gain of six points, not enough on its own to return the race to even.  However, polls that interviewed respondents via cell phones show a further difference of nearly seven points in Jones’s favor.  Taken together, these results suggest that Jones has averaged a 1-2 point lead in polls taken since the Washington Post story that included calls to cell-phone users. (Update: Jones’s margin of victory over Moore was 1.5 percent statewide, right in line with this prediction.)
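Stacking the three estimated effects reproduces that bottom line; the values below are the approximate ones quoted above:

```python
# Jones's predicted margin in post-story polls that called cell phones.
pre_story_margin = -11.0   # Jones trailing Moore before November 9th
post_story_gain  =   6.0   # average shift after the Post story broke
cell_effect      =   7.0   # additional pro-Jones difference in cell-phone polls

jones_margin = pre_story_margin + post_story_gain + cell_effect
print(jones_margin)        # 2.0, consistent with the 1-2 point lead cited
```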

I also included a term for whether live interviewers were used.  Since all polls that include cell phone owners must use live interviewers by law, this remaining group represents organizations that polled only landline owners with human interviewers.  I find a small, though statistically insignificant (p<0.17), positive effect on Jones’s support among people surveyed by live interviewers.  It is hard to interpret what this effect might signify.  It could represent an unwillingness among Moore’s supporters to admit their intentions to a human interviewer when they feel no such hesitation with a robot.  If so, we might attribute some of the difference between cell-phone and landline results to the use of human interviewers in polls that include cell users.


Who Leads in the Swing States?

As in every Presidential election, the outcome will be determined by a very small number of states. As I did in 2012, I have compiled the polls in these “swing” states and counted up the number of times Hillary Clinton or Donald Trump was in the lead.  I have included every poll conducted so far that includes both candidates; the oldest poll was taken in late June of 2015.    I intend to update these results limiting them to only recent polls as the election nears.


Two states – Michigan and Pennsylvania – have supported Hillary Clinton consistently enough that there is just a small chance, less than one in twenty, the race is actually tied or she is behind Donald Trump in those states.  The other four states remain toss-ups.


Pennsylvania tempts Republicans to compete there every election cycle, and this one is no exception.  Still, the state has trended Democratic in Presidential elections since the late 1960s.


Race to the Bottom

As most everyone who follows politics knows by now, we enter the unprecedented 2016 Presidential election with the candidates of both major parties disliked by a majority of Americans.  In this posting I examine the trends in “favorability” for both Hillary Clinton and Donald Trump.

Using the data at Huffington Post Pollster I calculated the “net favorability” for each candidate, equal to the percent of respondents who view a candidate favorably minus the percent who view that candidate unfavorably. I begin with Hillary Clinton, for whom we have favorability data dating back to 2009.



It might be hard to imagine today, but during her tenure as Secretary of State in Barack Obama’s first term, Hillary Clinton was viewed quite positively by the American public. Between the fall of 2009 and the fall of 2012, about three in five Americans surveyed reported that they viewed Secretary Clinton favorably.  Even as late as April, 2013, Clinton was viewed favorably by 64 percent of the adults surveyed by Gallup, compared to 31 percent who viewed her unfavorably.  That translates into a net score of +33 (= 64 − 31) in the graph above. She would never attain that level of popularity again.

Opinions about Clinton did not fall right away after the attack on the U.S. Consulate in Benghazi, Libya, on September 11, 2012, but the downward trajectory began soon thereafter.  When she announced her candidacy for President on April 12, 2015, the proportion of Americans holding favorable and unfavorable views of Secretary Clinton were just about equal.  A few months later her favorability score was “underwater,” with the proportion of Americans holding unfavorable views outnumbering those with favorable ones by between ten and twenty percent.


Opinions about Donald Trump have also remained fairly constant, and consistently negative, since he announced his candidacy on June 16, 2015.   At no time since he began his campaign for President have more Americans reported feeling “favorable” toward Donald Trump than “unfavorable.”  His ratings improved somewhat after his announcement and through the summer of 2015, but when the primary campaign began in earnest in January of 2016, Trump’s favorability score fell further.  It has rebounded and levelled off since he became the presumptive nominee after winning the Indiana primary on May 3rd.  Donald Trump’s net favorability score averages about -24, though, compared to Hillary Clinton’s average net rating of -11.


If we now take the difference between these two net favorability scores, we can see whether both candidates are equally disliked, or whether one is disliked more than the other.  For most of the campaign so far, Hillary Clinton has been winning the contest over which of them is less disliked.  Her net favorability scores generally run 11-12 points less negative than Trump’s.  For instance, over the month of June, 2016, Clinton averaged 41 percent favorable versus 55 percent unfavorable, for a net favorability score of -14.  Trump’s scores were 35 percent favorable and 60 percent unfavorable, for a net score of -25, or eleven points worse than Clinton’s.


As you might expect, there is a strong correlation between this net favorability score and the proportion of respondents intending to vote for Clinton or Trump.  Net favorability alone explains about two-thirds of the variance in voting intention across the 113 polls where both questions were asked.  Given the relationship shown in the graph, a score of +11 in net favorability should yield about a five percent lead in voting intention.


One interesting finding from the regression results is that the constant term of 1.06 percent is significantly different from zero.  (It has a standard error of 0.38 with p<0.01.)  The constant predicts Clinton’s lead when the net-favorability difference is zero, that is, in a poll where the two candidates’ net favorability ratings are identical.  When that difference is zero, Clinton leads Trump on average by a bit over one percent.
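A sketch of the fitted line: the 1.06 intercept is reported above, while the 0.36 slope is my back-calculation from the claim that a +11 net-favorability edge yields about a five-point lead ((5 − 1.06) / 11 ≈ 0.36):

```python
# Predicted Clinton lead as a linear function of the net-favorability difference.
INTERCEPT = 1.06   # reported constant
SLOPE = 0.36       # assumed; inferred from the text, not a reported estimate

def predicted_lead(net_fav_diff):
    return INTERCEPT + SLOPE * net_fav_diff

print(round(predicted_lead(11), 1))   # about 5.0
print(predicted_lead(0))              # 1.06, the constant alone
```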

Iowa: So Many Polls. So Few Respondents.

Pollsters have conducted over 44,000 interviews among Iowa’s 160,000 Republicans, but they probably interviewed just 15,000 unique people.  A majority of those polled took part in at least three interviews over the course of the campaign.

It seems like new Presidential polling figures are released every day.  We generally talk about each new poll as a unique snapshot of the campaign with some fresh sample of a few hundred caucus-goers.  That concept of polls might apply to national samples, but when polling in states as small as Iowa and New Hampshire, the number of eligible respondents is not that large compared to the number of interviews being conducted.

Making the problem worse is the falling “response rate” in polling, the proportion of eligible respondents who complete an interview.  Mobile phones, caller ID, and answering machines have all enabled potential respondents to avoid the pollster’s call.  Pew reports that response rates fell from 21 percent to just 9 percent between 2006 and 2012.  If we assume a response rate of ten percent, only some 16,000 of Iowa’s 160,000 eligible Republican caucus-goers might agree to take part in a poll.

Huffington Post Pollster lists a total of 94 polls of Republican caucus-goers through today, January 31, 2016, constituting a total of 44,433 interviews.  As I have done before for New Hampshire, I will use this figure to see how the composition of the sample changes with different response rates.1

How large is the electorate being sampled?

Around 120,000 people participated in the Republican caucuses in 2008 and 2012.  While some observers think turnout in 2016 could be higher because of the influx of voters for Donald Trump, I have stuck with the historical trend and estimate Republican turnout in 2016 at just under 124,000 citizens.

To that baseline we have to add in people who agree to complete an interview but do not actually turn out for the caucuses.  In my 2012 analysis I added 20 percent to the estimated universe to account for these people, but recent findings from Pew suggest 30 percent inflation might be more accurate.  With rounding, I will thus use 160,000 as my estimate for the number of Iowa Republicans who might have been eligible to be polled about the 2016 Republican caucuses.

How low is the response rate?

Most of those 160,000 people will never take part in a poll.  Pew estimated 2012 “response rates,” the proportion of eligible respondents who actually complete an interview, in the neighborhood of 9 percent.  To see what this means for Iowa, here is a table presenting the average number of interviews a cooperating respondent would have completed during the 2016 campaign at different response rates.  At the ten percent response rate Pew reports, the 16,000 cooperating prospects would each need to complete an average of 2.78 interviews to reach the total of 44,433.
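The table’s arithmetic can be reproduced directly (a sketch; the response-rate grid is mine):

```python
# Average interviews per cooperating respondent at several response rates,
# given 44,433 total interviews and 160,000 eligible caucus-goers.
TOTAL_INTERVIEWS = 44_433
ELIGIBLE = 160_000

for rate in (1.0, 0.5, 0.2, 0.1):
    pool = ELIGIBLE * rate            # people willing to take part at this rate
    print(f"{rate:4.0%}  {TOTAL_INTERVIEWS / pool:.2f} interviews each")
# At 10%, 44,433 / 16,000 comes to about 2.78 interviews per respondent.
```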


How many people gave how many interviews?

Finally, I’ll apply the Poisson distribution once again to estimate the number of people being interviewed once, twice, three times, etc., to see the shape of the samples at each response rate.


Even if everyone cooperates, random chance alone would result in about 13 percent of respondents being interviewed at least twice.  When the response rate falls to 10 percent, most respondents are being interviewed three or four times, with fifteen percent of them being interviewed five times or more.  Even with a 20 percent response rate, about double what Pew reports, a majority will have been interviewed at least twice.
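A minimal sketch of that Poisson calculation: assume each cooperating respondent’s interview count is Poisson with mean equal to total interviews divided by the cooperating pool, then condition on having been interviewed at least once.

```python
import math

T, ELIGIBLE = 44_433, 160_000   # interviews conducted; eligible caucus-goers

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def share_with_at_least(k, lam):
    """P(X >= k | X >= 1) for X ~ Poisson(lam)."""
    tail = 1 - sum(poisson_pmf(i, lam) for i in range(k))
    return tail / (1 - poisson_pmf(0, lam))

# Full cooperation: mean of 44,433 / 160,000 interviews per person.
print(share_with_at_least(2, T / ELIGIBLE))           # about 0.13
# Ten percent response rate: mean of 44,433 / 16,000.
print(share_with_at_least(5, T / (ELIGIBLE * 0.10)))  # about 0.16
```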

Certainly someone willing to be interviewed three, four, five times or more about a campaign must have a higher level of interest in politics than the typical Iowa caucus-goer who never cooperates with pollsters.  That difference could distort the figures for candidate preferences if some candidates’ supporters are more willing to take part in polls.

Basing polls on a relatively small number of cooperative respondents might also create false stability in the readings over time.  Polling numbers reflect more the opinions of the “insiders” with a strong interest in the campaign and may be less sensitive to any winds of change.  We might also imagine that, as the campaign winds down and most everyone eligible has been solicited by a pollster, samples become more and more limited to the most interested.

Overarching all these findings remains the sobering fact that only about one in ten citizens is willing to take part in polling.  Pollsters can adjust after the fact for any misalignment of the sample’s demographics, but they cannot adjust for the fact that people who participate in polling may simply not represent the opinions of most Americans.  We’ll see how well the opinions of those small numbers of respondents in Iowa and New Hampshire match the opinions of those states’ actual electorates on Primary Day.


1For comparison, Pollster archives 65 polls for the 2012 Iowa Republican caucuses totalling 36,300 interviews.  The expanded demand for polls has increased their number by 45 percent and increased the number of interviews conducted by 22 percent in just one Presidential cycle. (To afford polling at greater frequencies, the average sample size has fallen from 558 in 2012 to 473 in 2016.)


Technical Appendix: Comparing Trump and Sanders


The results above come from the 145 national Republican primary polls as archived by Huffington Post Pollster whose fieldwork was completed after June 30, 2015, and on or before January 6, 2016.  I started with July polling since the current frontrunner, Donald Trump, only announced his candidacy on June 16th. For Bernie Sanders I used the 155 national polls of Democrats starting after April 30th, the day Sanders made his announcement.

The models I am using are fundamentally similar to those I presented for the 2012 Presidential election polls and include these three factors:

  • a time trend variable measured as the number of days since June 30, 2015;
  • a set of “dummy” variables corresponding to the universe of people included in the sample — all adults, registered voters, and “likely” voters as determined by the polling organization using screening questions; and,
  • a set of dummy variables representing the method of polling used — “live” interviews conducted over the phone, automated interviews conducted over the phone, and Internet polling.

Trump’s support is best fit by a “fourth-order polynomial” with a quick uptick in the summer, a plateau in the fall, and a new surge starting around Thanksgiving that levelled off at the turn of the year. Support for Sanders follows a “quadratic” time trend.  His support has grown continuously over the campaign but at an ever slower rate.
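The specification can be sketched as a design matrix fit by ordinary least squares. The data below are randomly generated placeholders (not the Pollster archive), and the dummy set is abbreviated to one universe and one method indicator:

```python
import numpy as np

# Illustrative sketch of the model: polynomial time trend plus dummies, OLS fit.
rng = np.random.default_rng(0)
n = 145                                         # polls in the Trump series
days = rng.uniform(0, 190, n)                   # days since June 30, 2015
likely = rng.integers(0, 2, n).astype(float)    # 1 = likely-voter screen
internet = rng.integers(0, 2, n).astype(float)  # 1 = Internet poll

# Fourth-order polynomial in time (the best fit for Trump), plus the dummies.
X = np.column_stack([np.ones(n), days, days**2, days**3, days**4,
                     likely, internet])
y = 25 + 0.05 * days + rng.normal(0, 3, n)      # placeholder "percent support"

coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coefs.shape)                              # one estimate per column: (7,)
```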

Of more interest to students of polling are the effects by interviewing method and sampled universe.  Trump does over four percent worse in polls where interviews are conducted by a live human being.  Sanders does worse in polls that use automated telephone methods.  The result for Trump may reflect an unwillingness on the part of his supporters to admit to preferring the controversial mogul when talking with an actual human interviewer.

Sanders does not suffer from this problem, but polls relying on automated telephone methods show results about four percent lower than those conducted by human interviewers or over the Internet (the excluded category represented by the constant).  Since we know that Sanders draws more support from younger citizens, the result for automated polling may reflect their greater reliance on cell phones, which cannot by law be called by robots. This result contradicts other studies by organizations like Pew that find only limited differences between polls of cell-phone users and those of landline users. Nevertheless, when it comes to support for Bernie Sanders, polls that rely exclusively on landlines appear to underestimate his level of support.

Turning to the differences in sampling frames, we find that polls that screen for “likely” voters show greater levels of support for Bernie Sanders than do polls that include all registered voters or all adults.  Trump’s support shows no relationship with the universe of voters being surveyed.  Both candidates, as “insurgents,” are thought to suffer from the problem of recruiting new, inexperienced voters who might not actually show up at the polls for primaries and caucuses.  That seems not to be an issue for either man, and in Sanders’s case it appears that the enthusiasm we have seen among his supporters may well gain him a couple of percentage points when actual voting takes place.

Finally it is clear that Trump’s polling support shows much more variability around his trend line than does Sanders’s. The trend and polling methods variables account for about 59 percent of the variation in Trump’s figures, but fully 72 percent of the variation for Sanders.

A Tale of Two Candidacies

Trump fares better in polls conducted by robots; Sanders polls better when humans conduct the interview.  Sanders also shows greater strength in polls of “likely” voters.

Commentators often treat Donald Trump and Bernie Sanders as representing two “insurgencies” within the Republican and Democratic Parties.  While there are certainly some surface similarities between the two candidacies, national polling data for the two candidates show substantial differences as well.  I begin with two charts comparing the trends in their national polling support using data from Huffington Post Pollster since each candidate announced he was running for President of the United States.



While both men’s support has continued to grow over the course of the campaign, the trajectories are radically different.  Sanders’s polling figures have risen consistently, but the rate of increase has slowed as the campaign progressed.  Trump’s support seems to have gone through three phases: rapid growth at the outset, a plateau over the fall, and a second surge beginning around Thanksgiving that slowed at the turn of the year.  Extrapolating out to February 1st, when the Iowa Caucuses take place, Trump would approach just under forty-five percent in national polls, with Sanders reaching thirty-five percent.


I have examined two types of polling effects: the method of interviewing and the type of sample drawn. Trump does over four percent worse when interviews are conducted by a live human being.  Sanders does worse by an essentially identical margin in polls that use automated telephone methods.  The result for Trump may reflect an unwillingness on the part of his supporters to admit to preferring the mogul when talking with an actual human interviewer.

Sanders’s poorer showing in polls that rely on automated methods may have to do with their exclusion of cell phones, which cannot by law be called by robots. Usually this problem is adjusted for after the poll has been conducted by weighting the data to conform to expected demographic breakdowns.  However, Sanders’s large lead among younger voters, who are much less likely to have a landline phone, may be suppressing his support in automated polling.  In a recent Fox News poll, for instance, Sanders holds a 61-31 lead over Hillary Clinton among respondents under 45 years of age; older voters strongly prefer Clinton, 71-21.  That same demographic explanation does not work for Trump, however, since he drew an identical 35 percent in that same poll from both age groups.  The “social desirability” explanation probably has greater force when accounting for his poorer showing in polls conducted by human interviewers.

Turning to the differences in sampling frames, we find that polls that screen for “likely” voters show surprisingly greater levels of support for Bernie Sanders than do polls that include all registered voters or all adults.  Trump’s support shows no relationship with the sample of voters drawn.  Both candidates, as “insurgents,” are thought to suffer from the problem of recruiting new, inexperienced voters who might not actually show up at the polls for primaries and caucuses.  That seems not to be an issue for either man, and in Sanders’ case it appears that the enthusiasm we have seen among his supporters may well gain him a couple of percentage points when actual voting takes place.

Many Republicans and Independents See the Benghazi Committee as Politically Motivated and Approve

In today’s New York Times Charles Blow cites a finding from a recent CNN/ORC poll where 47 percent of Republican respondents agreed that the House Select Committee on Benghazi was “using the investigation to gain political advantage.”  At face value this is a rather surprising result.

Most questions that ask people to approve or disapprove of the actions of politicians generate partisan results.  Democrats are more likely to approve of the performance of President Obama while Republicans generally disapprove.  So, at first glance, for half of all Republicans to agree that the Republican-controlled Committee acted for political gain seems unusually critical of the Committee’s actions. As it turns out, there is a much larger group of Republicans who see the proceedings as politically motivated and are cheering the Committee on.

It turns out that the question Blow cites was asked of only half the sample.  The other half were asked whether “Republicans have gone too far” in the way they have handled the hearings, or whether they have handled them “appropriately.” The left-hand table reports that 71 percent of Democrats said “Republicans” had “gone too far” while 20 percent believed “Republicans” had handled the hearings “appropriately.”  For Republican respondents the reverse held true: only 16 percent said “Republicans have gone too far” while 74 percent said Republicans acted “appropriately.”

These figures do not sum to one hundred percent because of “don’t know” responses.  Nine percent of Democrats (100 − (71 + 20) = 9) gave no answer on the “gone too far” question, as did ten percent of Republicans (100 − (16 + 74) = 10).

The question on the left constitutes a referendum on “Republicans,” while the one on the right asks about the “House Select Committee” with no partisanship attached.  When asked to judge the Republicans’ behavior, we see the usual pattern of partisan response.  However, when asked to judge whether the Committee conducted an “objective investigation” or one to “gain political advantage,” the difference between Republicans and Democrats is considerably smaller.  Republicans split about equally between the two alternatives, with 47 percent choosing the “objective” response and 49 percent the “political” one.  Democrats almost uniformly see a political motive behind the Committee’s actions: 85 percent of them choose the political answer while just 10 percent see the Committee as “objective.”

Since these sub-samples were randomly chosen from the overall pool of respondents, both represent equally valid samplings of public opinion.  One thing we do not have is the answers of citizens asked both questions, because the questions went to separate half-samples.  We can, however, run some experiments to estimate what proportion of Republicans think the Committee is conducting a political investigation and approve of it.

We start with the basics — half of Republicans believe the Committee’s actions are politically motivated, and three-quarters of them approve of its conduct of the investigation.  We can combine these two measures to estimate how many Republicans endorse the Committee’s following a political agenda.


This table uses the responses for Republicans from the first table.  Republicans’ answers to whether the Committee was objective or political appear in the columns, and their judgments of the Committee’s conduct along the rows.  We know the percentage of Republicans who gave each of these answers, but we do not have data for the cells of the table because no one was asked both questions.  We can generate a “baseline” estimate for these cells by assuming that there is no relationship between answers to one question and answers to the other.  Under that assumption we get a most-likely estimate of about 36 percent for the proportion of Republicans endorsing a politically-motivated Committee.  That figure is calculated by taking 49 percent, the proportion seeing the Committee as politically motivated, and applying it to the 74 percent of Republicans who thought the Committee’s actions were “appropriate.”  Multiplying those figures together yields the estimated proportion of Republicans holding both opinions: 49% × 74% = 36.3%.
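Under the no-relationship assumption, every cell of the table is just the product of its row and column marginals.  Here is a minimal Python sketch using the Republican percentages quoted above (the variable names are mine, not the poll’s):

```python
# Republicans' marginals from the two CNN/ORC half-sample questions (percent).
# Rows: judgment of the Committee's conduct; columns: objective vs. political.
rows = {"gone too far": 16, "appropriate": 74, "don't know": 10}
cols = {"objective": 47, "political": 49, "don't know": 4}

# Baseline assumption: answers to the two questions are unrelated,
# so each cell is the product of its row and column marginals.
baseline = {(r, c): rv * cv / 100.0
            for r, rv in rows.items()
            for c, cv in cols.items()}

print(round(baseline[("appropriate", "political")], 1))  # 36.3
```

Because both sets of marginals sum to one hundred percent, the nine baseline cells do as well.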

That figure represents our best guess since it makes no assumptions about how opinions on the two questions might be related.  However, we can also set upper and lower bounds for this value because it is constrained by the “marginals,” the fixed row and column totals of the table.  The minimum, or “benign,” estimate assumes every Republican who thinks the Committee has “gone too far” also believes the Committee is acting politically.  That produces a table like this:


In this extreme case the 7.5 percent in the original “objective/gone too far” cell is added to the corresponding “political” cell on its right.  Since the “political” column must still sum to 49 percent, the proportion of Republicans who think the Committee’s action appropriate must fall to compensate and reaches its minimum of just under 29 percent.

Likewise we can add the 7.8 percent in the original “gone too far/political” cell to the “objective” cell on its left.  That more “aggressive” model assumes all Republicans who see the Committee acting politically also endorse its actions, and none think it has gone too far.  That increases the estimate to its maximum of 44 percent.


All told then, between 29 and 44 percent of Republicans see the House Select Committee on Benghazi as acting politically and approve.
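The two bounds follow mechanically from the marginals, so they are easy to check.  A short Python sketch that shifts mass out of (or into) the independence-table cells exactly as described above:

```python
# Marginals for Republicans (percent), as quoted from the CNN/ORC poll.
rows = {"gone too far": 16, "appropriate": 74, "don't know": 10}
cols = {"objective": 47, "political": 49, "don't know": 4}

def cell(r, c):
    # Independence-table cell: product of row and column marginals.
    return rows[r] * cols[c] / 100.0

# "Benign" bound: the gone-too-far/objective mass moves into the political
# column, so appropriate/political must shrink to keep that column at 49.
low = cell("appropriate", "political") - cell("gone too far", "objective")

# "Aggressive" bound: the gone-too-far/political mass moves out to the
# objective column, and appropriate/political grows to compensate.
high = cell("appropriate", "political") + cell("gone too far", "political")

print(round(low, 1), round(high, 1))  # 28.7 44.1
```

Rounding those figures gives the 29-to-44-percent range reported above.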

Charles Blow views the 49 percent of Republicans who believe the Committee is politically motivated as showing widespread “skepticism” about the Committee’s motives that extends even to Republicans.  With three-quarters of Republicans endorsing the Committee’s investigation, I see more cheering than skepticism in Republican ranks.  The true skeptics, those who think the Committee has “gone too far,” make up just sixteen percent of Republicans.  More than twice as many Republicans endorse the Committee’s actions precisely because it has pursued a political agenda.

Surprisingly, independents prove even more likely to approve of a Committee with partisan motivations.  Three-quarters of independents see a political motive behind the Committee’s actions, but a majority of them, 57 percent, also think the Committee has acted appropriately.  Applying the baseline assumption of no correlation as before, and multiplying those two figures together, indicates that nearly 43 percent of independents endorse a politically-motivated investigation, higher even than the Republican figure of 36 percent.
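The same no-correlation multiplication produces the independents’ figure:

```python
# Independents' marginals from the CNN/ORC poll (percent).
political = 75    # see a political motive in the Committee's actions
appropriate = 57  # say the Committee has acted appropriately

# No-correlation baseline: multiply the two marginals.
endorse = political * appropriate / 100.0  # 42.75, i.e. nearly 43 percent
```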

Elsewhere in the CNN/ORC poll we see that independents are vastly more unhappy with Hillary Clinton’s handling of the Benghazi affair than are Democrats.  Twice as many independents report being “dissatisfied” as “satisfied,” 65 percent to 31 percent.  Democrats hold the reverse set of opinions, with 63 percent satisfied and 30 percent dissatisfied.  The Republicans are the most extreme, of course, with 85 percent dissatisfied and only eleven percent satisfied.  Those dissatisfied independents could play an important role in next fall’s general election.