Technical Appendix for Gerrymandering and Proportionality

These regressions measure the percent of seats awarded to Democrats as a function of the percent of votes that party won. The “national-level” figures represent a regression using election-years as the unit of analysis; the data span 1940 through 2018, forty observations in all. The “state-level” estimates come from a regression using state-years as the unit of analysis; there are 679 qualifying races. See this article for details on how states and elections were selected. State-years where one party won all the seats are excluded.
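For readers who want to reproduce the setup, here is a minimal sketch, assuming hypothetical data frames national_df (one row per election year) and state_df (one row per qualifying state-year); the frame and column names are mine, chosen for illustration, not the ones used to produce the published figures.

# Sketch of the two seats-votes regressions described above (hypothetical names).
import pandas as pd
import statsmodels.formula.api as smf

def seats_votes_fit(df: pd.DataFrame):
    """Regress the Democratic share of seats on the Democratic share of votes."""
    return smf.ols("dem_seat_pct ~ dem_vote_pct", data=df).fit()

# national_fit = seats_votes_fit(national_df)          # forty election-years, 1940-2018
# sweeps = (state_df["dem_seat_pct"] == 0) | (state_df["dem_seat_pct"] == 100)
# state_fit = seats_votes_fit(state_df[~sweeps])       # 679 qualifying state-years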

Retirements as a Bellwether for House Elections

There have been eleven midterm elections when House retirements by one party outnumbered those of the other party by six or more Members.  In all but one of those elections the party with the greater number of retirements lost seats.


In the months before the 2018 election, forty Republican House Members chose to give up their seats rather than pursue re-election, by far the greatest Republican exodus since the New Deal. The previous Republican record of twenty-seven was set in 1958 during the Eisenhower recession. The Democrats saw forty-one of their Members depart the House in 1992, the year Bill Clinton was first elected.

However, it is not the volume of a party’s retirements that matters so much as the excess of retirements on one side of the aisle compared to the other.  To be sure, Members of Congress retire for many reasons. Age and illness catch up with the best of us.  Some Members give up their House seats to seek higher office, as Kyrsten Sinema and Beto O’Rourke did this year.

Still, Members also pay close attention to the winds of politics for fear they might be swept out of their seats. Some choose to retire rather than face an embarrassing defeat in the next election.  Such “strategic retirements” might prove a plausible bellwether for future elections.  If many more Members of one party are leaving their seats than of the other, that might bode ill for the party’s results at the next election.

One thing is certain: retirements prove useless for predicting House results in Presidential election years.  Presidential politics overwhelms any effect we might see for strategic retirements in House elections.

The picture looks different in midterm elections.  Years that saw more Republicans than Democrats retiring were also years when more seats swung from Republican to Democratic hands.  This past election joins 1958 as a year when an excess of Republican departures from the House foretold a substantial loss of seats.

The horizontal axis measures the difference between the number of Republican Members who left the House before an election and the number of Democrats who gave up their seats.* The vertical axis shows the swing in House seats compared to the previous election. For instance, in 2018 forty Republicans and eighteen Democrats left the House, for a net retirements figure of +22 Republican. The “blue wave” swung forty seats from the Republicans to the Democrats, about nine fewer than the best-fit line would predict.

Some readers might ask whether that nine-seat deficit reflected Republican gerrymandering in the years since the 2010 Census.  I simply cannot say.  The likely error range (the “95% confidence interval”) around the prediction for any individual year spans about a hundred seats.** With that much variability, detecting effects as subtle as gerrymandering is simply impossible.

As a bellwether, then, retirements seem pretty useless.  They appear to have so much intrinsic variability that any effects of strategic decision-making by Members remain hidden.  Suppose we group elections by the difference in retirements.  Will we see any stronger relationship with the election result than we have so far?

In the six elections where the number of retiring Republicans outnumbered retiring Democrats by six or more Members, the Republicans lost seats in five of them.  The same held true for elections when six or more Democrats retired compared to their Republican colleagues; the Republicans gained seats in all five of those elections.

So retirements can prove a useful predictor of future election results if we limit our attention to the more extreme years where one party’s retirements outnumber the other’s by six or more.  The party with the excess of retirements has lost ten of the eleven elections fought in such circumstances.

 

 

*The data on Congressional retirements come from the Brookings Institution’s invaluable Vital Statistics on Congress.  The figure for 2018 comes from the New York Times.

**The height of the bars depends on the overall “standard error of estimate,” in this case 23.8 seats, the size of the sample (21 elections), and the difference between the number of retirements in a given year and the mean for all years.  The confidence intervals average about plus or minus fifty seats for any given election.
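As a rough check on this footnote, here is a minimal sketch that computes the narrowest the 95% prediction interval can be from the figures quoted above (a standard error of estimate of 23.8 seats and 21 elections); the term involving the distance from the mean retirement differential is left out because those data are not reproduced here.

# Lower bound on the half-width of the 95% prediction interval, using only the
# numbers quoted in this footnote (s = 23.8 seats, n = 21 midterm elections).
from scipy import stats

s = 23.8                                   # standard error of estimate, in seats
n = 21                                     # number of elections in the regression
t_crit = stats.t.ppf(0.975, df=n - 2)      # two-sided 95% critical value

# Full formula: s * sqrt(1 + 1/n + (x - x_bar)**2 / Sxx); the last term vanishes
# only at the mean retirement differential, so this is the minimum half-width.
half_width = t_crit * s * (1 + 1 / n) ** 0.5
print(round(half_width, 1))                # roughly 51 seats, i.e. "plus or minus fifty"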

How Well Do Generic-Ballot Polls Predict House Elections?

I have compiled the results of generic-ballot polls taken close to each election and compared them to the actual division of the Congressional vote.  The table below presents the margin between support for the President’s party and support for the opposition.  For each election I have used about half a dozen polls from different organizations taken just before voting day.  Averaging the differences between the polled and actual margins shows that these polls have fared rather well since 2002.  The average deviation in the four midterm elections is 0.6 percent; in Presidential years that falls to 0.1 percent.

Still, these averages hide some fairly wide fluctuations.  In four of the eight elections the difference between the polls and the election results exceeds two percent.  The error was especially egregious in 2006, when the polls predicted nearly a fourteen-point Democratic margin compared to the 8.6 percent margin in the election itself.

In the most recent election, 2016, the polls predicted a slight positive swing in favor of the Democrats, but the outcome went slightly in the opposite direction.  All the cases where the polls erred in picking the winner occurred in Presidential years, usually when the polling margin was close.  The poll averages in the four midterm years all predicted the correct winner, though the size of the victory was off by more than three points in two of those elections.
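The gap between the small average deviations and the larger individual misses is just the difference between a signed average, where errors in opposite directions cancel, and an average of absolute errors, where they do not. A minimal sketch of that calculation (the poll and election margins themselves are not reproduced here):

# Signed vs. absolute average polling error, in percentage points (illustrative helper).
from statistics import mean

def polling_errors(poll_margins, actual_margins):
    errors = [poll - actual for poll, actual in zip(poll_margins, actual_margins)]
    signed_average = mean(errors)                      # opposite misses cancel
    absolute_average = mean(abs(e) for e in errors)    # they do not cancel here
    return signed_average, absolute_average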

 

The Strange Case of 1976

1976 was a horrible year for Senate Republicans; adjusting for that fact makes a slight difference to my 2018 predictions.

When I re-examined the results for my original model of Senate elections, it was hard to ignore how poorly the model fit the data for 1976.  Here is a graph of the model’s predicted and actual Senate vote that shows what an “outlier” 1976 is.  While Truman rallied Senate Democrats in 1948, even that event just hovers on the edge of the statistical “margin of error.”  The Republicans’ failure in the 1976 election after Richard Nixon was forced from office stands truly alone compared to the rest of the postwar elections in my dataset.

If 1976 had been a normal presidential election year, the Republicans’ Senatorial prospects would have looked fairly rosy.  Gerald Ford was running for re-election, real personal income was growing at two percent, and the Democrats were defending seats won in 1970 by a (two-party) margin of 56-44 at the height of the anti-war and anti-Nixon fervor.  That generally pro-Republican climate predicts the GOP should have won nearly 52 percent of the popular vote for Senate.

But, of course, the 1976 election was anything but normal.  It was the first presidential election after the Watergate scandals had forced Richard Nixon from office in disgrace.  Rather than winning the popular vote by the predicted four-point margin, the Republicans could muster only the same share of the vote they won back in 1970, 44 percent.  Though a number of seats changed hands, at the end of the day the Democrats held the same 61-seat Senate majority they did before the 1976 election.

I can adjust statistically for the anomalous 1976 election by adding a “dummy” variable to my model that is one in 1976 and zero otherwise.  Adjusting for 1976 radically improves all aspects of my model.  Its predictive power as measured by adjusted R-squared rises from 0.43 to 0.56, and all the coefficients are more precisely estimated.
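Here is a minimal sketch of that adjustment, assuming a hypothetical data frame senate_df with one row per election year; the predictor names are mine, standing in for the variables in the published model.

# Re-estimate the Senate model with a one-off dummy for 1976 (hypothetical names).
import pandas as pd
import statsmodels.formula.api as smf

def fit_with_1976_dummy(senate_df: pd.DataFrame):
    df = senate_df.copy()
    df["dummy_1976"] = (df["year"] == 1976).astype(int)   # one in 1976, zero otherwise
    formula = ("gop_vote ~ pres_on_ballot + income_growth + prior_gop_vote"
               " + dummy_1976")
    return smf.ols(formula, data=df).fit()

# adjusted = fit_with_1976_dummy(senate_df)
# print(adjusted.rsquared_adj)   # rises from about 0.43 to 0.56 in the published model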

Adding this dummy variable implicitly treats Gerald Ford as different from other Presidents running for re-election.  Ford was apparently so compromised by Watergate that his presence at the top of the ticket did not generate the kind of support his fellow Republican candidates for Senate might have expected.  With the 1976 adjustment, the overall effect of Presidential candidacies rises from 2.5 to 3.1 percent, suggesting Ford’s performance was suppressing the estimate for other Presidents.

Adjusting for 1976 also increases the compensating effect (“regression-toward-the-mean”) of the prior vote for a Senatorial class.  With the suppressing effect of 1976 removed, I now estimate that the Democrats’ lopsided Senate victory in 2012 should be worth about 2.3 percent to the Republicans this November, compared to the 1.9 percent figure I presented earlier with no adjustment for 1976.

For comparison to the chart above, here are the predicted and actual values for the model adjusted for 1976:

Including a dummy variable for 1976 sets its residual to zero and places its predicted value precisely on the line.  The largest positive outliers are now 1978 and two Presidential years, Truman in 1948 and Barack Obama in 2016.

The effects of this modification on the predictions in my earlier article are quite modest.  Without adjusting for 1976, I predicted the Republicans would win 48.1 percent of the popular vote if Trump’s approval rating is at forty percent and real disposable personal income rises two percent.  With the adjustment, that figure rises to 48.4 percent.

The overall conclusion remains that no likely combination of factors predicts that the Republicans will win a majority of the popular vote for Senate this fall.

 

 

Technical Appendix: Trends in Trump’s Job Approval

I used a relatively simple model to generate the graph shown in the accompanying article.  I ran a weighted least-squares regression of Trump’s approval rating on the logarithm of the number of days since the Inauguration, plus the usual array of dummy variables to account for differences in polling methods and “house effects.”  Here are the results:


Because I use the base-10 logarithm, the estimates imply that Trump’s approval fell by a bit over five points between day ten of his term (log(10)=1) and day one hundred (log(100)=2).  I use the percent approval rating in this regression to make the dummy variable coefficients more meaningful; a regression using the logit of the approval rating shows identical results.
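Here is a minimal sketch of that specification, assuming a hypothetical data frame polls_df holding the approval percentage, the number of days since the Inauguration, each poll’s sample size, and indicators for methods and polling houses; all of the names are illustrative.

# Weighted least squares of approval on log10(days since the Inauguration) plus
# method and house-effect dummies (hypothetical frame and column names).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_approval_trend(polls_df: pd.DataFrame):
    df = polls_df.copy()
    df["log_days"] = np.log10(df["days_since_inauguration"])
    formula = ("approve_pct ~ log_days + internet + registered_voters"
               " + C(pollster)")           # dummies for methods and house effects
    # Polls are weighted by the square root of their sample sizes (see the
    # weighted-least-squares note later in this appendix).
    return smf.wls(formula, data=df, weights=np.sqrt(df["sample_size"])).fit()

# fit = fit_approval_trend(polls_df)
# fit.params["log_days"]   # change in approval per unit of log10(days), e.g. day 10 to day 100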

These results show that polls conducted over the Internet or limited to registered voters tend to report a job approval figure about 1.5 percentage points higher than the consensus.  The “house effects” variables indicate that polls conducted by YouGov/Economist average just under one percentage point less favorable toward Trump than the consensus.  Job approval ratings from polls taken by SurveyMonkey, Politico/Morning Consult, and Rasmussen run from two to five percentage points higher than the consensus.  Gallup, the most frequent pollster, does not deviate from the consensus.

Technical Appendix: Seats and Votes in the 2018 Election

I am extending the simple model I presented in 2012 relating the proportion of House seats won by Democrats to that party’s share of the (two-party) national popular vote for Congressional candidates.  It uses dummy variables to represent each redistricting period (e.g., the 2000 Census was used to redistrict elections from 2002 through 2010) and a slope change that starts with the Republican House victory of 1994.
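A minimal sketch of that specification, assuming a hypothetical data frame house_df with one row per election; the column names are illustrative stand-ins for the variables described above.

# Democratic seat share as a function of vote share, with redistricting-period
# dummies and a change of slope beginning in 1994 (hypothetical names).
import pandas as pd
import statsmodels.formula.api as smf

def fit_seats_votes_model(house_df: pd.DataFrame):
    df = house_df.copy()
    df["post_1994"] = (df["year"] >= 1994).astype(int)
    # census_decade marks the redistricting period, e.g. 2000 for 2002 through 2010.
    formula = ("dem_seat_share ~ dem_vote_share + dem_vote_share:post_1994"
               " + C(census_decade)")
    return smf.ols(formula, data=df).fit()

# fit = fit_seats_votes_model(house_df)
# Swing ratio before 1994: fit.params["dem_vote_share"]
# Swing ratio since 1994:  fit.params["dem_vote_share"] + fit.params["dem_vote_share:post_1994"]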

To review, the earlier model showed this pattern of partisan advantage for elections conducted since 1940:

The results for the 2010 redistricting were based solely on the 2012 election.  As we’ll see in a moment, adding in 2014 and 2016 only made that result more robust.

As I argued earlier, not all of this trend results from partisan gerrymandering.  Americans have sorted themselves geographically over the past half-century with Democrats representing seats from urban areas and Republicans holding seats from suburban and rural areas.  As partisans self-segregate, the number of “safe” seats rises, and electoral competitiveness declines.

Partisan self-segregation also makes gerrymandering easier.  Opponents can be “packed” into districts where they make up a super-majority.  House Minority Leader Nancy Pelosi routinely wins 80 percent or more of the vote in her tiny but densely populated San Francisco district.  Many of these seats are held by minority Members of Congress because of our national policy of encouraging “majority-minority” districts.  These efforts were well-motivated as a response to racist gerrymandering that would “crack” minority areas and distribute the pieces among a number of majority-white districts.  Unfortunately for the Democrats, these policies have meant that too many of the party’s voters live in heavily Democratic districts.

Here is the result of an ordinary least squares regression for the share of House seats won by the Democrats in elections since 1940:

If I plot the predicted and actual values for Democratic seats won, the model unsurprisingly follows the historical pattern quite closely:

The Democrats routinely won around sixty percent of House seats between 1940 and 1992.  Since then they have held a majority in the House only twice, after the elections of 2006 and 2008.  Notice, too, that both the actual and predicted values for 1994 to the present show much less variance than in the earlier decades.  The results above show that the “swing ratio” relating seats and votes has become much smaller, falling from 1.92 before 1994 to 1.33 (= 1.92 - 0.59) since.  A smaller swing ratio indicates that House elections have become less competitive since Bill Clinton was elected President in 1992.  Changes in vote shares are still amplified in seat outcomes, as they are in all first-past-the-post electoral systems like ours, but the effect has been diminished because of the increase in the number of safe seats on both sides of the aisle.

We can use this model to estimate the share of votes required in order for the Democrats to win a majority in the House.  This chart shows the predicted relationships between seats and votes for two historical periods, one through the election of Bill Clinton in 1992, and the other beginning with the Republican victory in the House election of 1994 under Newt Gingrich and his “Contract with America.”

The slope in the latter period is substantially flatter than in the earlier period, meaning that Congressional elections have become somewhat less competitive since 1992.  Changes in vote shares have a smaller effect on changes in seat shares than they did before 1994.

Finally, the third line represents an estimate for the relationship in 2018, using the 1994-2016 slope and only the post-2010 intercept shift.  The chart shows that for the Democrats to win half the seats in 2018 they will need to garner a bit over 53 percent of the two-party popular vote for the House.
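The 53 percent figure is just the point where the 2018 line crosses fifty percent of seats; a minimal sketch, with the slope and intercept of that line left as placeholders since the exact values combine the post-1994 slope with the 2010-Census intercept:

# Vote share at which a fitted seats-votes line reaches half the seats.
# The slope and intercept arguments are placeholders for the estimated 2018 line.

def breakeven_vote_share(slope: float, intercept: float) -> float:
    """Solve seat_share = intercept + slope * vote_share for seat_share = 0.5."""
    return (0.5 - intercept) / slope

# With the published estimates this works out to a bit over 0.53 (53 percent).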

 

 

*The intercepts in these charts represent weighted averages of the adjustments for the various Census years. For instance, the 1994-2016 line includes the coefficients for the 1990, 2000, and 2010 Census weighted by the number of elections in each decade. So in this case the 1990 and 2000 adjustments would have weights of five, and the 2010 adjustment a weight of three. The 2018 line applies only the 2010 redistricting adjustment.
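In code form, with a1990, a2000, and a2010 standing for the estimated Census-period adjustments:

# Weighted-average intercept adjustment for the 1994-2016 line: five elections
# each under the 1990 and 2000 maps, three under the 2010 map.
def line_intercept_adjustment(a1990: float, a2000: float, a2010: float) -> float:
    return (5 * a1990 + 5 * a2000 + 3 * a2010) / (5 + 5 + 3)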

Procedures Used with Data from Huffington Post Pollster

In the past few weeks, Pollster has begun reporting multiple results for a single poll.  Some polling organizations have been reporting separate results for Democratic, Republican, and independent respondents, as well as the aggregated data for all respondents.  They have also begun providing detailed information on the question(s) asked to determine voting intention.  Pollster reports separate results for each question wording.

Since all my analyses use just one entry per poll, I have begun removing these extra data before analysis.  Unless specifically stated, I am using only the first “question iteration” for each poll (coded “1” at Pollster) and only data for the entire population.  Using the first iteration helps ensure consistency across all the polls from a single organization.
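A minimal sketch of that filtering step, assuming the Pollster export has been loaded into a data frame; the column names here are illustrative, not Pollster’s exact field names.

# Keep one row per poll: first question iteration, full-sample results only
# (column names are illustrative, not Pollster's exact field names).
import pandas as pd

def one_entry_per_poll(raw: pd.DataFrame) -> pd.DataFrame:
    keep = (raw["question_iteration"] == 1) & (raw["population"] == "all respondents")
    return raw[keep].drop_duplicates(subset="poll_id")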

 

Why Weighted Least Squares for Polling Regressions

Standard ordinary least squares regression assumes that the error term has the same variance across all the observations.  When the units are polls, we know immediately that this assumption will be violated.  The error in a poll is inversely proportional to the square root of its sample size.  The “margin of error” that pollsters routinely report is roughly twice the standard error of a proportion evaluated at 50%, the worst case with the largest possible variance.  That comes from the well-known statistical formula

SE(p) = sqrt(p[1-p]/N)

where N is the sample size.  This formula reaches its maximum at p=0.5 (50%) making the standard error 0.5/sqrt(N).

Weighted least squares adjusts for these situations where the error term has a non-constant variance (technically called “heteroskedasticity”). To even out the variance across observations, each one is weighted by the reciprocal of its estimated standard error.  For polling data, then, the weights should be proportional to the reciprocal of 1/sqrt(N), or just sqrt(N) itself. I thus weight each observation by the square root of its sample size.

More intuitively, we are weighting polls by their sample sizes.  However, because we take the square root of each sample size first, the weights grow more slowly than the samples themselves, just as the accuracy of a poll does.
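A minimal sketch of those two formulas, plus the way the weights would be passed to a regression routine (the data frame and formula are placeholders):

# Worst-case margin of error and the regression weight for a poll of N respondents,
# following the formulas above.
import math

def margin_of_error(n: int) -> float:
    """Roughly twice the worst-case standard error, in percentage points."""
    return 2 * (0.5 / math.sqrt(n)) * 100

def wls_weight(n: int) -> float:
    """Weight proportional to the square root of the sample size, as described above."""
    return math.sqrt(n)

print(round(margin_of_error(1000), 1))    # about 3.2 points for a 1,000-person poll

# In practice the weights are handed to the regression routine, e.g.:
# smf.wls(formula, data=polls, weights=polls["sample_size"] ** 0.5).fit()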

Iowa: So Many Polls. So Few Respondents.

Pollsters have conducted over 44,000 interviews among Iowa’s 160,000 Republicans, but they probably interviewed just 15,000 unique people.  A majority of those polled took part in at least three interviews over the course of the campaign.

It seems like new Presidential polling figures are released every day.  We generally talk about each new poll as a unique snapshot of the campaign with some fresh sample of a few hundred caucus-goers.  That concept of polls might apply to national samples, but when polling in states as small as Iowa and New Hampshire, the number of eligible respondents is not that large compared to the number of interviews being conducted.

Making the problem worse is the falling “response rate” in polling, the proportion of eligible respondents who complete an interview.  Mobile phones, caller ID, and answering machines have all enabled potential respondents to avoid the pollster’s call.  Pew reports that response rates fell from 21 percent to 9 percent just between 2006 and 2012.  If we assume a response rate of ten percent, only some 16,000 of Iowa’s 160,000 eligible Republican caucus-goers might have agreed to take part in a poll.

Huffington Post Pollster lists a total of 94 polls of Republican caucus-goers through today, January 31, 2016, constituting a total of 44,433 interviews.  I will use this figure to see how the composition of the sample changes with different response rates.1

How large is the electorate being sampled?

Around 120,000 people participated in the Republican caucuses in 2008 and 2012.  While some observers think turnout in 2016 could be higher because of the influx of voters for Donald Trump, I have stuck with the historical trend and estimate Republican turnout in 2016 at just under 124,000 citizens.

To that baseline we have to add in people who agree to complete an interview but do not actually turn out for the caucuses.  In my 2012 analysis I added 20 percent to the estimated universe to account for these people, but recent findings from Pew suggest 30 percent inflation might be more accurate.  With rounding, I will thus use 160,000 as my estimate for the number of Iowa Republicans who might have been eligible to be polled about the 2016 Republican caucuses.

How low is the response rate?

Most of those 160,000 people will never take part in a poll.  Pew estimated 2012 “response rates,” the proportion of eligible respondents who actually complete an interview, in the neighborhood of 9 percent.  To see what this means for Iowa, here is a table that presents the average number of interviews a cooperating respondent would have completed during the 2016 campaign at different response rates.  At a ten percent response rate like the one Pew reports, the 16,000 cooperating prospects would each need to complete an average of 2.78 interviews to reach the total of 44,433.

[Table: average interviews per cooperating respondent at different response rates]
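A sketch of how that table is built from the totals quoted above (44,433 interviews and a pool of 160,000 eligible caucus-goers):

# Average interviews per cooperating respondent at different response rates.
TOTAL_INTERVIEWS = 44_433
ELIGIBLE_POOL = 160_000

for rate in (1.0, 0.5, 0.2, 0.1):
    cooperating = ELIGIBLE_POOL * rate
    print(f"{rate:4.0%}: {TOTAL_INTERVIEWS / cooperating:.2f} interviews per respondent")

# At a ten percent response rate: 44,433 / 16,000 = 2.78 interviews apiece.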

How many people gave how many interviews?

Finally, I’ll apply the Poisson distribution once again to estimate the number of people being interviewed once, twice, three times, etc., to see the shape of the samples at each response rate.

[Chart: distribution of interviews per respondent at different response rates]

Even if everyone cooperates, random chance alone would result in about 13 percent of respondents being interviewed at least twice.  When the response rate falls to 10 percent, most respondents are being interviewed three or four times, with fifteen percent of them being interviewed five times or more.  Even with a 20 percent response rate, about double what Pew reports, a majority will have been interviewed at least twice.
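A sketch of the Poisson calculation behind those figures, using the totals quoted above; the share of respondents interviewed a given number of times is computed among those interviewed at least once.

# Interviews per cooperating respondent modeled as a Poisson distribution.
from scipy.stats import poisson

TOTAL_INTERVIEWS = 44_433
ELIGIBLE_POOL = 160_000

def share_interviewed_at_least(k: int, response_rate: float) -> float:
    """Share of respondents interviewed at least k times, among those interviewed at all."""
    lam = TOTAL_INTERVIEWS / (ELIGIBLE_POOL * response_rate)   # mean interviews per cooperator
    at_least_k = 1 - poisson.cdf(k - 1, lam)
    at_least_1 = 1 - poisson.cdf(0, lam)
    return at_least_k / at_least_1

print(round(share_interviewed_at_least(2, 1.0), 2))   # about 0.13 if everyone cooperates
print(round(share_interviewed_at_least(2, 0.2), 2))   # a majority at a 20 percent response rate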

Certainly someone willing to be interviewed three, four, five times or more about a campaign must have a higher level of interest in politics than the typical Iowa caucus-goer who never cooperates with pollsters.  That difference could distort the figures for candidate preferences if some candidates’ supporters are more willing to take part in polls.

Basing polls on a relatively small number of cooperative respondents might also create false stability in the readings over time.  Polling numbers reflect more the opinions of the “insiders” with a strong interest in the campaign and may be less sensitive to any winds of change.  We might also imagine that, as the campaign winds down and most everyone eligible has been solicited by a pollster, samples become more and more limited to the most interested.

Overarching all these findings is the sobering fact that only about one in ten citizens is willing to take part in polling.  Pollsters can adjust after the fact for any misalignments of the sample’s demographics, but they cannot adjust for the fact that people who participate in polling may simply not represent the opinions of most Americans.  We’ll see how well the opinions of those small numbers of respondents in Iowa and New Hampshire match the opinions of those states’ actual electorates when the caucuses and primary take place.

 


1For comparison, Pollster archives 65 polls for the 2012 Iowa Republican caucuses totaling 36,300 interviews.  The expanded demand for polls has increased their number by 45 percent and the number of interviews conducted by 22 percent in just one Presidential cycle. (To afford polling at greater frequencies, the average sample size has fallen from 558 in 2012 to 473 in 2016.)