A Tale of Two Candidacies

Trump fares better in polls conducted by robots; Sanders polls better when humans conduct the interview.  Sanders also shows greater strength in polls of “likely” voters.

Commentators often treat Donald Trump and Bernie Sanders as representing two “insurgencies” within the Republican and Democratic Parties.  While there are certainly some surface similarities between the two candidacies, national polling data for the two candidates show substantial differences as well.  I begin with two charts comparing the trends in their national polling support using data from Huffington Post Pollster since each candidate announced he was running for President of the United States.



While both men’s support has continued to grow over the course of the campaign, the trajectories of their support are radically different.  Sanders’ polling figures have increased consistently, but the rate of increase has slowed as the campaign progressed.  Trump’s support seems to have gone through three phases: rapid growth at the outset, a plateau over the fall, and a second surge beginning around Thanksgiving that slowed at the turn of the year.  Extrapolating out to February 1st, when the Iowa caucuses take place, Trump would be polling just under forty-five percent nationally, with Sanders reaching thirty-five percent.


I have examined two types of polling effects: the method of interviewing and the type of sample drawn. Trump does over four percentage points worse when interviews are conducted by a live human being.  Sanders does worse by an essentially identical margin in polls that use automated telephone methods.  The result for Trump may reflect an unwillingness on the part of his supporters to admit to preferring the mogul when talking with an actual human interviewer.

Sanders’ poorer showing in polls that rely on automated methods may have to do with their exclusion of cell phones, which by law cannot be called by automated dialers. Usually this problem is adjusted for after the poll has been conducted by weighting the data to conform to expected demographic breakdowns.  However, Sanders’ large lead among younger voters, who are much less likely to have a landline phone, may be suppressing his support in automated polling.  In a recent Fox News poll, for instance, Sanders holds a 61-31 lead over Hillary Clinton among respondents under 45 years of age; older voters strongly prefer Clinton, 71-21.  That same demographic explanation does not work for Trump, however, since he drew an identical 35 percent in that same poll from both age groups.  The “social desirability” explanation probably has greater force in accounting for Trump’s poorer showing in polls conducted by human interviewers.

Turning to the differences in sampling frames, we find that polls that screen for “likely” voters show, perhaps surprisingly, greater levels of support for Bernie Sanders than do polls that include all registered voters or all adults.  Trump’s support shows no relationship with the sample of voters drawn.  Both candidates, as “insurgents,” are thought to suffer from the problem of recruiting new, inexperienced voters who might not actually show up at the polls for primaries and caucuses.  That seems not to be an issue for either man, and in Sanders’ case the enthusiasm we have seen among his supporters may well gain him a couple of percentage points when actual voting takes place.

The Kentucky Gubernatorial Election

Political observers were shocked when the Republican candidate for governor in Kentucky, Matt Bevin, won a resounding victory over Democrat Jack Conway. Conway, the incumbent Attorney General, had held a small lead in the polls throughout the campaign, about 2.1 percentage points according to the Huffington Post’s Pollster charts.  That difference translated into an 88 percent chance that Conway was actually ahead in the electorate as a whole.

The actual results were not even close.  Bevin won with 52.5 percent of the ballots cast compared to 43.8 percent for Conway and 3.7 percent for third-party candidate Drew Curtis. Bevin received nearly 218,000 more votes than the Republican candidate in 2011, David Williams, while Conway lost ground winning some 37,000 votes fewer than the total won by the outgoing Democratic governor, Steve Beshear, when he ran in 2011.  In the two largest counties, Jefferson and Fayette, where the Louisville and Lexington metropolitan areas are located, Conway actually won about 10,000 fewer votes in 2015 than he had received when running for Attorney General in 2011.

In 53 of Kentucky’s 120 counties, Bevin’s share of the vote ran more than twenty percentage points ahead of the share won by Williams four years earlier.

Following my earlier report on the 2013 Virginia gubernatorial race, here is a simple model based on county-by-county data predicting the vote for Bevin as a function of the vote for Williams in 2011, the proportion of black and Hispanic adults in 2014, and the change in turnout measured as a proportion of the total population aged 19 and older.  (As usual, I have transformed these proportions into their “logits.”)  The voting data come from the official returns on the site run by the Kentucky Secretary of State.  The demographic data were compiled from Census estimates reported here.  Because of the way the Census groups people by age, my definition of “adults” excludes 18-year-olds.
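For concreteness, here is a minimal sketch of the logit transformation applied to every proportion in these models (the 60-percent county below is purely illustrative):

```python
from math import log, exp

def logit(p):
    """Log-odds transform: maps a proportion in (0, 1) onto the whole real line."""
    return log(p / (1 - p))

def inv_logit(x):
    """Inverse transform, mapping a log-odds value back to a proportion."""
    return 1 / (1 + exp(-x))

# A hypothetical county where the Republican won 60 percent of the vote:
share = 0.6
print(round(logit(share), 4))             # positive, since the share exceeds one half
print(round(inv_logit(logit(share)), 4))  # recovers the original proportion
```

Working on the logit scale keeps predicted vote shares from straying outside the zero-to-one range and makes effects roughly symmetric around fifty percent.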

Ordinary Least Squares; 120 Kentucky Counties
Dependent variable: Republican Vote for Governor, 2015
All variables measured as "logits."

                 coefficient   std. error   t-ratio   p-value 
  const            0.480953     0.116231      4.138    6.70e-05 ***
  Rep Gov 2011     0.646872     0.0354867    18.23     1.04e-35 ***
  Black 2014      −0.0545183    0.0206335    −2.642    0.0094   ***
  Hispanic 2014    0.0132545    0.0328316     0.4037   0.6872  
  Turnout Change   0.322859     0.136171      2.371    0.0194   **

Mean dependent var   0.341255   S.D. dependent var   0.388213
Sum squared resid    3.832535   S.E. of regression   0.182555
R-squared            0.786303   Adjusted R-squared   0.778870

Bevin did less well in counties with higher proportions of blacks, though not Hispanics, even after controlling for the 2011 Republican vote.  He also apparently fared better in counties where turnout increased.

However, the turnout effect largely depends on three “outliers,” Cumberland, Elliott, and Menifee Counties, all with adult populations under 7,000. In the first two, turnout fell by more than fifteen percent, while in Menifee it rose by about the same amount.  If we exclude these three counties, the effect of the change in turnout is much more modest:

Ordinary Least Squares; 117 Kentucky Counties
Dependent variable: Republican Vote for Governor, 2015
All variables measured as "logits."

                 coefficient   std. error   t-ratio   p-value 
  const            0.455381     0.118921      3.829    0.0002   ***
  Rep Gov 2011     0.682421     0.0379599    17.98     8.51e-35 ***
  Black 2014      −0.0486127    0.0205614    −2.364    0.0198   **
  Hispanic 2014   −0.00571780   0.0342821    −0.1668   0.8678  
  Turnout Change   0.147086     0.184871      0.7956   0.4279  

Mean dependent var   0.344539   S.D. dependent var   0.383348
Sum squared resid    3.617365   S.E. of regression   0.179716
R-squared            0.787798   Adjusted R-squared   0.780220

The effects for the prior Republican vote and the proportion black and Hispanic remain about the same after these three counties are excluded, but the effect for changes in turnout is about half its prior value and fails to achieve statistical significance.

The Kynect Effect

One of the major issues in the campaign was the “Kynect” program, Kentucky’s implementation of the exchanges provided for under the Affordable Care Act.  Bevin opposed Kynect and initially threatened to abolish the program entirely if elected.  He has since relented somewhat, agreeing to grandfather all current enrollees but not to accept any new applications.  We might thus expect counties with higher Kynect enrollment rates to show lower levels of support for the Republican.  Using 2014 enrollment data from the Kentucky governor’s site, however, I find no effect for Kynect enrollment when it is measured as a proportion of a county’s total population.  When added to the model above, the coefficient is trivially small (−0.014) and statistically insignificant.

It turns out, though, that if we look at the factors influencing Kynect enrollments, we get what might be considered a counter-intuitive result:

Ordinary Least Squares: 120 Kentucky Counties
Dependent variable: Proportion of Total Population Enrolled in Kynect
All variables measured as "logits."

             coefficient   std. error   t-ratio   p-value 
const         −2.88070      0.205054     −14.05    8.61e-27 *** 
Rep Gov 2011   0.147432     0.0639254      2.306   0.0229   ** 
Black 2014    −0.0979391    0.0366472     −2.672   0.0086   *** 
Hispanic 2014 −0.154206     0.0587630     −2.624   0.0099   *** 

Mean dependent var  −1.955458   S.D. dependent var   0.388445 
Sum squared resid    12.62110   S.E. of regression   0.329852 
R-squared            0.297104   Adjusted R-squared   0.278926

Kynect enrollments are higher in counties that voted Republican in 2011 and lower in counties with larger proportions of black or Hispanic citizens.

One possible theory is that because the ACA was designed to provide insurance to less well-off Americans not already covered by programs like Medicaid, Kynect rates should be higher in counties where Medicaid rates are relatively lower.  That is clearly not the case.  The bivariate correlation between 2014 Kynect enrollment rates and 2011 Medicaid enrollment rates is 0.89: Kynect enrollments are highest in counties where Medicaid enrollments are also high.  The real determinant of Kynect (and Medicaid) coverage rates is whether a county is urban or rural.  If we use total county population as a rough measure of urbanicity, then we have this relationship:

Kynect enrollments are higher in the smaller counties.  Not surprisingly, those more rural counties gave a larger share of their votes to Bevin.


However, Bevin fared worst in the two largest counties, where Lexington and Louisville are located.

Many commentators suggested that Bevin’s success came more from his appeal to social and religious conservatives than from anything having to do with economics or programs like Kynect.  Kentucky ranks eighth among the states in weekly church attendance, and Bevin appealed directly to religious conservatives with his strong endorsement of Kim Davis, the local official who refused to issue marriage licenses to same-sex couples after the Supreme Court’s decision in June.  It seems much more plausible that Bevin’s victory was powered by these religious appeals than by anything having to do with his policy stands.


Many Republicans and Independents See the Benghazi Committee as Politically Motivated and Approve

In today’s New York Times, Charles Blow cites a finding from a recent CNN/ORC poll in which 47 percent of Republican respondents agreed that the House Select Committee on Benghazi was “using the investigation to gain political advantage.”  At face value this is a rather surprising result.

Most questions that ask people to approve or disapprove of the actions of politicians generate partisan results.  Democrats are more likely to approve of the performance of President Obama while Republicans generally disapprove.  So, at first glance, for half of all Republicans to agree that the Republican-controlled Committee acted for political gain could seem unusually critical of the Committee’s actions. As it turns out there is a much larger group of Republicans who see the proceedings as politically motivated and are cheering the Committee on.

It turns out that the question Blow cites was asked of only half the sample.  The other half were asked whether “Republicans have gone too far” in the way they have handled the hearings, or whether they have handled them “appropriately.” The left-hand table reports that 71 percent of Democrats said “Republicans” had “gone too far” while 20 percent believed “Republicans” had handled the hearings “appropriately.”  For Republican respondents the reverse held true: only 16 percent of them said “Republicans have gone too far” while 74 percent said Republicans acted “appropriately.”

These figures do not sum to one hundred percent because of “don’t know” responses.  Nine percent of Democrats (100 − (71 + 20) = 9) have no answer on the “gone too far” question, as do ten percent of Republicans (100 − (16 + 74) = 10).

The question on the left constitutes a referendum on “Republicans,” while the one on the right asks about the “House Select Committee” with no partisanship attached.  When asked to judge the Republicans’ behavior, we see the usual pattern of partisan response. When asked instead whether the Committee conducted an “objective investigation” or one meant to “gain political advantage,” the difference between Republicans and Democrats is considerably smaller. Republicans split about equally between the two alternatives, with 47 percent choosing the “objective” response and 49 percent the “political” one. Democrats almost uniformly see a political motive behind the Committee’s actions: 85 percent choose the political answer while just 10 percent see the Committee as “objective.”

Since these sub-samples were randomly chosen from the overall pool of respondents, both represent equally valid samplings of public opinion.  One thing we do not have is the answers of citizens asked both questions, since the questions appeared in separate half-samples. We can, however, run some experiments to estimate what proportion of Republicans think the Committee is conducting a political investigation and approve of it.

We start with the basics: half of Republicans believe the Committee’s actions are politically motivated, and three-quarters of them approve of its conduct of the investigation.  We can combine these two measures to estimate how many Republicans endorse the Committee’s pursuit of a political agenda.


This table uses the responses for Republicans from the first table.  Republicans’ answers to whether the Committee was objective or political appear in the columns, and their judgments of the Committee’s actions in the rows.  We know the percentage of Republicans who gave each of these answers, but we do not have data for the cells of the table because no one was asked both questions.  We can generate a “baseline” estimate for these cells by assuming that there is no relationship between answers to one question and answers to the other.  Under that assumption we get a most-likely estimate of about 36 percent for the proportion of Republicans endorsing a politically-motivated Committee.  That figure is calculated by taking 49 percent, the proportion seeing the Committee as politically motivated, and applying it to the 74 percent of Republicans who thought the Committee’s actions were “appropriate.”  Multiplying those figures together yields the estimated proportion of Republicans holding both opinions: 49% × 74% = 36.3%.

That figure represents our best guess, since it makes no assumptions about how opinions on the two questions might be related. However, we can also set upper and lower bounds for this value because it is constrained by the “marginals,” the row and column totals, each of which must sum to one hundred percent.  The minimum, or “benign,” estimate assumes every Republican who thinks the Committee has “gone too far” also believes the Committee is acting politically. That produces a table like this:


In this extreme case the 7.5 percent in the original “objective/gone too far” cell is added to the “political” cell to its right.  Since the “political” column must still sum to 49 percent, the “political/appropriate” cell must fall to compensate, reaching its minimum of just under 29 percent.

Likewise we can add the 7.8 percent in the original “gone too far/political” cell to the “objective” cell on its left.  That more “aggressive” model assumes all Republicans who see the Committee acting politically also endorse its actions, and none think it has gone too far.  That increases the estimate to its maximum of 44 percent.
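The baseline and both bounds can be reproduced in a few lines, using the Republican percentages from the two half-samples as reported above:

```python
# Column shares: was the Committee "objective" or "political"? (4% don't know)
p_political, p_objective = 0.49, 0.47
# Row shares: did Republicans act "appropriately" or go "too far"? (10% don't know)
p_appropriate, p_gone_too_far = 0.74, 0.16

# Baseline: assume answers to the two questions are unrelated.
baseline = p_political * p_appropriate              # about 36 percent

# "Benign" minimum: everyone who says "gone too far" also says "political",
# so the objective/gone-too-far mass shifts out of political/appropriate.
minimum = baseline - p_objective * p_gone_too_far   # just under 29 percent

# "Aggressive" maximum: no one who says "political" says "gone too far",
# so that mass shifts into political/appropriate.
maximum = baseline + p_political * p_gone_too_far   # about 44 percent

print(round(baseline, 3), round(minimum, 3), round(maximum, 3))
```

The don’t-know cells are held at their baseline values throughout, which is why these bounds are tighter than the purely arithmetic limits implied by the marginals alone.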


All told then, between 29 and 44 percent of Republicans see the House Select Committee on Benghazi as acting politically and approve.

Charles Blow views the 49 percent of Republicans who believe the Committee is politically motivated as showing widespread “skepticism” about the Committee’s motives that extends even to Republicans.  With three-quarters of Republicans endorsing the Committee’s investigation, I see more cheering than skepticism in Republican ranks.  The true skeptics, those who think the Committee has “gone too far,” make up just sixteen percent of Republicans.  At least twice as many Republicans endorse the Committee’s actions precisely because it has pursued a political agenda.

Surprisingly, independents prove even more likely to approve of a Committee with partisan motivations.  Three-quarters of independents see a political motive, but a majority of them, 57 percent, also think the Committee has acted appropriately. Applying the same no-correlation baseline as before and multiplying those two figures together indicates that nearly 43 percent of independents endorse a politically-motivated investigation, higher even than the Republican figure of 36 percent.

Elsewhere in the CNN/ORC poll we see that independents are vastly more unhappy with Hillary Clinton’s handling of the Benghazi affair than are Democrats.  Twice as many independents report being “dissatisfied” as “satisfied,” 65 percent to 31 percent.   Democrats hold the reverse set of opinions, with 63 percent satisfied and 30 percent dissatisfied.  Republicans are the most extreme, of course, with 85 percent dissatisfied and only eleven percent satisfied. Those dissatisfied independents could play an important role in next fall’s general election.

Honey, It’s the Pollster Calling Again!

Back in 1988 I had the pleasure of conducting polls during the New Hampshire primaries on behalf of the Boston Globe.  The Globe had a parochial interest in that year’s Democratic primary because the sitting Massachusetts governor, Michael Dukakis, had become a leading contender for the Presidential nomination.  The Republican side pitted Vice-President George H. W. Bush against Kansas Senator Bob Dole, the upset winner of the Iowa caucuses a week before the primary. Also in the race were well-known anti-tax crusader Jack Kemp and famous televangelist Pat Robertson.  Bush had actually placed third in Iowa, behind both Dole and Robertson.

We had been polling both sides of the New Hampshire primary as early as mid-December of 1987, but after the Iowa caucuses, the pace picked up enormously. Suddenly we were joined by large national polling firms like Gallup and media organizations like the Wall Street Journal and ABC News.  As each day brought a new round of numbers from one or another pollster, we began to ask ourselves whether we were all just reinterviewing the same small group of people.

Pollsters conducting national surveys with samples of all adults or all registered voters never face this problem.  Even with the volume of national polling conducted every day, most people report never being called by a pollster.  In a population of over 240 million adults, the odds of being called to participate in a survey, even one with a relatively large sample like 2,000 people, are minuscule.  That remains true even if we account for the precipitous decline in the “response rate,” the proportion of households that yield a completed interview.  A wide array of technological and cultural factors has driven survey response rates to historic lows over the past few years, as this table from Pew shows clearly:

In 2012, fewer than ten percent of the households pollsters attempted to reach completed an interview.  Still, even at such a low response rate, the huge size of the United States population means that any individual has only a tiny chance of being selected from a sampling universe numbering some 24 million homes.  Even for a large survey of 2,000 people, the chance of any individual household being selected is a mere 0.000008.

Those odds change drastically when we narrow the universe of eligible people to “likely” voters in an upcoming New Hampshire Republican primary.  Even including people who claim they will vote but later do not, the total universe of eligible respondents in 2012 was probably just 300,000 people.   To reach that figure I started with the 248,485 ballots cast in the Republican primary.  To those voters we need to add people who reported that they would take part in the primary but did not actually turn out on Primary Day.  For that purpose I have used an inflation factor of 20 percent, which brings the estimated total number of self-reported likely Republican primary voters to 298,182 people.  I rounded that figure up to 300,000 in the tables below.

Over a dozen polling organizations conducted at least one survey in New Hampshire, according to the Pollster archive for the 2012 Republican primary.  In all there are 55 separate polls in the archive, representing a total of 36,839 interviews, or about 12 percent of the universe of likely voters.  If all 300,000 likely Republican primary voters had been willing to cooperate with pollsters in 2012, about one in every eight of them would have been interviewed.  At a much more realistic response rate like ten percent, there are actually fewer cooperating likely voters (30,000) than the total number of interviews collected, so some respondents must be contributing multiple interviews.  Can we estimate how many?

It turns out the chances a person will be interviewed once, twice, or more, or never at all, can be modeled using the “Poisson distribution.”  Most statistical distributions rely on two quantities, the average and the “variance,” but the Poisson distribution has the attractive feature that its mean and variance are identical.  Thus we need only know the average number of interviews per prospect to estimate how many people completed none, one, two, or more interviews.  Here are estimates of the number of interviews conducted per potential respondent at different overall cooperation rates.  At a 20 percent cooperation rate, for example, only 60,000 of the 300,000 likely voters are willing to complete an interview.  Dividing the number of interviews, 36,839, by that number of prospects gives an average of 0.614 interviews per prospect.


Now we plug those values into the Poisson formula to see how many people are interviewed multiple times during the campaign.
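A short sketch (figures taken from the text) turns those averages into the shares interviewed never, once, or more than once at each cooperation rate:

```python
from math import exp, factorial

INTERVIEWS = 36_839   # total interviews in the Pollster archive for 2012
UNIVERSE = 300_000    # estimated universe of likely GOP primary voters

def poisson_pmf(k, lam):
    """Poisson probability of exactly k interviews when the average is lam."""
    return lam ** k * exp(-lam) / factorial(k)

for rate in (1.00, 0.20, 0.10, 0.08):
    prospects = UNIVERSE * rate        # likely voters willing to cooperate
    lam = INTERVIEWS / prospects       # average interviews per prospect
    never = poisson_pmf(0, lam)
    once = poisson_pmf(1, lam)
    multiple = 1 - never - once
    print(f"cooperation {rate:4.0%}: never {never:5.1%}  once {once:5.1%}  2+ {multiple:5.1%}")
```

At full cooperation the mean is 36,839/300,000, or about 0.123 interviews per prospect; as the cooperation rate falls, the same interviews are spread over fewer willing respondents and the mean rises accordingly.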


In an ideal world where every one of the 300,000 likely primary voters is willing to be interviewed, 88.4% of them would never be interviewed, 10.9% would complete one interview, and 0.7% would be interviewed twice.  If response rates fall to  8-10%, only 20-30% of likely voters are never interviewed.

Though only a few prospects would be interviewed more than once in the ideal, fully-cooperative world, at more realistic response rates closer to what Pew reports, many people were interviewed multiple times in the run up to the 2012 primary.  If only eight percent of likely voters were willing to complete an interview, about a quarter of the prospects were interviewed twice, and one in five of them were interviewed at least three times.

We can use those estimates to see how the size and composition of the actual survey samples change as a function of response rate.


At 100% cooperation, obtaining nearly 37,000 interviews from 300,000 people means a small number, about 2,000 people, would be interviewed twice merely by random chance.  So those 37,000 interviews represented the opinions of  32,000 people who were interviewed once, and another 2,000 people interviewed twice.  As response rates fall, the total number of unique respondents, the height of each bar, declines, with a larger share of interviews necessarily coming from people interviewed multiple times.  At a 10% response rate the proportion of people interviewed multiple times just about equals the proportion of people interviewed only once.  Below that rate the proportion of people interviewed only once declines quickly.
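As a check on those head counts, here is a sketch of the full-cooperation case using the totals given earlier:

```python
from math import exp, factorial

INTERVIEWS, UNIVERSE = 36_839, 300_000
lam = INTERVIEWS / UNIVERSE            # about 0.123 interviews per prospect

def pmf(k):
    """Poisson probability of exactly k interviews at full cooperation."""
    return lam ** k * exp(-lam) / factorial(k)

once = UNIVERSE * pmf(1)    # people interviewed exactly once: roughly 32,600
twice = UNIVERSE * pmf(2)   # people interviewed exactly twice: roughly 2,000
print(round(once), round(twice), round(once + 2 * twice))  # interviews accounted for
```

The once-and-twice groups together account for nearly all 36,839 interviews; the small remainder comes from the handful of people interviewed three or more times.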

A Sea Change in Congressional Attitudes on Surveillance

About two years ago I posted an analysis of the vote in the House of Representatives on the so-called “Amash Amendment,” an attempt to restrict the bulk collection of telephone records by the National Security Agency. Observers at the time were startled when, despite widespread opposition from the Obama Administration and the House leadership, the amendment failed on a close vote of 217 to 205 in July, 2013.

This past week the House had another chance to vote on NSA surveillance, and this time it voted to put new restrictions on the mass collection of telephone data. Under the new rules, telephone and Internet providers will retain the data themselves and provide it only in response to government requests. The USA FREEDOM Act of 2015 passed the House by a vote of 338 to 88. Perhaps the May 7th decision by the Second Circuit Court of Appeals ruling the entire NSA program illegal encouraged the lopsided outcome.

Opponents of the surveillance programs were quick to argue that this Act does not go far enough in terms of reining in the activities of the FBI and NSA. Justin Amash himself, the author of the 2013 amendment, voted against the FREEDOM Act, arguing that:

H.R. 2048 threatens to undo much of the progress resulting from the Second Circuit’s opinion. The bill’s sponsors, and unfortunately some outside advocacy groups, wrongly claim that H.R. 2048 ends “bulk” collection. It’s true that the bill ends the phone dragnet as we currently know it—by having the phone companies themselves hold, search, and analyze certain data at the request of the government, which is worse in many ways given the broader set of data the companies hold—but H.R. 2048 actually expands the statutory basis for the large-scale collection of most data.

Even House leaders who opposed his 2013 amendment supported the FREEDOM Act.  This included both House Majority Leader Kevin McCarthy and Minority Leader Nancy Pelosi.  (The Speaker typically does not cast roll-call votes and did not vote on the FREEDOM Act.  He did choose to vote against the Amash Amendment, perhaps to indicate his strong opposition to that provision.)

I have compared the votes on the FREEDOM Act with those on the Amash amendment two years ago.


The top half of the table presents the actual votes; the lower half presents percentages based on Members’ votes on the Amash Amendment.  Only seven Members voted against both provisions.  Those who opposed the Amendment in 2013 voted overwhelmingly, 94 percent, in favor of the FREEDOM Act.  Most of the opposition to the Act came from supporters of the Amendment: nearly two in five of those Members opposed the Act, as did Amash himself.  New Members of the House also voted heavily in favor of the Act; only eleven of the 67 new Members of the 114th Congress, just 16 percent, voted no.

Is There a “Two-Term Itch?”

Over the weekend MSNBC commentator Steve Kornacki coined the phrase “two-term itch” to describe how Americans since World War II have usually opted to vote out the party of the presidential incumbent after two terms.  He concluded that this “history” might prove as great an obstacle to Hillary Clinton’s bid for the White House as any of her prospective Republican opponents.  Here is his evidence:


The record is quite impressive.  Only once since 1944, when George H. W. Bush beat Michael Dukakis in 1988, has the party of a two-term incumbent retained the Presidency in the following election.

On the other hand, we are talking about a sample of only seven elections, and hardly a random sample at that.  Though Mr. Kornacki’s hypothesis seems a reasonable one, we might want to test it against the full range of American history.

If there is a “two-term itch,” we should expect to see it throughout the history of Presidential Administrations.  There is no reason to think voters should tire of Presidents and their parties more quickly today than they did in 1816, when Republican James Monroe succeeded two-term Republican President James Madison, who had himself taken office after the two terms of Republican Thomas Jefferson. And voters famously elected Franklin Delano Roosevelt to four terms. So while Mr. Kornacki’s hypothesis has some merit on its face, it is easy to think of counter-examples.  Just how often have American voters succumbed to the two-term itch, and how often have they stuck with the horse they know?

Before we start we have to define the precise event we are trying to count.  For instance, the election of 1940 clearly qualifies as a test of the hypothesis, as it came eight years after FDR’s initial victory in 1932.  But what about the election of 1944?  Roosevelt was in the White House for the eight years prior to that election as well.  For analytical purposes I will count as a test of the “two-term itch” hypothesis any election following two consecutive terms in which the same political party held the Presidency.  Using that definition gives us this list of twenty-four cases dating back to Washington’s departure from office in 1797.


Clearly voters since World War II have felt the “itch” much more often than voters in earlier elections.  All told there are thirteen instances where the President’s party lost the office after two consecutive terms in the White House, compared to eleven where it retained the Executive Branch, hardly different from fifty-fifty.  (That count scores Vice-President John Adams’s succession to Washington in favor of the itch theory.) Woodrow Wilson’s unlikely election in 1912 alone generates two supporting cases: Theodore Roosevelt pursued his Bull Moose dream and split the natural Republican majority, allowing Wilson to take the White House.  Eight years later the Republicans were unified and easily defeated the Democratic nominee, James Cox.

Of the thirteen instances that fit the “itch” theory, seven come from elections after World War II.  Perhaps something about postwar American politics and presidential elections has led to such frequent alternation between the parties.

Under what conditions should we expect to find the parties alternating in office more frequently?  The obvious answer is during periods when the balance of power between the parties in the electorate is reasonably close.  The bigger the margin between the parties, the more often the dominant party will hold the Presidency.  Look back at the early years of the Republic, when Jefferson’s Republicans held the White House for the first forty years of the nineteenth century.  The modern Republican Party held sway over American politics for most of the years between the Civil War and the New Deal.  Since World War II, though, the parties have been much more competitive.

We have popular vote totals for all the Presidential elections in the list above starting with John Quincy Adams’s victory in 1824.  If we calculate the average margin of victory in Presidential elections before and after Truman’s victory we get:

Average Margin of Victory in Presidential Elections Listed Above

1824-1944: 13.3%
1948-2012: 6.7%

The margin of victory in postwar elections is just half that in the earlier history of the Republic, a difference that is statistically significant at any conventional level.  The more frequent alternation of parties we have observed since FDR reflects the closer balance of support between Democrats and Republicans.  While this does not support Kornacki’s notion of a “two-term itch” as a fixture of American politics, it is the “history” facing Hillary Clinton as she heads into 2016.


Everyone Has a Horse in the 2016 Race

Only fifteen percent of Republicans recently polled remain undecided for 2016.

Already the Presidential primary season is upon us with another crowded field of contenders for the Republican nomination.  Though we are a year away from the actual primary season, pundits have already weighed in on the prospects for Jeb Bush, Scott Walker, Ted Cruz, Rand Paul, Chris Christie, and the rest.  I was curious to see how this year’s contest compared with the race for the 2012 nomination.

Using the archive of polls at RealClearPolitics for both 2012 and 2016, I charted trends for four groups of voters.  The dark blue line represents the “establishment” candidate, Romney in 2012 and Bush in 2016.  The red line plots support for two “social conservatives,” Rick Santorum in 2012, and Santorum plus Mike Huckabee in 2016.  Preferences for all the other candidates are summarized by the light blue line, while the grey line portrays undecideds.


At this time four years ago, about two-thirds of Republicans had no preferred candidate.  Once the debates began, Republicans started choosing their champions.  Romney’s support improved during 2011, but most undecided Republicans were looking elsewhere.  At various times in 2011 Michele Bachmann, Rick Perry, and Herman Cain each zoomed to the top of the polls only to fall precipitously a few weeks later.  Newt Gingrich and Ron Paul showed more staying power, with campaigns that persisted until the end of the calendar year. Rick Santorum’s surprise tie with Romney in Iowa, followed by victories in the Minnesota and Colorado caucuses on February 7th, made the social conservative Romney’s last rival.  Romney’s victories in the March contests sealed Santorum’s fate.


If we look at how the early polling has gone for 2016, we see one striking change from four years ago.  Most Republicans today express a preference for one of the candidates, and the already diminished ranks of the undecided have fallen sharply since the end of last year. While only a third of Republicans had chosen a candidate at this point in the 2012 campaign, today more than five out of six already have someone they prefer.

Jeb Bush has been the beneficiary of some of those decisions.  He started out polling a bit behind where Mitt Romney stood four years ago but has since improved to the high teens, about equal to Romney’s support in late March of 2011.  However, as Romney found after the first debates in 2011, most Republicans now choosing candidates are gravitating toward someone other than Bush.

The religious candidates start out from a much stronger base in 2014-2015 than they did in 2010-2011.  It was only after the 2012 Iowa Caucuses that Santorum’s support rose much above three or four percent.  This time around some ten to twelve percent of the Republicans polled pick Huckabee (mostly) or Santorum.  If we add Ben Carson to these social conservatives their collective level of support rises to about twenty percent of Republicans.

The race for the 2012 nomination shows how the array of “others” can quickly collapse into a two- or three-person race after the early contests.  Assuming Bush has the resources to try to follow Romney’s path, we should expect the race to narrow to Bush, a social conservative, and the survivor of the “anybody but Bush” contest.  With so many candidates running essentially even in the early polling, we might expect to see a succession of non-Bush alternatives climb to the mountaintop in the polls before being pushed down the slope by the next contender.

Still, with the candidates much better identified in voters’ minds this far before the debates in August, this “candidate-of-the-week” pattern may not recur in the 2016 race.  The share of undecided voters fell quickly four years ago because Republicans were largely indifferent among the alternatives to Romney.  That led them to switch their support to whoever appeared to be the leading alternative to Romney at the time.  It may prove more difficult to convert a Cruz, Walker, or Carson supporter into a voter for Rubio.


Comparing Precipitation in States

The parched earth I see in televised news reports from California seems a universe removed from the lingering snow drifts outside my window near Boston. Here in the Northeast it feels like we have had more than our average share of rain and snowfall over the past decade.  That perception illustrates a larger reality.  Massachusetts has seen about five inches more precipitation on average since 1970 compared to levels dating back to the turn of the twentieth century.


No such good news seems to be in the cards for California.  Its average precipitation level runs about half that of Massachusetts, and unlike the Bay State, statistical tests show no trends or significant differences between older and more recent data.  The current drought appears driven by a succession of abnormally low years starting around 2007 and some historic temperature highs over the past three years. California reached its historical low for rainfall in 2013, a minuscule 7.93 inches.  In 2014, the temperature for California averaged four degrees (Fahrenheit) above the state’s 1901-2000 baseline. Temperatures in California have been running above the historical baseline for many years now.

Inches of precipitation give us the vertical dimension of a state’s total volume of rainfall; the other needed dimensions are measured by its area.  We can estimate the total volume of precipitation that fell on a state each year by multiplying together its inches of precipitation and its total area. Americans use the acre-foot as the standard unit of measure for large volumes of water.  An acre-foot represents the amount of water required to cover an acre-sized flat surface to a depth of one foot.  That works out to 43,560 cubic feet or about 326,000 gallons.

To estimate the total volume of precipitation for an average year in California, I’ll start with NOAA’s “baseline,” California’s average for the years 1901-2000 of 22.9 inches.  I divide that by twelve to convert to feet, then multiply the result by the area of California in acres to get

22.9 inches / (12 inches/foot) × (158,648 sq mi) × (640 acres/sq mi) ≈ 194 million acre-feet

of precipitation in an average year.  For much smaller but wetter Massachusetts the comparable figure works out to about a tenth that for California:

44.6 inches / (12 inches/foot) × (8,262 sq mi) × (640 acres/sq mi) ≈ 19.7 million acre-feet

Massachusetts is one-twentieth the size of California but sees about twice as much precipitation.  These factors offset to make the total volume for Massachusetts about a tenth that for the Golden State.
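The acre-foot arithmetic can be sketched in a few lines.  The precipitation baselines and state areas are the figures quoted in the text; the helper function is mine.

```python
# Convert a state's average precipitation (inches) and area (sq mi)
# into total annual precipitation volume in acre-feet.
ACRES_PER_SQ_MI = 640  # one square mile contains 640 acres

def acre_feet(inches, sq_mi):
    """Volume of `inches` of precipitation over `sq_mi` square miles."""
    return inches / 12 * sq_mi * ACRES_PER_SQ_MI  # /12 converts inches to feet

# NOAA 1901-2000 baselines and state areas quoted in the text.
california = acre_feet(22.9, 158_648)
massachusetts = acre_feet(44.6, 8_262)

print(f"California:    {california / 1e6:.0f} million acre-feet")
print(f"Massachusetts: {massachusetts / 1e6:.1f} million acre-feet")
print(f"MA/CA ratio:   {massachusetts / california:.2f}")
```

Dividing by twelve before multiplying out the area keeps the units honest: inches of depth become feet, and feet times acres gives acre-feet directly.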

This calculation gives us a rough estimate of the “supply” of water available from precipitation within a state. Dividing that total volume by the number of people living in a state yields a metric we can compare across states and over time, acre-feet of annual precipitation per capita:


At the beginning of the twentieth century Florida had the most abundant precipitation volumes per capita and Massachusetts the smallest.  Massachusetts also had a larger population (2.8 million) than any of the other states in the chart except Texas (3.0 million).  The graph portrays how the rapid growth of the Sunbelt destinations like California, Florida, and Texas reduced the per-capita supply of precipitation.  The other three states show slower declines in the availability of water because of their lower rates of population growth.  Nebraska’s combination of a large surface area and small population gives it ample supplies of water to fulfill its citizens’ needs and those of agriculture.  Massachusetts shows that even a small state with consequently limited water resources can manage if population grows slowly enough.  The state that clearly stands out the most is California.

Per-capita precipitation in the Golden State has now declined to Massachusetts levels, but California’s entire economy and society are geared to high-demand activities from almond groves to swimming pools.  Forty percent of California’s water is devoted to agriculture.  In Massachusetts that figure is less than ten percent, and most of that goes to irrigate cranberry bogs.  On its current path, California faces some stunningly difficult policy choices in the years ahead to adapt its economy to precipitation levels like those seen in Massachusetts.

Millennials Still Eschew Guns

The 2014 General Social Survey is now publicly available, so I have updated the charts that appear in my earlier postings on gun ownership by age and “generation.”

Younger millennials still show the lowest rates of gun ownership of any group in the survey.  Older millennials look more like their GenX age peers.  As I discussed in my earlier pieces on this subject, this could be a “life-cycle” effect where people buy guns as they age, though older generations did not show such trends.

It is still the case that millennials under thirty who were interviewed in 2012 and 2014 show higher ownership rates than those interviewed in 2008 or 2010.


However, the growth in ownership we saw among the youngest groups between 2008 and 2012 seems to have reached a plateau in 2014.

The Rhythm of Senate Elections

From reading media reports of the 2014 election results you might believe the nation has experienced a political change of cataclysmic proportions. Republicans won 23 of the 36 states where a senatorial election was held, enough to give them control of the Senate for the next two years. Yet we need look back only to another strong Republican year, 2010, to see nearly identical results. In that year the Republicans took 24 of the 37 states where elections were held.

Historically, the parties’ shares of Senate elections have swung back and forth quite substantially, with the last decade appearing unusually unstable.  Here are the results for Senate elections back to 1936:



The Democrats reached their peak in the 1964 Johnson landslide, though this election merely confirmed the Democrats’ dominance of the Senate “class” first elected six years earlier during the 1958 Eisenhower recession.  Republicans have won about two-thirds of the seats in half-a-dozen elections over the same period of time.  The 2014 result is quite similar to the Republican margins in 2010, 2002, 1980, 1952, and 1946.

If you look carefully at this graph, you’ll see a certain rhythm in these results, one created by the six-year length of a Senatorial term and the power of incumbency.  In fact, if we slide the graph forward six years and superimpose the results, we get this:


Now we see how the partisan split in a Senate “class” helps explain the variation from election to election.  Because incumbents have an advantage when it comes time for re-election, the partisan composition of a Senate class tends to repeat at six-year intervals.  The Republican edge in 1946 was replicated six years later when Eisenhower won the White House. Six years after that, a major political shift occurred: the largely Republican class of 1946 and 1952 was replaced by a largely Democratic class in the Democrats’ sweep of the 1958 off-year elections. The Republicans’ victories in 1980 constituted a similar shift for their party.
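The six-year superimposition amounts to pairing each election with the result for the same seats six years earlier.  A minimal sketch follows; the Republican seat shares are illustrative placeholders, not the actual values plotted in the charts.

```python
# Pair each Senate election's Republican share of seats won with the
# share the same class of seats produced six years earlier.  These
# shares are illustrative placeholders, not the actual results.
rep_share = {1946: 0.66, 1952: 0.64, 1958: 0.28,
             1964: 0.26, 1970: 0.37, 1976: 0.38}

# (share this year, share when this class of seats last ran)
lagged = {yr: (share, rep_share.get(yr - 6))
          for yr, share in rep_share.items()}

# 1952 reprises the class elected in 1946, and so on; years whose
# previous contest falls outside the data get None for the lag.
```

Plotting each year’s share against its six-year lag is exactly the superimposition shown in the chart: points cluster near the diagonal because a class’s partisan makeup tends to persist.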

Of course the partisanship of the class facing re-election just sets the stage on which each year’s set of electoral forces plays out.  We would expect that factors like broader trends in partisanship and the state of the economy might also influence the outcome of Senate elections.  I turn to those influences in the next article.