It’s Not All Name Recognition

The polling organizations at Quinnipiac and Monmouth Universities this week released “favorability” scores for all the Democratic candidates. Respondents are asked, “Is your opinion of [person] favorable, unfavorable, or haven’t you heard enough about him/her?” These scores provide a richer view of the state of the race than the simple “who would you vote for today” type of question.

In general, the better-known candidates are also the better-liked. In the chart above the percentage of likely Democratic voters able to rate a candidate appears on the horizontal axis. The vertical axis measures “net favorability,” the difference between the percent of voters rating a candidate favorably and the percent rating the candidate unfavorably. The figures in the chart represent the averages of the two polls. The regression equation in the upper-left-hand corner of the chart shows that a ten-point increase in exposure brings the average candidate about a seven-point increase in net favorability.
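The exposure/favorability relationship the chart summarizes is an ordinary least squares fit. Here is a minimal sketch; the numbers below are hypothetical values chosen only to mimic the pattern described, not the actual poll averages.

```python
import numpy as np

# Hypothetical (exposure %, net favorability) pairs; illustrative only,
# not the actual Quinnipiac/Monmouth averages.
exposure = np.array([90, 80, 70, 60, 50, 40])
net_fav = np.array([55, 44, 41, 30, 26, 17])

# Ordinary least squares: net_fav ~ slope * exposure + intercept
slope, intercept = np.polyfit(exposure, net_fav, 1)
print(f"slope = {slope:.2f}, intercept = {intercept:.1f}")
# A slope near 0.7 corresponds to the post's finding: roughly +7 net
# favorability for every 10 additional points of exposure.
```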

At the top of the rankings is, no surprise, Joe Biden. Ninety-two percent of the Democrats polled could give an assessment of Biden, and he scored at the top of the list in favorability with 76 percent favorable versus just 15 percent unfavorable. Bernie Sanders is nearly as well known (89 percent) as Biden but not as well liked, with a net favorability score of 47. Two other candidates join Sanders at just under fifty percent net favorability, Elizabeth Warren and Kamala Harris. Harris’s favorability, however, substantially exceeds the value we would predict given her familiarity score. At the other end of the spectrum is New York City Mayor Bill de Blasio. About half the respondents said they knew him well enough to give him a rating; unfortunately for him, only an average of 16 percent of the Democrats in the two polls viewed him favorably versus 32 percent who viewed him unfavorably. (Removing him from the regression increases R² from 0.76 to 0.92 and reduces the standard error from 9.9 points to 5.4. The slope is largely unchanged, but the intercept naturally moves slightly upward since it no longer needs to accommodate de Blasio’s negative score.)

Here are the actual and predicted net favorability scores for every candidate from the model where de Blasio is omitted. Harris’s net favorability runs 10 points ahead of what her exposure predicts. She’s followed by Pete Buttigieg and Eric Swalwell at around six points. (Swalwell’s frequent appearances on MSNBC might have something to do with this.) At the other end of the spectrum is the remarkably poor showing for Beto O’Rourke. Fifty-five percent of Democratic voters say they can score Beto, but his net 21 percent favorability is nearly nine points below what we would expect to see given his familiarity. Sanders’s unfavorable numbers also put him near the bottom of this list. Eighty-nine percent of Democrats know enough about Sanders to give him a favorability score, but his 47 percent net favorability lags about eight points behind what we would expect given how well known he is.
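Ranking candidates by how far they sit above or below the fitted line is just a residual calculation. A minimal sketch follows; the slope, intercept, and candidate figures are stand-in values, not the numbers from the two polls.

```python
# Stand-in (exposure %, net favorability) values for three candidates.
candidates = {
    "Harris": (75, 49),
    "Sanders": (89, 47),
    "O'Rourke": (55, 21),
}

# Assumed fitted line from an exposure-vs-favorability regression.
SLOPE, INTERCEPT = 0.73, -12.0

# Residual = actual net favorability minus what exposure alone predicts.
residuals = {name: fav - (SLOPE * exp + INTERCEPT)
             for name, (exp, fav) in candidates.items()}

# Sort from "outperforming exposure" down to "underperforming."
for name, r in sorted(residuals.items(), key=lambda kv: -kv[1]):
    print(f"{name:10s} {r:+.1f}")
```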

Gerrymandering: Finding the Deviant Elections

During oral argument in Rucho v. Common Cause, the North Carolina gerrymandering case, the Supreme Court and the attorneys for the parties considered a variety of criteria to identify what Justice Stephen Breyer called “real outliers” in terms of election results. In the first article of this series I considered his half-the-vote/one-third-the-seats criterion. In the last posting I considered the notion of measuring deviations from some predicted baseline result. In that article I proposed a formula for estimating the baseline based on historical voting data:

Expected % Democratic Seats = 2 X (% Democratic Two-Party Votes) – 50

This formula provides a simple, yet historically accurate baseline for estimating the share of seats we should expect the Democrats to win given their share of the Congressional vote statewide. (I should note that this formula is entirely symmetric. We could use the Republican vote and seat shares and get the identical result.) Armed with a method for determining the baseline prediction, I turn now to a method for identifying deviant electoral results.
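As a quick sketch, the baseline and its symmetry property can be written as a one-line function (my illustration, not code from the original analysis):

```python
def expected_dem_seat_share(dem_vote_pct: float) -> float:
    """Baseline from the post: expected % Democratic seats = 2 * vote% - 50."""
    return 2 * dem_vote_pct - 50

# Worked example: 55% of the two-party vote predicts 60% of the seats.
print(expected_dem_seat_share(55))  # 60

# Symmetry check: the Democratic and Republican predictions sum to 100%.
v = 47.0
assert expected_dem_seat_share(v) + expected_dem_seat_share(100 - v) == 100
```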

Measuring the Deviation

How big a deviation from that baseline should be considered “significant” depends on both statistical and legal/constitutional criteria. I will only be talking about “significance” in the statistical sense. As we’ll see, the size of the deviation you are willing to tolerate depends on the proportion of outcomes you consider to be possibly unconstitutional.  In that sense, Justice Potter Stewart’s famous comment about identifying pornography, “I know it when I see it,” applies to gerrymandering just as well.

In the discussion about proportionality, plaintiff’s attorney Paul Clement raised, and dismissed, a criterion of one standard deviation away from some baseline for identifying gerrymandering. I have dealt with his objection concerning estimating a baseline result, but just one standard deviation is much too low a bar. As this graph shows, about 32 percent of elections should fall outside the one-standard-deviation criterion, far too many to qualify for judicial review. Statisticians often use two standard deviations as a minimal criterion for “statistical significance.” That would subject about five percent of the elections to additional scrutiny. Justice Breyer’s criterion works out to about one election in a hundred, which corresponds to a standard deviation difference of about 2.5.
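Those tail shares follow from the normal distribution and can be reproduced with the standard library (a sketch; the figures in the text are approximate):

```python
import math

def share_outside(k: float) -> float:
    """Share of a normal distribution more than k standard deviations
    from the mean, counting both tails."""
    return math.erfc(k / math.sqrt(2))

print(f"{share_outside(1.0):.1%}")  # about 32% of elections
print(f"{share_outside(2.0):.1%}")  # about 5%, the usual significance bar
print(f"{share_outside(2.5):.1%}")  # roughly one election in a hundred
```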

Now it turns out the regression method also generates an estimate of the “standard deviation” of the predicted values. This quantity is called the “standard error,” and for the regression using state-years as the unit of analysis, the estimated standard error for the percent of seats won is 10.2. So, using two standard errors as a minimum criterion, we should look for results where the difference between the actual share of seats won and the prediction from the formula above is at least 20 percentage points. Here are the elections held since 2010 where the actual outcome differs from the predicted value by at least 20 points. The “standardized deviation” column measures the absolute value of the quantity (Actual – Predicted)/(Standard Error). The larger the value, the further the election deviated from the prediction. Using the absolute value treats both parties symmetrically.
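Combining the baseline formula with the 10.2-point standard error gives a simple flagging rule. A minimal sketch with a hypothetical election:

```python
STANDARD_ERROR = 10.2  # regression standard error, in seat-share points

def standardized_deviation(actual_seat_pct: float, dem_vote_pct: float) -> float:
    """|actual - predicted| / standard error, using the 2 * votes - 50 baseline."""
    predicted = 2 * dem_vote_pct - 50
    return abs(actual_seat_pct - predicted) / STANDARD_ERROR

# Hypothetical state: Democrats win 50% of the vote but only 25% of the seats.
z = standardized_deviation(actual_seat_pct=25, dem_vote_pct=50)
print(round(z, 2))  # 2.45: more than two standard errors, so it gets flagged
flagged = z >= 2
```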

All three elections identified by the “Breyer criterion” also appear in this list. However, a number of elections fail his criterion yet show actual seat outcomes that differ from the predicted values by at least two standard errors. Connecticut has persistently sent five Democrats to Congress since 2010, when the vote suggests there should have been at least one Republican in the delegation. Connecticut would not be identified by Breyer’s criterion, but a reasonable observer would conclude that state’s Congressional district lines appear to have been gerrymandered in the Democrats’ favor. Democrats also got “too many” seats in Maryland in 2014, but that pattern did not recur in other elections since the 2010 Census. Similar “one-offs” like VA12, NJ18, and MI12 might also be attributed to chance rather than systematic discrimination via gerrymandering.

Most of the other elections in the list show an excess number of Republicans winning House seats given the statewide vote for Democrats. Both North Carolina and Pennsylvania appear twice, as does Ohio, whose map was just thrown out by a federal court.

By either Breyer’s criterion or by measuring deviations from a predicted baseline, the map created by the North Carolina legislature qualifies as gerrymandered. Ohio and Connecticut also deserve judicial scrutiny.


Gerrymandering and “Proportionality:” Setting the Baseline

In my last post I considered what I called the “Breyer criterion” for identifying partisan gerrymandering: a party winning half the vote in a state receives only a third of the seats. That criterion identified just seven races out of the nearly eight hundred I examined, or 0.9 percent of the state-level elections to Congress where candidates of both major parties stood. Breyer proposed his criterion to identify “real outliers,” elections that are “really extraordinary.” A roughly one-in-a-hundred criterion probably fits that definition.

However the Court also discussed the general concept of how to measure “proportionality” between seats and votes. The attorney for the plaintiffs, Paul Clement, brought up the notion of a “one standard deviation from proportional representation” criterion mostly as a straw man. Leaving aside his use of “proportional representation,” which as the oral argument shows is fraught with constitutional issues, Clement then claimed that it is impossible to know the correct baseline from which to measure seat outcomes.

So I think the fundamental problem is there is no one standard deviation from proportional representation clause in the Constitution. And, indeed, you can’t talk even generally about outliers or extremity unless you know what it is you’re deviating from.

Clement’s argument ignores decades of political science research into the relationship between votes won and seats awarded.  Studies dating back to at least 1948 have theorized about and examined empirically the relationship between seats and votes.

Measuring the Baseline

I’ve written a number of times about the relationship between votes won and seats awarded in “first-past-the-post” or “plurality” electoral systems like ours.  These types of electoral systems routinely award the majority winner of the vote a disproportionately greater share of seats. Here is a simple example, using national electoral results for Congress.

The dark blue line represents the “best-fit” relationship between the percent of votes won by the Democrats in each election year and the percent of House seats the party won using simple “ordinary least squares” regression. The historical relationship is substantially steeper than the thin line in the chart representing parity, or when a party’s share of seats equals its share of votes.1

Using simple regression the equation that best describes this relationship is, in round numbers,2

% Seats Democrat = 2 X (% Votes Democrat) – 50

So, for instance, in a year when the Democrats win 55 percent of the vote, they should receive on average (2 X 55) – 50 = 60 percent of the seats.

Since gerrymanders take place at the state level, data from national elections do not provide the correct basis for determining whether a particular state’s election deviated “too far” from some predicted baseline. To develop such a baseline for Congressional elections I turn again to the MIT database of Congressional races I used in the preceding blog post.  Here is the relationship between votes and seats for state-year combinations. Each point represents a general election in a given state in a particular year, like Alabama in 1976.

A number of races resulted in one party or the other winning all the seats. These unanimous outcomes pose mathematical problems for our method, so I excluded those 84 races in the calculation of the slope and intercept for the regression line in the chart.

(The horizontal lines come from states with small numbers of districts where the number of outcomes is mathematically restricted. For instance, a state with four districts will often return a 3-1 result for one party. That leads to clustering at values of 25 or 75 percent.)

Using state-level election results gives us a model that is numerically quite similar to the simple method based on election years above:

% Seats Democrat = 2.3 X (% Votes Democrat) – 66

Here the slope of the line is slightly steeper than two and the intercept slightly more negative. In practice, though, the difference between these results and predictions using the simpler model from national-level data is negligible. The lines are so close that I could not represent them both on the chart.

Given the convergence between these two sets of estimates, I propose that

The best “baseline” estimate for the division of seats given the division of the vote in state-level Congressional elections is

% Seats Democrat = 2 X (% Votes Democrat) – 50

That formula uses simple numbers like two and fifty and produces results nearly identical to those using the estimated regression coefficients of 2.3 and -66.
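One way to see how little rounding the coefficients matters is to compare the two lines across the competitive range of vote shares (a quick check of my own, not from the original analysis):

```python
def fitted(v):   # estimated state-level regression
    return 2.3 * v - 66

def rounded(v):  # the proposed round-number baseline
    return 2 * v - 50

for v in (45, 50, 55, 60):
    print(v, round(fitted(v), 1), rounded(v))

# Largest gap between the two predictions in this range:
max_gap = max(abs(fitted(v) - rounded(v)) for v in range(45, 61))
print(round(max_gap, 1))  # a couple of seat-share points at most
```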

The regression method also produces a measure of the “standard deviation” of actual outcomes around the predicted values. I use that quantity in the next post to identify potential gerrymanders using the deviation from proportionality method.




1The results for the last two Democratic off-year House victories, retaking the chamber in 2006 and 2018, both fall on this parity line. Given the historical relationship, the Democrats did not receive the usual reward in the House for their victories in the popular vote. The elections in 2012 and 2018 also show significant negative effects for Democrats.

2Complete results for both models here.

Gerrymandering and the “Breyer Criterion”

On March 26th the Supreme Court heard oral argument in Rucho v. Common Cause. The case concerns whether North Carolina’s post-2010 electoral map so disadvantages Democratic candidates that it should be ruled unconstitutional. This case raises many Constitutional and legal issues that fall outside the purview of this blog; for instance, whether the Republican-controlled North Carolina legislature showed the intent to discriminate against Democrats in their choice of map. However some of the issues raised during oral argument lend themselves to empirical examination.

A persistent concern during oral argument was whether “proportionality” should be used as a Constitutional standard to determine if a particular electoral outcome might be ruled unconstitutional.  In one of these discussions Justice Stephen Breyer proposed that “when a party wins a majority of the votes in a state, … but the other party gets more than two-thirds of the seats” the result could be declared unconstitutional.

How frequently might Justice Breyer’s criterion apply to actual state-level results comparing votes cast for Congress and the proportion of seats awarded? The Court has an incentive to establish a highly restrictive criterion to deter future filings by state parties hoping to overturn an unfortunate result. How restrictive is the Breyer criterion? How often might we see electoral results flagged as potentially unconstitutional by the workings of this rule?

What Elections to Analyze

To address these questions, I begin with an invaluable dataset compiled by the MIT Election Lab. It comprises election returns for all candidates who ran for Congress between 1976 and 2018. Using these candidate records as a basis, I created a new aggregated dataset containing results by party for each combination of state and election year.

In the process I eliminated a number of records from consideration. First, because it is impossible to gerrymander a state with just one Congressional district, I excluded any state-year combinations when the state was apportioned into a single district. Examples include Alaska and Wyoming throughout the 1976-2018 period, and states like Montana and Nevada in the years when they had but one district.

I further eliminated states with just two Congressional districts. In those cases an election would fit the criterion if one party won over half the vote and lost both seats. However that outcome would occur by random chance a quarter of the time if both seats had even odds of going to either party.  Courts would likely not be willing to rule a particular seat distribution was unconstitutional when the result could have happened by chance a quarter of the time. As a result I also removed state-years when the state was apportioned only two seats.
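The one-in-four figure comes from treating the two seats as independent coin flips; it can be confirmed by enumerating the equally likely outcomes:

```python
from itertools import product

# All four equally likely seat outcomes in a two-district state, assuming
# each seat is an independent 50/50 contest.
outcomes = list(product("DR", repeat=2))

# The party that narrowly won the popular vote still loses both seats
# in exactly one of the four outcomes.
p_lose_both = sum(o == ("R", "R") for o in outcomes) / len(outcomes)
print(p_lose_both)  # 0.25
```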

Even this set of races needs further refinement to use as a basis to examine Breyer’s criterion. The canonical notion of a two-party race between a Democrat and a Republican dissolves once we look at the data.  Most races include minor candidates and not every seat has both a Democratic and a Republican contender.  Many seats were left uncontested over this period by one or the other major party, especially in the South.  And with the introduction of “top-two” voting in California and Washington, general elections can pit two Democrats or two Republicans against one another.

So I further limited the sample by selecting only Congressional elections with both a Democratic and a Republican contender. That left a total of 7,701 eligible races which I then aggregated to the level of state-years, e.g., Alabama in 1976. Some state-year combinations then had fewer than three contested races; those observations were also excluded. That left me with a total of 799 state-years for the analysis to follow.
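The filtering steps above can be sketched as a small pipeline. The record fields here (state, year, n_districts, dem, rep) are hypothetical; the MIT dataset uses its own column names and one row per candidate.

```python
# Hypothetical per-race records, one per district contest.
races = [
    {"state": "AL", "year": 1976, "n_districts": 7, "dem": True, "rep": True},
    {"state": "WY", "year": 1976, "n_districts": 1, "dem": True, "rep": True},   # single district
    {"state": "NH", "year": 1976, "n_districts": 2, "dem": True, "rep": True},   # two districts
    {"state": "AL", "year": 1976, "n_districts": 7, "dem": True, "rep": False},  # uncontested
]

eligible = [
    r for r in races
    if r["n_districts"] >= 3   # drop one- and two-district states
    and r["dem"] and r["rep"]  # require both major-party contenders
]
print(len(eligible))  # 1

# The surviving races would then be aggregated into state-year totals,
# keeping only state-years with at least three contested districts.
```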

Justice Breyer’s Criterion

So in this sample of nearly eight hundred Congressional outcomes, how often do we find the particularly egregious combination where a party won at least half the Congressional vote in a state but was awarded fewer than a third of the seats?

In practice Breyer’s criterion turns out to be highly restrictive.  Of the 799 Congressional elections that qualified for my sample, only seven (0.9 percent) would have fit his rule.  Moreover, only four seats were contested in the three Alabama races and the one in South Carolina. Assuming even odds of each seat electing a Democrat, but a Democratic majority overall, the chance of getting an outcome with at least three Republican seats is 1/8.1 Intuitively that seems too low a bar for declaring a particular result unconstitutional.
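The 1/8 figure (derived in the footnote) can be checked by enumerating the coin flips directly:

```python
from itertools import product

# Three tied districts, each a 50/50 flip; the Democrats already hold the
# fourth seat and a one-vote popular majority (see the footnote).
flips = list(product("DR", repeat=3))
p_three_republican = sum(f == ("R", "R", "R") for f in flips) / len(flips)
print(p_three_republican)  # 0.125, i.e., one time in eight
```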

Of more interest is that three of the seven Alabama seats, and two seats in the South Carolina race, were uncontested. The totals for these states represent the votes cast and seats awarded in the contested districts. Leaving seats uncontested may itself be an indicator of gerrymandering. If maps are too distorted, it may make little sense for a party to invest resources in races where their opponents are certain to be victorious.

Pennsylvania and North Carolina are another story entirely, though.

Breyer’s criterion flags three elections in those states, all of which took place after the 2010 Census. Since then both states have become poster children for gerrymandering. The Pennsylvania map that took effect in 2012 awarded Republicans fully thirteen of the state’s eighteen seats while the Democrats won the popular vote statewide by a small margin. The Pennsylvania Supreme Court ruled in January 2018 that the Congressional map was so unfair that it violated the state’s own Constitution. The Court threw out the map and later commissioned Stanford Law School professor Nate Persily to draw a new one. The 2018 election using the redrawn district lines resulted in a 9-9 tie, compared to the 13-5 advantage Republicans had maintained since 2010.

North Carolina is, of course, the state at issue in Rucho v. Common Cause, so it is appropriate that it should be flagged here as well. Twice since the 2010 Census the Democrats have won a small majority of the popular vote but been awarded only three or four of the state’s thirteen Congressional seats. So if Justice Breyer wanted to establish a criterion that would pick out the most egregious partisan gerrymanders, his one-half-the-vote/one-third-the-seats rule seems to fit the requirement.

Justice Breyer’s rule was not the only criterion discussed in oral arguments that day. Both plaintiff’s attorney Paul Clement and Justice Neil Gorsuch discussed a measure based on the difference between a state’s actual seat distribution and some measure of what its “proportionate” share might be. I turn to that subject in my next posting.


1Imagine a state with four districts. In three of them the Democrats and Republicans tie. In the fourth seat the Democrats win by one. That gives them a one-vote majority in the popular vote and one seat. If we flip a coin for each of the three tied districts, a result with three Republicans occurs one time in eight. I thank my friend Jim Stodder for making me rethink the calculation of this probability.

Guns Still Hold Little Allure for Younger Americans and Women

Thirty-six percent of 18-49 households with a man present own guns as compared to just sixteen percent in those households with no man in residence. Gun ownership has persistently fallen across the generations, but the proportion of people who own a gun in any specific cohort remains fairly stable over the cohort’s lifetime.

With the release this week of the results from the 2018 NORC General Social Survey it seemed a good time to revisit my analyses of gun ownership patterns across the generations. This table replicates the ones found in those earlier reports with the sample expanded to include all interviews from 1972 through 2018.

Here are the ownership rates for households based on the respondent’s age cohort, or “generation,” and the respondent’s actual age at the time of the interview. This method lets us see differences among generational cohorts and also measure how gun ownership changes as people age. There are few representatives of “Generation Z” in the newest data, those born in 1997 or later who began coming of age in the mid-2010s.

Once again the major findings are clear. Gun ownership has persistently fallen across the generations, but the proportion of people who own a gun in any specific cohort remains fairly stable over the cohort’s lifetime. Ownership falls off among the elderly, in part because men, who are more likely to own guns, have a shorter life expectancy than women.

The oldest generations have shown little change in gun ownership rates as their members aged. That may not be true for Millennials, who may be buying guns as they get older. Even with that uptick among the oldest Millennials, they still remain below their Generation-X forebears.

Pretty much all this growth in ownership among Millennials took place in households with a man present. Left to their own devices, women are much less likely to own a gun. This table presents the rate of gun ownership by type of household.  I have limited it to adults 18-49 interviewed in 2000 or later.  This pool of younger, more recent respondents represents the future of gun ownership.

Thirty-six percent of younger households with a man present own guns as compared to just sixteen percent of households where no man resides. Married households show the highest rates of gun ownership, but unmarried men own guns at nearly the same rate. At the other end of the spectrum, unmarried women and women with children have especially low ownership rates, just thirteen and eleven percent respectively. In stark contrast, twenty-three percent of households composed of two or more women and no men own guns. That is not a statistical anomaly; it is based on a sample of 833 households and is significantly greater than the 11-13 percent figures for the other two groups.
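As a rough check on that significance claim, here is a two-proportion z-test sketch. The post reports n = 833 only for the multi-woman households; the comparison group’s size of 1,000 is my assumption, and the result should be read accordingly.

```python
import math

def two_prop_z(p1: float, n1: int, p2: float, n2: int) -> float:
    """Two-proportion z statistic with a pooled standard error."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 23% of 833 multi-woman households vs. an assumed 13% of an assumed
# 1,000 households in the comparison group.
z = two_prop_z(0.23, 833, 0.13, 1000)
print(round(z, 1))  # well above 2, hence "significant" under these assumptions
```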

While the National Rifle Association and other gun-advocacy groups have targeted women over the past few years, these results suggest their efforts have not borne much fruit among the younger women in this country.



Win Early or Go Home

Only twice in nine elections has the eventual Democratic nominee not won one of Iowa or New Hampshire. Iowa remains fairly wide open, but Bernie Sanders may have the inside track in New Hampshire.

The pace of attrition is swift in Presidential primaries. Usually only a few candidates remain standing after March 1st, so a victory in Iowa or New Hampshire can put a candidate far ahead of his or her rivals.  In fact the eventual Democratic nominee won one or both of these states in seven of the past nine contested Presidential nominations.

We can see a few different patterns over these campaigns. In 2000 and 2004 the “establishment” candidates, Al Gore and John Kerry, began strong and finished strong. Hillary Clinton’s campaigns resemble Walter Mondale’s effort in 1984.  Both Clinton and Mondale were establishment choices but faced strong opposition from outsiders Gary Hart, Barack Obama, and Bernie Sanders.

2020 has no establishment Democratic candidate, so might it follow one of the other patterns?

Regional Appeals in Iowa and New Hampshire

In the two outlier elections, 1972 and 1992, the strong regional appeal of a few candidates masked the underlying strength of the eventual nominee.

Maine Senator Edmund Muskie came second in Iowa in 1972 by just 0.3% of the vote behind “uncommitted.” When he arrived in neighboring New Hampshire, Muskie was expected to build on his Iowa win with a strong victory among his fellow New Englanders. Sadly for Muskie, he soon found himself the target of an infamous smear campaign involving confederates of Richard Nixon and the then virulently right-wing newspaper, the Manchester Union Leader. Confronted with a barrage of falsehoods, Muskie delivered an emotional speech, dubbed the “crying speech” in some quarters, outside the offices of the Union Leader.  Muskie did win New Hampshire and the Illinois primary a few weeks later, but it was the second-place finisher in the Granite State, George McGovern, who went on to win the 1972 Presidential Nomination.

In 1992 Iowa’s senior Senator, Tom Harkin, tossed his hat into the ring and promptly collected 77% of the Iowa caucus vote. After Super Tuesday he never contended again. New Hampshire that year was won by the junior Senator from neighboring Massachusetts, Paul Tsongas, but it was the strong second-place performance of Arkansas governor Bill Clinton that caught the eye of political observers. Tsongas beat Clinton by only 33% to 25%. The “Comeback Kid,” as Clinton billed himself, went on to win a number of primaries on Super Tuesday and eventually consolidated his hold on the nomination.

Regional Appeals in 2020?

In an earlier article I discussed Iowa’s apparent preference for Midwestern candidates. Sherrod Brown’s decision not to run for President leaves only Amy Klobuchar, the senior Senator from Minnesota, and South Bend, Indiana, mayor Pete Buttigieg as candidates who can lay claim to Midwestern roots. So far, though, neither candidate has caught on in the early Iowa polling. Klobuchar polls in the single digits, and Buttigieg has yet to break onto the list.

Averaging the current Iowa polls gives former Vice President Joe Biden the lead at 27.3%, followed by Sanders at 15.4%, and California Senator Kamala Harris, former Texas Congressman “Beto” O’Rourke, and Massachusetts Senator Elizabeth Warren all with about ten percent. Name recognition obviously has a substantial influence on these figures, and the race could change substantially should Biden decide not to run.

At the moment, though, Iowa seems fairly open.  That is less true for New Hampshire.

Bernie Sanders won a dramatic upset victory over Hillary Clinton in the 2016 New Hampshire primary by a margin of 60-38%.  There is no reason to expect him not to do well again in 2020.  His current polling average in the Granite State is about 21%, not far behind Joe Biden’s 25%.  Harris and Warren come in around 9-10%.

New Hampshire has certainly preferred fellow New Englanders over the years. Candidates from neighboring states have won five of the eight competitive New Hampshire primaries held starting in 1972. Despite his travails, Muskie did win the 1972 primary, as did Massachusetts Governor Michael Dukakis in 1988 and Massachusetts Senator Paul Tsongas in 1992. New Hampshire voters in 2004 spread their votes across three candidates from New England including eventual nominee Massachusetts Senator John Kerry.

Yet with the exception of Sanders’s impressive victory in 2016, candidates from neighboring states have not won New Hampshire by overwhelming margins. And both Muskie’s and Sanders’s victories came in largely two-person races, which will certainly not be true next year.

Sanders’s strength in New Hampshire comes at the expense of Elizabeth Warren. Together Sanders and Warren poll in the low to mid thirties, about the total vote won by Dukakis and Tsongas. However Sanders outpolls Warren 21-9 on average. If Warren hopes to follow in the footsteps of Howard Dean in 2004 she’ll need to up her game.

All told, though, the chances look good that Bernie Sanders could win the New Hampshire primary again and secure one of those two valuable slots in the early states.

Too Soon for Impeachment?

House Democrats have been resisting pressure from the most intensely anti-Trump wing of their party, which thinks impeachment should begin now. Congressional Democrats have instead taken a more institutionalist approach. Using its Article I oversight powers, the House has opened an array of investigations by committees like Intelligence, Judiciary, and Oversight.

Just last weekend Judiciary Committee Chair Jerrold Nadler remarked, “Before you impeach somebody, you have to persuade the American public that it ought to happen. You have to persuade enough of the – of the opposition party voters.” Nadler’s concerns are supported by the results of a new Quinnipiac poll. 59% of Americans oppose beginning the impeachment process compared to 35% in favor. However fully 66% of Democrats support starting the impeachment process now.

From this poll, Chairman Nadler’s goal of persuading members of the opposite party seems far away.  Only six percent of Republicans favor impeachment now, and just a third think Robert Mueller is conducting a “fair” investigation.

Very different outcomes awaited the two Presidents who faced impeachment in my lifetime. Richard Nixon was forced to resign his office in 1974 rather than face a Senate trial. Bill Clinton was tried and acquitted by a Senate with a Republican majority. Nixon’s popularity collapsed over 1973; Clinton appeared impervious to his travails.

Public Support for Richard Nixon

This chart plots “net” presidential job-approval (approve minus disapprove) for Presidents Nixon and Clinton using data from the Gallup Poll. To put them on a common time scale, the x-axis measures days elapsed since Inauguration.

Richard Nixon was re-elected President in 1972 with 60.7 percent of the popular vote, and polls taken in the weeks just after the inauguration found about 65 percent approved of Nixon’s performance, while just 25 percent disapproved.  That net approval rating of +40 exceeds Clinton’s own impressive starting position by ten percentage points.  However the public’s honeymoon with Richard Nixon ended quickly and dramatically over the months of 1973, while their love affair with Clinton actually grew stronger.

Just ten days after Nixon’s second inauguration, all seven of the Watergate burglars had either pleaded guilty or been convicted of their crimes. About a week later the Senate voted 77-0 to create a Select Committee on Presidential Campaign Activities under the chairmanship of Sam Ervin. The White House tried to slow the pace of events by announcing the departures of three key advisors, Chief of Staff H. R. Haldeman, Domestic Counselor John Ehrlichman, and White House Counsel John Dean, on April 30th (day 100 in the chart). A few weeks later Dean began his stunning testimony before the Select Committee (day 156), followed by the even more stunning revelation from Deputy Assistant to the President Alexander Butterfield that all Oval Office conversations had been recorded. Legal wrangling over the tapes ended up before the Supreme Court.

The pale red line marks the so-called “Saturday Night Massacre.” On October 20, 1973, Nixon instructed Attorney General Elliot Richardson to fire Watergate Special Prosecutor Archibald Cox. Richardson and his deputy, William Ruckelshaus, both resigned rather than comply with the President’s request, leaving the task up to Solicitor General Robert Bork. Despite Cox’s ouster, it took only eleven days before Nixon was pressured to appoint a new Special Prosecutor, Leon Jaworski.  Nixon’s popularity continued to decline after the Massacre, but not at the astounding pace it had fallen in the months before.

Public Support for Bill Clinton

The contrast between Richard Nixon and Bill Clinton could not be more stark. Clinton began his second term with a 30-point margin in net approval. That margin actually increased over his term despite Independent Counsel Kenneth Starr’s repeated investigations. Starr’s original remit was the so-called “Whitewater” investigation concerning a real-estate deal in Arkansas. That investigation expanded to include “Travelgate,” concerning alleged corruption in the White House Travel Office. Starr even took seriously right-wing claims about the alleged murder and subsequent cover-up of Clinton aide Vince Foster. Starr later concurred with the verdict that Foster’s death was a suicide.

Ultimately, though, Starr’s investigations led him to Clinton’s alleged sexual escapades with Paula Jones and Monica Lewinsky. Starr subpoenaed Clinton to testify before the grand jury, where the President meditated on the meaning of “is.” Starr believed Clinton committed perjury during the grand jury hearing and reported this finding to Congress, where it became the basis for the articles of impeachment the House approved on December 19th. Alongside the perjury charge the House approved an obstruction of justice count and sent the Bill of Impeachment to the Senate, where a trial began on January 7, 1999.

Despite all this furor, Clinton’s net job-approval rating hovered between +25 and +35 from his second inaugural through impeachment. Ironically, Clinton’s best performance on Gallup’s job-approval question came just as the Senate began to deliberate on the two impeachment counts. Opinion did fluctuate more during the impeachment process, but average net approval remained roughly constant throughout. (If only public opinion mattered, the Republicans’ decision to impeach Clinton would make little sense. Perhaps Republican leaders believed they could hold their caucus together in the Senate, but Clinton’s overwhelming popularity made it easier for defectors like Susan Collins (R-ME) and Richard Shelby (R-AL) to cross the aisle on Clinton’s behalf.)

Implications for a Trump Impeachment

If we add Donald Trump’s net job approval to the chart, two things stand out right away. First, while more people disapprove than approve of Trump’s performance in office, he has not reached the depths of unpopularity Richard Nixon did. Second, like “Slick Willie” Clinton’s, Donald Trump’s popularity seems remarkably impervious to events. In such circumstances it makes sense for Democratic leaders to take their time pursuing impeachment. The public is not ready.

Can Midwesterner Amy Klobuchar Find Support in Iowa?

Iowa Democrats’ preference for Midwesterners may work to Amy Klobuchar’s advantage.

Whatever strategic reasoning lay behind Amy Klobuchar’s all-but-certain decision to run for President, the first-in-the-nation Iowa caucuses surely played a role in it. Historically, Midwestern candidates have had a slight advantage in these contests.

No Midwestern candidates competed in the 2016 caucuses.

In 1988, both Representative Richard Gephardt of Missouri and Senator Paul Simon of Illinois out-polled Massachusetts Governor Michael Dukakis, the eventual nominee.  Four years later Iowa’s own Senator Tom Harkin took a run at the presidency and swamped his competitors.  Gephardt ran again in 2004, but both he and perennial left-wing challenger Ohio Congressman Dennis Kucinich could only muster twelve percent between them against their major competitors, Senators John Kerry from Massachusetts and John Edwards of North Carolina.  Four years later the candidate from neighboring Illinois, Senator Barack Obama, bested both Edwards and Hillary Clinton in the Iowa caucuses.

Iowans seem to display a slight preference for viable Midwestern candidates. Harkin’s favorite-son candidacy in 1992 needs to be set aside, but both Gephardt and Simon showed considerable strength in Iowa. Even Barack Obama’s victory in 2008 fits the pattern. While his position as a Senator from Illinois was hardly his most noteworthy feature as a candidate, it may have swayed some Iowans who would otherwise have voted for Edwards or Clinton. Unfortunately the 2016 campaign did not provide any Midwestern contenders whose performance we might evaluate.

The downside for Klobuchar is that, among these Midwestern candidates, only Barack Obama won the nomination. On the other hand, the winner of the Iowa caucuses in each of the past three competitive primaries, Kerry, Obama, and Hillary Clinton, went on to become the nominee. Can Klobuchar employ “Minnesota nice” to attract a similar plurality among Iowans next February? In the two Iowa polls released so far she is running at three percent. Even in a field as crowded as 2020’s, she would need to win well over a quarter of the caucus vote to have a chance at victory.

Can Trump Recover?

Compared to recent presidents, only George H. W. Bush was as unpopular when standing for re-election as Donald Trump is today. Bush lost.

Recent opinion polls, especially ones taken after the government shutdown had worn on for a few weeks, show Donald Trump considerably “underwater” in job-approval polls. On December 21st, the day before the shutdown began, the running average of Trump’s job-approval numbers at FiveThirtyEight put him at 42.2 percent approving and 52.7 percent disapproving, for a “net approval” score of -10.5 (= 42.2 - 52.7).

Since then the gap between Trump’s approval and disapproval scores has substantially widened. Today, January 27th, FiveThirtyEight reports his approval score has fallen three points to 39.3 percent, while his disapproval score has increased substantially to 56.0 percent. His net approval score has now fallen to -16.7.
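The net-approval arithmetic in these two snapshots is easy to verify. A quick sketch in Python, using the FiveThirtyEight figures quoted above (the function name is just illustrative):

```python
def net_approval(approve: float, disapprove: float) -> float:
    """Net approval: percent approving minus percent disapproving."""
    return round(approve - disapprove, 1)

# December 21st, pre-shutdown snapshot
pre = net_approval(42.2, 52.7)   # -10.5
# January 27th snapshot
post = net_approval(39.3, 56.0)  # -16.7
print(pre, post, round(post - pre, 1))  # -10.5 -16.7 -6.2
```

The last figure shows the shutdown-era swing: a 6.2-point deterioration in net approval over roughly five weeks.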

Few recent Presidents have plumbed the depths of public opinion the way Donald Trump has. This chart presents the net approval scores of recent Presidents as measured by Gallup at three points in their first terms: at inauguration, at the first midterm election, and at the time of the next Presidential election.

Every President except Trump began his term of office with a positive net approval score that deteriorated over the next two years. George W. Bush showed the smallest decline because his approval had received a thirty-point boost after the 9/11 attacks. Being popular at inauguration is no guarantee of continued popularity, as Ronald Reagan and, especially, Barack Obama discovered. However, both were able to recover and return to positive territory by the Presidential election that followed.

Only George H.W. Bush has stood for re-election with a net approval score as deeply negative as Trump’s current Gallup figure. Bush, of course, lost the 1992 election to Bill Clinton. George W. Bush managed to be re-elected with a net approval score of zero. The remaining Presidents in the chart all had positive net approval scores at the time of re-election and were sent back to Washington for a second term.

Donald Trump has not seen a net approval score above zero since a week or two after he was inaugurated in 2017.  It is hard to fathom how he can recover even to a net zero score like George W. Bush had in 2004, never mind reaching into positive territory. Both Clinton and Obama managed to win re-election with net approval scores in the single digits.  Even that relatively low hurdle seems pretty distant for Donald Trump at this point.

One reason we are unlikely to see a substantial movement in Trump’s direction is the large number of people who report “strongly” disapproving of Trump’s performance. The newest Washington Post poll shows that most people’s opinions of Trump fall into either the strongly approve (28 percent) or strongly disapprove (49 percent) categories. Only nine percent report they “somewhat” approve of his performance, and another nine percent “somewhat” disapprove.

Obama could recover from his 2010 “shellacking” because fewer people chose the “strongly disapprove” option when asked about him before the 2010 midterms. Trump has less room to win people back because opinions about him are much more firmly set in stone.

Donald Trump could, of course, confound all his observers as he did in 2016. However, it is unlikely he will be able to run against as unpopular a candidate as Hillary Clinton turned out to be.