Please welcome the newest contributor to High School SCOTUS: Jason Frey!
Jason is a high school junior and Supreme Court enthusiast from Natick, Massachusetts. Below is his first post, which explores a new method of measuring partisan gerrymandering as it relates to Rucho v. Common Cause (2019). He was inspired to write this because his final AP Government project last year entailed studying gerrymandering in Massachusetts.
Rucho v. Common Cause and the Partisan Bias Measurement
On March 26, the Supreme Court heard oral arguments in Rucho v. Common Cause, a partisan gerrymandering case concerning North Carolina's 2016 congressional map. North Carolina has endured several map-related fiascos, a saga that began with the 2011 map drawn after the 2010 decennial census. Soon after that map was finalized, it was challenged under the Voting Rights Act of 1965 as an unconstitutional racial gerrymander. Fast forward five years: state Senator Robert Rucho oversaw the creation of a new map, one that was almost instantly disputed by Common Cause, the North Carolina Democratic Party, and the League of Women Voters. The lawsuits were consolidated into a single case filed in the United States District Court for the Middle District of North Carolina. The three-judge panel ruled for the challengers, affirming their standing to sue and holding that the map was an unconstitutional partisan gerrymander. Rucho then appealed to the Supreme Court.
Heard recently alongside Lamone v. Benisek, another partisan gerrymandering case, Rucho v. Common Cause is now being considered by the Supreme Court. The appellees' arguments echo those of other notable anti-gerrymandering advocates: all contend that the 14th Amendment's Equal Protection Clause prohibits maps that favor one political party, and that it is inherently unfair to skew an election. The real significance of Rucho, however, is that the appellees have introduced a new calculation for measuring the extent of partisan gerrymandering, the Partisan Bias Measurement (PBM). For decades the justices have searched for a bright-line rule to determine when gerrymandering becomes unacceptable, and the PBM, which measures deviations from the norm, is a viable candidate.
The term “gerrymandering” was coined in 1812 after Massachusetts Governor Elbridge Gerry approved a salamander-shaped state senate district, engineering a win for the Democratic-Republicans. Gerrymandering secures a party's win through two methods, “cracking” and “packing”: 1) “cracking” splits voters of a certain demographic across many districts so they cannot form a majority anywhere, while 2) “packing” sacrifices a few districts by stuffing them with one demographic, e.g. African Americans, allowing the surrounding districts to gain unnatural power. Racial gerrymandering was deemed unconstitutional in the 1990s, when the Supreme Court grounded its decisions in the amended Voting Rights Act and the Equal Protection Clause of the 14th Amendment. But now, voters and courts are grappling with partisan gerrymandering.
There are two main ways to measure partisan gerrymandering: the efficiency gap calculation (EG) and the PBM. The efficiency gap tallies each party's “wasted votes”: every vote cast for the losing party in a district, plus every vote the winning party received beyond the bare majority it needed to win. The difference between the two parties' total wasted votes is then divided by the total number of votes cast in the election. The resulting percentage shows how much one party is favored; anything above 8–10% is deemed an unfair partisan gerrymander. The efficiency gap's main flaw is that a deeply partisan state can yield a high EG without being gerrymandered, but that issue can be mitigated by comparing the efficiency gap to the breakdown of party affiliation in the state. Today, the efficiency gap is the prevailing method for quantifying gerrymandering.
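The arithmetic above can be sketched in a few lines of code. This is a hypothetical two-party example with made-up district totals, not real North Carolina data; the district list and the simple-majority threshold are my own illustrative choices.

```python
def wasted_votes(votes_a, votes_b):
    """Wasted votes in one district: all of the loser's votes, plus
    the winner's votes beyond the bare majority needed to win."""
    needed = (votes_a + votes_b) // 2 + 1  # simple-majority threshold
    if votes_a > votes_b:
        return votes_a - needed, votes_b   # A wins; all of B's votes are wasted
    return votes_a, votes_b - needed       # B wins; all of A's votes are wasted

def efficiency_gap(districts):
    """Net wasted votes (party A minus party B) over all votes cast."""
    wasted_a = wasted_b = total = 0
    for a, b in districts:
        wa, wb = wasted_votes(a, b)
        wasted_a += wa
        wasted_b += wb
        total += a + b
    return (wasted_a - wasted_b) / total

# Invented map: party A is "cracked" in three districts and "packed" into a fourth.
districts = [(45, 55), (45, 55), (45, 55), (85, 15)]
print(f"efficiency gap: {efficiency_gap(districts):+.1%}")  # +35.5%
```

On this toy map the gap far exceeds the 8–10% threshold, which is exactly what a cracked-and-packed plan looks like under this measure.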
While the efficiency gap measures wasted votes relative to the total votes cast, the partisan bias measurement models potential election outcomes. The PBM plots a seats-votes curve centered on a 50-50 origin: the x-axis holds a party's hypothetical share of the statewide vote (for example, 60% for a Republican and 40% for a Democrat), and the y-axis holds the share of seats that party would win at that vote share. The numerical result is the difference between the seat share a party would control at exactly 50% of the vote and the 50-50 ideal. I have a few concerns about the PBM, as its calculations are somewhat convoluted. It also relies on the debatable assumption that a party winning exactly 50% of the vote should control exactly 50% of the seats. Perhaps the Supreme Court will decide whether this assumption is correct.
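One common way to approximate the bias number is a uniform-swing calculation: shift every district's vote share by the same amount until the statewide vote splits 50-50, then compare the resulting seat share to 50%. The sketch below follows that assumption with invented district shares and equal turnout per district; it is an illustration of the idea, not the appellees' actual model.

```python
def partisan_bias(dem_shares):
    """Seat share the party would win at a 50-50 statewide vote,
    minus the 50% ideal.  Assumes equal turnout in every district,
    so the statewide share is the mean of the district shares."""
    statewide = sum(dem_shares) / len(dem_shares)
    swing = 0.5 - statewide                 # same uniform shift everywhere
    swung = [s + swing for s in dem_shares]
    seat_share = sum(s > 0.5 for s in swung) / len(swung)
    return seat_share - 0.5

# Invented "packed" map: one overwhelming win, four narrow losses.
shares = [0.90, 0.47, 0.47, 0.47, 0.47]
print(f"partisan bias: {partisan_bias(shares):+.0%}")  # -30%
```

At an even statewide vote this party would hold only one of five seats, a 30-point deficit from the 50-50 ideal, which is the kind of deviation the measurement is meant to flag.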
It's a bit troubling that in the North Carolina maps, once the curve moves away from the (50, 50) origin, one party controls a smaller and smaller share of seats relative to the share of votes it received. That seems like an issue, because it looks like the creators of the model are saying, “we want you to pay attention to how close the curve passes to (50, 50), but beyond that, we don't really care about a significant difference between votes and seats.” In other words, nowhere on any of the graphs is the slope actually one. This is a problem because the creators' fundamental argument for the PBM is that it shows when the percentage of votes matches the percentage of seats. Maybe the calculation accounts for clusters of large relative population in cities, or maybe it just needs a different transformation on one of the axes. Either way, this discrepancy must be addressed.
My last issue with this model is that there is no real threshold for establishing when partisan bias exceeds an “unacceptable” amount. Since the PBM is just a difference that is subjectively compared to models from previous elections, there is no set cutoff. In contrast, the efficiency gap takes gerrymandering more fully into account, because its wasted-vote count directly reflects the strategies of cracking and packing, and anything within a ten-point margin is deemed acceptable. The creators must find such a threshold for the PBM to be widely used.
Although I have concerns about the PBM, it is imperative for nonpartisan commissions and state legislatures to measure partisan gerrymandering. The creators of this measurement may have to tweak it in response to criticisms like mine, or to whatever the Supreme Court says. Nevertheless, I see many positives in the PBM: we can finally create a graphic model of gerrymandering; we can evaluate each cycle's level of gerrymandering against previous election cycles; and we can use big data and computing power to model theoretical outcomes. The Court will likely discard the partisan bias measurement, as the justices seem averse to statistical models of gerrymandering, but it is an interesting discussion nonetheless.