Math’s War on Gerrymandering

I read an article the other day in MIT Technology Review, Mathematicians are deploying algorithms to stop gerrymandering, that discussed how a group of mathematicians have created tools (Python and R applications, links below) that can be used to test whether state redistricting maps are fair or not.

The best introduction to what these applications can do is the 2021 I. E. Block Community Lecture by Jonathan Christopher Mattingly, captured on YouTube video.

For those outside the States: members of the US House of Representatives are elected from state districts. The term gerrymandering was coined in 1812 over a district map created for Massachusetts, signed by Governor Elbridge Gerry, in which one district took the form of a salamander and all but assured that the seat would go Democratic-Republican vs. Federalist in the next election (source: Wikipedia entry on Gerrymandering). This redistricting process is done in every state every 10 years, after the US Census Bureau releases its decennial census results, which determine the number of representatives each state elects to the US House of Representatives.

Funny thing about gerrymandering: both the Democrats and the Republicans have done it in the past and will most likely do so in the future. Once district maps are approved, they typically stay that way until the next census.

Essentially, what the mathematicians have done is create a way to generate a vast number of district maps under a specific set of constraints/guidelines, and then use this ensemble of maps to characterize the Democrat-Republican split of a trial election based on some recent election result.

One can see in the figure above the number of Democrats that would have been elected, using data from the 2012 and 2016 election results, under four distinct maps vs. a histogram representing the distribution over all the maps the mathematicians’ systems created. The four specific maps indicated on the histograms are:

  • NC2012 – thrown out by the state courts as being unfair
  • NC2016 – one the NC legislature came up with, also thrown out as unfair since it’s politically equivalent to the NC2012 map
  • Bipartisan Judges – one that a group of independent judges came up with, and
  • Remedial (NC2020) – one submitted by the mathematicians, which they deemed “fairer”

These North Carolina congressional district maps illustrate how geometry is not a fail-safe indicator of gerrymandering. The NC 2012 map, with its bizarre district boundaries, was deemed by the courts to be a racial gerrymander. The replacement, the NC 2016 map, looks quite different and tame by comparison, but was deemed to be an unconstitutional political gerrymander. Analysis by Duke’s Jonathan Mattingly and his team showed that the 2012 and 2016 maps were politically equivalent in their partisan outcomes. A court-appointed expert drew the NC 2020 map.

The algorithms (in Python, GitHub repo for GerryChain, and in R, GitHub repo for redist) take as input an election map, which combines US census blocks (physical groupings of population defined by the US Census Bureau) with some prior election results (the Democrat-Republican votes for a particular election within those census blocks). Using this data plus a list of districting constraints, such as compactness requirements, minimal splitting of counties (state political units), population equivalence, etc., the apps generate a multitude, or ensemble, of district maps for a state. FYI, districting constraints differ from state to state.
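For a concrete sense of what driving these tools looks like, here’s a minimal sketch using GerryChain’s documented building blocks. The input file and its column names (TOTPOP, CD, the vote columns) are placeholders for whatever a state’s actual data provides, not real file/field names:

from functools import partial
from gerrychain import Graph, Partition, MarkovChain, Election
from gerrychain.updaters import Tally, cut_edges
from gerrychain.proposals import recom
from gerrychain.constraints import within_percent_of_ideal_population
from gerrychain.accept import always_accept

# Hypothetical input: a dual graph of census units carrying population,
# a current district assignment ("CD"), and prior election vote columns.
graph = Graph.from_json("nc_blocks.json")

# Re-tally a prior election (here, the 2012 US House vote) per district.
election = Election("USH12", {"Dem": "USH12_DEM", "Rep": "USH12_REP"})

initial = Partition(
    graph,
    assignment="CD",  # column holding each unit's current district
    updaters={
        "population": Tally("TOTPOP", alias="population"),
        "cut_edges": cut_edges,  # a rough compactness proxy
        "USH12": election,
    },
)

ideal_pop = sum(initial["population"].values()) / len(initial.parts)
proposal = partial(recom, pop_col="TOTPOP", pop_target=ideal_pop,
                   epsilon=0.02, node_repeats=2)

chain = MarkovChain(
    proposal=proposal,
    constraints=[within_percent_of_ideal_population(initial, 0.02)],
    accept=always_accept,
    initial_state=initial,
    total_steps=10_000,
)

# For each sampled map, record how many districts the Democrats would win.
dem_seats = [step["USH12"].wins("Dem") for step in chain]

A histogram of dem_seats is, in essence, the distribution shown in the figures above.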

Once they have this ensemble of district maps that adhere to the state’s specified districting constraints, one can compare any sample districting map to the histogram and see whether that specific map is fair or not. “Fairness” here means the map would produce roughly the same Democrat-Republican split as the most common outcome across the ensemble of maps.
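Given the per-map Democratic seat counts from the sketch above, checking where a proposed map lands in the ensemble’s distribution is straightforward. The function below is a toy stand-in for the formal outlier analyses the researchers actually use:

from collections import Counter

def outlier_report(dem_seats, proposed_seats):
    """dem_seats: Democratic seat counts from the ensemble;
    proposed_seats: the count the map under review would have produced."""
    histogram = Counter(dem_seats)
    total = len(dem_seats)
    # Fraction of ensemble maps at least as extreme as the proposed map
    # (two-sided) -- a crude proxy for "is this map an outlier?"
    as_low = sum(c for s, c in histogram.items() if s <= proposed_seats)
    as_high = sum(c for s, c in histogram.items() if s >= proposed_seats)
    tail = min(as_low, as_high) / total
    mode = histogram.most_common(1)[0][0]
    print(f"Most common outcome: {mode} Democratic seats")
    print(f"Proposed map: {proposed_seats} seats (tail fraction {tail:.3f})")

A map sitting far out in one tail of the histogram, as NC2012 and NC2016 did, is the statistical signature of a gerrymander.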

The latest process by which districting maps can be created is documented in a research article, Recombination: A Family of Markov Chains for Redistricting. But most of the prior generations seem to use a tree structure together with a Markov chain approach. At the leaves of the tree are the census blocks, and the tree hierarchy algorithmically represents the different districting hierarchies.

Presumably, the Markov chain encodes a method to represent the state’s districting constraints. What the algorithm does is traverse the tree of census blocks, using the Markov chain and randomness, to create district maps by splitting a branch of the tree (= a district) off somewhere in the hierarchy, above the census blocks.
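My loose reading of that splitting step, sketched below with networkx. This is illustrative pseudo-ReCom under my own assumptions, not the paper’s actual code: merge two adjacent districts, draw a random spanning tree over their census units, then cut a tree edge that leaves both halves population-balanced.

import random
import networkx as nx

def recom_step(graph, assignment, dist_a, dist_b, epsilon=0.02):
    """One recombination-style move: re-split two merged districts.
    graph: networkx Graph of census units with a "pop" node attribute;
    assignment: dict mapping node -> district label."""
    nodes = [n for n, d in assignment.items() if d in (dist_a, dist_b)]
    merged = graph.subgraph(nodes).copy()
    total_pop = sum(graph.nodes[n]["pop"] for n in nodes)
    target = total_pop / 2

    # Random spanning tree via random edge weights + minimum spanning tree.
    for u, v in merged.edges:
        merged.edges[u, v]["w"] = random.random()
    tree = nx.minimum_spanning_tree(merged, weight="w")

    # Try each tree edge: removing it yields two components; accept the
    # cut if both halves are within epsilon of equal population.
    for u, v in list(tree.edges):
        tree.remove_edge(u, v)
        side = nx.node_connected_component(tree, u)
        side_pop = sum(graph.nodes[n]["pop"] for n in side)
        if abs(side_pop - target) <= epsilon * target:
            new_assignment = dict(assignment)
            for n in nodes:
                new_assignment[n] = dist_a if n in side else dist_b
            return new_assignment
        tree.add_edge(u, v)
    return None  # no balanced cut found; caller retries with a new tree

Because each cut of a spanning tree leaves both sides connected, every map produced this way automatically has contiguous districts, which is part of why the tree approach works so well.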

Doing this randomly, over a number of iterations, produces a group, or ensemble, of proper districting maps that can be used to build the histogram for a specific election result. (I suggest reading the research article above for more information on how this works.)

One can see the effect of different election results on the distribution of outcomes that would have occurred under the current ensemble of maps. For example, USH12 uses the census-block voting results from the 2012 North Carolina US (Congressional) House election, and GOV16 uses the results from the 2016 North Carolina governor’s election.

It’s somewhat surprising that the US Supreme Court has ruled that districting is a state issue and not subject to federal constitutional oversight. Not sure I agree, but I’m no constitutional scholar/lawyer. So, all of the legal disputes surrounding states’ redistricting maps have been resolved in state courts.

But what the mathematicians have done is provide the tools needed to create a multitude of districting maps, so that, using prior election results at the census-block level, one can see whether any new redistricting map is representative of what one would get by drawing 100s or 1000s of proper redistricting maps.

Let’s hope this all leads to fairer state and federal elections in the future.

Comments?
