A New Automated Redistricting Simulator Using Markov Chain Monte Carlo



Legislative redistricting is a critical element of representative democracy. A number of political scientists have used simulation methods to sample redistricting plans under various constraints in order to assess their impact on partisanship and other aspects of representation. However, while many optimization algorithms have been proposed, surprisingly few simulation methods exist in the literature. Furthermore, the standard algorithm has no theoretical justification, scales poorly, and is unable to incorporate fundamental substantive constraints required by real-world redistricting processes. To fill this gap, we formulate redistricting as a graph-cut problem and propose, for the first time in the literature, an automated redistricting simulator based on Markov chain Monte Carlo (MCMC). We show how this algorithm can incorporate various constraints, including equal population, geographical compactness, and status quo biases. Finally, we apply simulated and parallel tempering to improve the mixing of the resulting Markov chain. Through a small-scale validation study, we show that the proposed algorithm can accurately approximate a target distribution while outperforming the standard algorithm in terms of speed. We also apply the proposed methodology to data from New Hampshire and Pennsylvania, demonstrating that our algorithm can be applied to real-world redistricting problems. Open-source software implementing the proposed methodology is available. (Last revised: January 2018)
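To make the abstract's setup concrete, the following is a minimal, hypothetical sketch of MCMC sampling over redistricting plans on a small grid graph. It is not the paper's graph-cut algorithm; it illustrates only the general idea of a Metropolis-Hastings sampler whose target distribution penalizes unequal district populations (a Gibbs-style energy), with contiguity enforced by rejecting invalid proposals. All names, the grid size, and the penalty parameter `beta` are illustrative assumptions.

```python
# Hypothetical sketch: Metropolis-Hastings over district assignments on an
# n x n grid graph. NOT the paper's Swendsen-Wang-style graph-cut sampler;
# a simple single-cell "boundary flip" proposal for illustration only.
import math
import random

def neighbors(i, j, n):
    """4-neighbors of cell (i, j) on an n x n grid."""
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= i + di < n and 0 <= j + dj < n:
            yield (i + di, j + dj)

def is_contiguous(plan, district, n):
    """True if all cells assigned to `district` form one connected component."""
    cells = {c for c, d in plan.items() if d == district}
    if not cells:
        return False
    stack, seen = [next(iter(cells))], set()
    while stack:
        c = stack.pop()
        if c in seen:
            continue
        seen.add(c)
        stack.extend(v for v in neighbors(*c, n) if v in cells)
    return seen == cells

def pop_penalty(plan, n_districts, beta):
    """Gibbs energy penalizing deviation from equal district populations."""
    counts = [0] * n_districts
    for d in plan.values():
        counts[d] += 1
    target = len(plan) / n_districts
    return beta * sum((c - target) ** 2 for c in counts)

def mcmc_step(plan, n, n_districts, beta, rng):
    """One Metropolis-Hastings step: move a boundary cell to a neighboring district."""
    cell = rng.choice(sorted(plan))
    nbr_districts = {plan[v] for v in neighbors(*cell, n)} - {plan[cell]}
    if not nbr_districts:
        return plan  # interior cell; nothing to propose
    old = plan[cell]
    proposal = dict(plan)
    proposal[cell] = rng.choice(sorted(nbr_districts))
    # Reject proposals that break contiguity of the donor district
    # (the receiving district stays connected since `cell` is adjacent to it).
    if not is_contiguous(proposal, old, n):
        return plan
    delta = (pop_penalty(proposal, n_districts, beta)
             - pop_penalty(plan, n_districts, beta))
    if delta <= 0 or rng.random() < math.exp(-delta):
        return proposal
    return plan

# Usage: 4 districts on a 4x4 grid, started from vertical strips.
n, n_districts, beta = 4, 4, 0.5
rng = random.Random(0)
plan = {(i, j): j for i in range(n) for j in range(n)}
for _ in range(2000):
    plan = mcmc_step(plan, n, n_districts, beta, rng)
assert all(is_contiguous(plan, d, n) for d in range(n_districts))
```

The contiguity check makes this a rejection-within-Metropolis scheme on the space of valid plans; the paper's actual contribution is a more efficient proposal built from graph cuts, together with simulated and parallel tempering to improve mixing.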

© Kosuke Imai