``Principal Fairness for Human and Algorithmic Decision-Making.''
  Abstract

Using the concept of principal stratification from the causal inference literature, we introduce a new notion of fairness, called principal fairness, for human and algorithmic decision-making. The key idea is that one should not discriminate among individuals who would be similarly affected by the decision. Unlike the existing statistical definitions of fairness, principal fairness explicitly accounts for the fact that individuals can be influenced by the decision. We introduce an axiomatic assumption that all groups are created equal once we account for relevant covariates. This assumption is motivated by a belief that protected attributes such as race and gender should not directly affect potential outcomes. Under this assumption, we show that principal fairness implies all three existing statistical fairness criteria, thereby resolving the previously recognized tradeoffs between them. Finally, we discuss how to empirically evaluate the principal fairness of a particular decision and the relationships between principal and counterfactual fairness criteria. (Last updated in September 2020)
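The core requirement described above, that decision rates should not differ across protected groups among individuals in the same principal stratum, can be sketched in a few lines. The sketch below is illustrative only and is not code from the paper: the records, group labels, and stratum names are hypothetical, and it assumes the principal strata (defined by the joint potential outcomes) are known, as they would be in a simulation; in real data only one potential outcome is observed per individual.

```python
from collections import defaultdict

# Hypothetical records: (principal stratum, protected group, binary decision D).
# Strata are defined by the joint potential outcomes; here we assume they are
# known (e.g., simulated), since they are not directly observable in practice.
records = [
    ("safe", "a", 1), ("safe", "a", 0), ("safe", "b", 1), ("safe", "b", 0),
    ("preventable", "a", 1), ("preventable", "a", 1),
    ("preventable", "b", 1), ("preventable", "b", 0),
]

def decision_rates(records):
    """Estimate P(D = 1 | stratum, group) for each (stratum, group) cell."""
    counts = defaultdict(lambda: [0, 0])  # (stratum, group) -> [sum of D, n]
    for stratum, group, d in records:
        counts[(stratum, group)][0] += d
        counts[(stratum, group)][1] += 1
    return {key: s / n for key, (s, n) in counts.items()}

def principal_fairness_gaps(records):
    """Within each stratum, the largest gap in decision rates across groups.
    Principal fairness requires every gap to be (approximately) zero."""
    rates = decision_rates(records)
    by_stratum = defaultdict(list)
    for (stratum, _group), rate in rates.items():
        by_stratum[stratum].append(rate)
    return {stratum: max(rs) - min(rs) for stratum, rs in by_stratum.items()}

print(principal_fairness_gaps(records))
```

In this toy data the decision is balanced across groups in the "safe" stratum but not in the "preventable" stratum, so only the first satisfies the within-stratum condition.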

  Related Paper

Imai, Kosuke, Zhichao Jiang, D. James Greiner, Ryan Halen, and Sooahn Shin. ``Experimental Evaluation of Computer-Assisted Human Decision-Making: Application to Pretrial Risk Assessment Instrument.''

© Kosuke Imai
 Last modified: Mon Sep 28 21:41:30 EDT 2020