Algorithmic and data-driven decisions and
recommendations are commonly used in high-stakes decision-making
settings such as criminal justice, medicine, and public policy. We
investigate whether it would have been possible to improve a
security assessment algorithm employed during the Vietnam War, using
outcomes measured immediately after its introduction in late
1969. This empirical application raises several methodological
challenges that frequently arise in high-stakes algorithmic
decision-making. First, before implementing a new algorithm, it is
essential to characterize and control the risk of yielding worse
outcomes than the existing algorithm. Second, the existing algorithm
is deterministic, and learning a new algorithm requires transparent
extrapolation. Third, the existing algorithm involves discrete
decision tables that are common but difficult to optimize over.
To address these challenges, we introduce the Average Conditional Risk
(ACRisk), which first quantifies the risk that a new algorithmic
policy leads to worse outcomes for subgroups of individual units and
then averages this over the distribution of subgroups. We also propose
a Bayesian policy learning framework that maximizes the posterior
expected value while controlling the posterior expected ACRisk. This
framework separates the estimation of heterogeneous treatment effects
from policy optimization, enabling flexible estimation of effects and
optimization over complex policy classes. We characterize the
resulting chance-constrained optimization problem as a constrained
linear programming problem. Our analysis shows that compared to the
actual algorithm used during the Vietnam War, the learned algorithm
assesses most regions as more secure and emphasizes economic and
political factors over military factors.
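As a rough, illustrative formalization of the description above (the notation $\tau_{\pi}$, $\pi_0$, $V$, and $\epsilon$ is ours and is not drawn verbatim from the papers listed below), the ACRisk of a candidate policy $\pi$ relative to the status quo policy $\pi_0$ averages, over the covariate distribution, the indicator that $\pi$ leaves the subgroup with covariates $X$ worse off:
\[
\mathrm{ACRisk}(\pi) = \mathbb{E}_{X}\bigl[\mathbb{1}\{\tau_{\pi}(X) < 0\}\bigr],
\qquad
\tau_{\pi}(X) = \mathbb{E}\bigl[Y(\pi(X)) - Y(\pi_{0}(X)) \mid X\bigr],
\]
where $Y(\cdot)$ denotes the potential outcome. Because $\tau_{\pi}$ is unknown, $\mathrm{ACRisk}(\pi)$ is itself a random quantity under the posterior, and the learning problem sketched here takes the chance-constrained form
\[
\max_{\pi \in \Pi}\; \mathbb{E}\bigl[V(\pi) \mid \text{data}\bigr]
\quad \text{subject to} \quad
\mathbb{E}\bigl[\mathrm{ACRisk}(\pi) \mid \text{data}\bigr] \le \epsilon,
\]
which, over a discrete policy class $\Pi$ such as the decision tables described above, corresponds to the constrained linear programming characterization mentioned in the abstract.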
Imai, Kosuke, Zhichao Jiang, D. James Greiner, Ryan Halen, and Sooahn Shin. (2023). ``Experimental Evaluation of Algorithm-Assisted Human Decision-Making: Application to Pretrial Public Safety Assessment.'' (with discussion) Journal of the Royal Statistical Society, Series A (Statistics in Society), Vol. 186, No. 2 (April), pp. 167-189. Read before the Royal Statistical Society.
Zhang, Yi, Eli Ben-Michael, and Kosuke Imai. ``Safe Policy Learning under Regression Discontinuity Designs with Multiple Cutoffs.''
Ben-Michael, Eli, D. James Greiner, Kosuke Imai, and Zhichao Jiang. ``Safe Policy Learning through Extrapolation: Application to Pre-trial Risk Assessment.''
Imai, Kosuke and Zhichao Jiang. (2023). ``Principal Fairness for Human and Algorithmic Decision-Making.'' Statistical Science, Vol. 38, No. 2 (July), pp. 317-328.
Ben-Michael, Eli, Kosuke Imai, and Zhichao Jiang. ``Policy Learning with Asymmetric Counterfactual Utilities.''