The regression discontinuity (RD) design
is widely used for program evaluation with observational data. The RD design enables the identification of the local average treatment effect (LATE) at the treatment cutoff by exploiting a known, deterministic treatment assignment mechanism.
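For concreteness, in the standard sharp design with running variable $X_i$, cutoff $c$, and potential outcomes $Y_i(1)$ and $Y_i(0)$ (notation ours), the identification result takes the familiar form

\[
\tau_{\mathrm{SRD}} = \lim_{x \downarrow c} \mathbb{E}[Y_i \mid X_i = x] - \lim_{x \uparrow c} \mathbb{E}[Y_i \mid X_i = x] = \mathbb{E}[Y_i(1) - Y_i(0) \mid X_i = c],
\]

which holds when the conditional expectation functions of the potential outcomes are continuous at $c$.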
The primary focus of the existing literature has been the development of rigorous estimation methods for the LATE. In contrast, we consider policy
learning under the RD design. We develop a robust optimization
approach to finding an optimal treatment cutoff that improves upon
the existing one. Under the RD design, policy learning requires
extrapolation. We address this problem by partially identifying the conditional expectation function of the counterfactual outcome under a smoothness assumption commonly used for the estimation of the LATE.
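As a stylized illustration of such extrapolation bounds (assuming, for this sketch, a Lipschitz class with constant $M$; the paper's exact smoothness class may differ): if treatment is assigned when $X \geq c$, the untreated conditional expectation function $\mu_0(x) = \mathbb{E}[Y(0) \mid X = x]$ is observed only for $x < c$, but Lipschitz continuity confines it above the cutoff to

\[
\mu_0(x) \in \big[\, \mu_0(c) - M (x - c),\; \mu_0(c) + M (x - c) \,\big], \qquad x \geq c,
\]

where $\mu_0(c)$ is recovered as the left limit from units below the cutoff.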
We then minimize the worst-case regret relative to the status quo policy.
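Schematically, writing $V(c'; \mu)$ for the population welfare of cutoff $c'$ under a candidate conditional expectation function $\mu$, $c_0$ for the status quo cutoff, and $\mathcal{M}$ for the partially identified set (notation ours), the learned cutoff solves

\[
\hat{c} \in \operatorname*{arg\,min}_{c' \in \mathcal{C}} \, \max_{\mu \in \mathcal{M}} \, \big\{ V(c_0; \mu) - V(c'; \mu) \big\}.
\]

Since $c_0$ itself attains a worst-case regret of zero, the population minimizer is weakly better than the status quo for every $\mu \in \mathcal{M}$; sampling error is what makes the guarantee probabilistic.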
The resulting new treatment cutoffs have a safety guarantee, enabling policy makers to limit the probability that the new cutoffs yield a worse outcome than the existing cutoff. Going beyond the standard single-cutoff case, we generalize the proposed methodology to the multi-cutoff RD design by developing a doubly robust estimator.
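In the multi-cutoff design, units in different groups (indexed by $G_i$) face different cutoffs, which creates overlap in the running variable and permits an augmented inverse-probability-weighting construction. A generic sketch of such a doubly robust score, in our notation rather than the paper's, is

\[
\hat{\Gamma}_i(w) = \hat{\mu}_w(X_i, G_i) + \frac{\mathbf{1}\{W_i = w\}}{\hat{e}_w(X_i, G_i)} \big( Y_i - \hat{\mu}_w(X_i, G_i) \big),
\]

whose average remains consistent for the mean counterfactual outcome if either the outcome regression $\hat{\mu}_w$ or the generalized propensity score $\hat{e}_w$ is correctly specified.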
We establish asymptotic regret bounds for the learned policy using semiparametric efficiency theory. Finally, we apply the proposed
methodology to empirical and simulated data sets.