Measurement
error threatens the validity of survey research, especially when
studying sensitive questions. Although list experiments can help
discourage deliberate misreporting, they may also suffer from
nonstrategic measurement error due to flawed implementation and
respondents’ inattention. Such error violates the assumptions of
the standard maximum likelihood regression (MLreg) estimator for
list experiments and can result in misleading inferences, especially
when the underlying sensitive trait is rare. We address this problem
by providing new tools for diagnosing and mitigating measurement
error in list experiments. First, we demonstrate that the nonlinear
least squares regression (NLSreg) estimator proposed in
Imai (2011) is robust to
nonstrategic measurement error. Second, we offer a general model
misspecification test to gauge the divergence of the MLreg and
NLSreg estimates. Third, we show how to model measurement error
directly, proposing new estimators that preserve the statistical
efficiency of MLreg while improving robustness. Finally, we revisit
empirical studies shown to exhibit nonstrategic measurement error
and demonstrate that our tools readily diagnose and mitigate the
bias. We conclude this article with a number of practical
recommendations for applied researchers. The proposed methods are
implemented through an open-source software package.