An important challenge for survey research
is eliciting truthful answers to sensitive questions (e.g.,
racial prejudice, drug use, sexual behavior). In this project, I
develop new statistical methods that can be used to achieve this
goal. The first survey methodology is the item count technique, or
list experiment, in which respondents are asked to report the total
number of items on a list to which they answer affirmatively, rather
than answering each item separately. For the randomly selected control
group, the list includes only non-sensitive items. For the treatment
group, the list contains a sensitive item in addition to these
non-sensitive items. I show how to conduct multivariate analyses in
this setting by developing new estimators and applying them to the
measurement of racial prejudice in the United States.
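The basic identification logic of the list experiment can be sketched with simulated data: because the groups are randomized, the non-sensitive counts have the same expectation in both, so the difference in mean item counts identifies the sensitive item's prevalence. This is a minimal illustration with hypothetical numbers, not the multivariate estimators developed in the papers below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: the control group sees J = 3 non-sensitive items;
# the treatment group sees the same items plus one sensitive item
# (held here by 20% of respondents).
control = rng.binomial(3, 0.5, size=500)
treatment = rng.binomial(3, 0.5, size=500) + rng.binomial(1, 0.2, size=500)

# Difference-in-means estimator of the sensitive item's prevalence:
# randomization ensures the non-sensitive counts cancel in expectation.
prevalence_hat = treatment.mean() - control.mean()

# Standard error for the difference of two independent means.
se = np.sqrt(treatment.var(ddof=1) / len(treatment)
             + control.var(ddof=1) / len(control))

print(f"estimated prevalence: {prevalence_hat:.3f} (SE {se:.3f})")
```

No individual response is revealed: the enumerator observes only each respondent's total count, never which items were answered affirmatively.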
The second survey methodology I study is the endorsement experiment, which is used to measure the level of support for particular political actors. Here, respondents are asked to express their opinion about a policy endorsed by a randomly selected political actor. These responses are then contrasted with those from a control group that receives no endorsement. I show how to analyze such survey experiments by developing a Bayesian hierarchical measurement model that uses item response theory to estimate support levels. I apply this methodology to recent survey experiments in Afghanistan and Pakistan in order to measure spatial variation in citizens' attitudes towards combatants and Islamist militant groups, respectively.

The third survey methodology is the randomized response method, which asks respondents to use a randomization device, such as a coin flip, whose outcome is unobserved by the enumerator. By introducing random noise, the method conceals individual responses and thereby protects respondent privacy. In this project, I review the standard designs available to applied researchers, develop multivariate regression techniques for substantive analyses, propose power analyses to help improve research designs, present new robust designs based on less stringent assumptions than those of the standard designs, and make all of these methods available through open-source software. I illustrate some of these methods with an original survey about militant groups in Nigeria.
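The privacy-protection logic of the randomized response method can be sketched with one common coin-flip ("forced response") design, using hypothetical simulated data rather than the specific designs analyzed in the papers below: on heads the respondent simply reports "yes," on tails the respondent answers truthfully, so a "yes" report reveals nothing definitive about any individual, yet the prevalence remains identified in aggregate.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 2000
true_status = rng.binomial(1, 0.15, size=n)  # hypothetical sensitive trait
coin = rng.binomial(1, 0.5, size=n)          # 1 = heads, unobserved by enumerator

# Forced-response design: heads -> report "yes" regardless of the truth;
# tails -> report the truthful answer. The enumerator sees only `report`.
report = np.where(coin == 1, 1, true_status)

# Under this design P(yes) = 1/2 + (1/2) * pi, so the prevalence
# estimator inverts that relationship: pi_hat = 2 * mean(report) - 1.
pi_hat = 2 * report.mean() - 1
print(f"estimated prevalence: {pi_hat:.3f}")
```

The privacy gain is paid for in efficiency: the injected noise inflates the estimator's variance relative to a direct question, which is one motivation for the power analyses mentioned above.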
Papers that develop methods:
Bullock, Will, Kosuke Imai, and Jacob
Shapiro. (2011). ``Statistical Analysis of
Endorsement Experiments: Measuring Support for Militant Groups in
Pakistan.'' Political Analysis, Vol. 19,
No. 4 (Autumn), pp. 363-384. (lead article)
Imai, Kosuke. (2011). ``Multivariate Regression Analysis
for the Item Count Technique.'' Journal of the
American Statistical Association, Vol. 106, No. 494 (June),
pp. 407-416. (featured article)
Blair, Graeme and Kosuke Imai. (2012). ``Statistical Analysis of
List Experiments.'' Political Analysis,
Vol. 20, No. 1 (Winter), pp. 47-77.
Blair, Graeme, Kosuke Imai, and Jason
Lyall. (2014). ``Comparing and Combining List and
Endorsement Experiments: Evidence from
Afghanistan.'' American Journal of Political
Science, Vol. 58, No. 4 (October),
pp. 1043-1063.
Imai, Kosuke, Bethany Park, and Kenneth
Greene. (2015). ``Using the Predicted Responses
from List Experiments as Explanatory Variables in Regression
Models.'' Political Analysis, Vol. 23, No. 2
(Spring), pp. 180-196. Translated into Portuguese and reprinted in
Revista Debates, Vol. 9, No. 1.
Blair, Graeme, Kosuke Imai, and Yang-Yang
Zhou. (2015). ``Design and Analysis of the
Randomized Response Technique.'' Journal of the
American Statistical Association, Vol. 110, No. 511
(September), pp. 1304-1319.
Blair, Graeme, Winston Chou, and Kosuke
Imai. (2019). ``List Experiments with
Measurement Error.'' Political Analysis,
Vol. 27, No. 4 (October), pp. 455-480.
Chou, Winston, Kosuke Imai, and Bryn
Rosenfeld. (2020). ``Sensitive Survey Questions with
Auxiliary Information.'' Sociological Methods &
Research, Vol. 49, No. 2 (May), pp. 418-454.
Paper that empirically validates the methods:
Rosenfeld, Bryn, Kosuke Imai, and Jacob
Shapiro. (2016). ``An
Empirical Validation Study of Popular Survey Methodologies for
Sensitive Questions.'' American Journal of Political
Science, Vol. 60, No. 3 (July), pp. 783-802.
Papers that describe applications:
Lyall, Jason, Graeme Blair, and Kosuke
Imai. (2013). ``Explaining Support for
Combatants during Wartime: A Survey Experiment in
Afghanistan.'' American Political Science
Review, Vol. 107, No. 4 (November), pp. 679-705. Winner of
the Pi Sigma Alpha Award.
Lyall, Jason, Kosuke Imai, and Yuki
Shiraito. (2015). ``Coethnic Bias and Wartime
Informing.'' Journal of Politics, Vol. 77,
No. 3 (July), pp. 833-848.
Hirose, Kentaro, Kosuke Imai, and Jason
Lyall. (2017). ``Can
Civilian Attitudes Predict Insurgent Violence?: Ideology and
Insurgent Tactical Choice in Civil War.'' Journal
of Peace Research, Vol. 54, No. 1 (January), pp. 47-63.
Winner of the Nils Petter Gleditsch Article of the Year
Award.
Open-source software:
Shiraito, Yuki, and Kosuke Imai. ``endorse: R Package for
Analyzing Endorsement Experiments.'' Available through the
Comprehensive R Archive Network and GitHub. 2012-2014.
Blair, Graeme, and Kosuke Imai. ``list: Statistical Methods for
the Item Count Technique and List Experiment.''
Available through the Comprehensive R Archive Network
and GitHub. 2011-2014.
Blair, Graeme, Yang-Yang Zhou, and
Kosuke Imai. ``rr: Statistical
Methods for the Randomized Response Technique.''
Available through the Comprehensive R Archive Network
and GitHub. 2015.
Presentation slides used to describe this project at the
2010 Summer Political Methodology Conference are available for download.