Published Papers

Here I list my published papers (with abstracts). Articles should be downloaded for personal use only. Clicking on a paper's title will take you to a PDF version of the manuscript. You will need Adobe Acrobat, available free from Adobe, to read the files.


Misinformation

Technique-Based Inoculation and Accuracy Prompts Must Be Combined to Increase Truth Discernment. Nature Human Behaviour. Forthcoming. (with Puneet Bhargava, Rocky Cole, Beth Goldberg, Stephen Lewandowsky, Hause Lin, Gordon Pennycook, and David G. Rand).

Toolbox of Individual-Level Interventions against Online Misinformation. Nature Human Behaviour. Forthcoming. (with 29 other authors).

How to Think About Whether Misinformation Interventions Work. Nature Human Behaviour. 2023. (with Brian Guay, Gordon Pennycook, and David G. Rand).
Recent years have seen a proliferation of experiments seeking to combat misinformation. Yet there has been little consistency across studies in how the effectiveness of interventions is evaluated, which undermines the field’s ability to identify efficacious strategies. We provide a framework for differentiating between common research designs on the basis of the normative claims they make about how people should interact with information. We recommend an approach that aligns with the normative claim that citizens should maximize the accuracy of the content they believe and share, which requires (i) a design that includes both true and false content, and (ii) an analysis that includes examining discernment between the two. Using data from recent misinformation studies, we show that using the wrong research design can lead to misleading conclusions about who is most likely to spread misinformation and how to stop it.

Understanding and Combating Misinformation Across 16 Countries on Six Continents. Nature Human Behaviour. 2023. (with Antonio A. Arechar, Jennifer Allen, Rocky Cole, Ziv Epstein, Kiran Garimella, Andrew Gully, Jackson G. Lu, Robert M. Ross, Michael N. Stagnaro, Yunhao Zhang, Gordon Pennycook, and David G. Rand).
The spread of misinformation online is a global problem that requires global solutions. To that end, we conducted an experiment in 16 countries across 6 continents (N = 34,286; 676,605 observations) to investigate predictors of susceptibility to COVID-19 misinformation, and interventions to combat the spread of COVID-19 misinformation. In every country, participants with a more analytic cognitive style and stronger accuracy-related motivations were better at discerning truth from falsehood; valuing democracy was also associated with greater truth discernment whereas endorsement of individual responsibility over government support was negatively associated with truth discernment in most countries. Subtly prompting people to think about accuracy was broadly effective at improving the veracity of news that people were willing to share, as were minimal digital literacy tips. Finally, aggregating the ratings of our non-expert participants was able to differentiate true from false headlines with high accuracy in all countries via the ‘wisdom of crowds’. The consistent patterns we observe suggest that the psychological factors underlying the misinformation challenge are similar across different regional settings, and that similar solutions may be broadly effective.

Rumors in Retweet: Ideological Asymmetry in the Failure to Correct Misinformation. Personality and Social Psychology Bulletin. 2022. (with Matthew R. DeVerna, Andrew M. Guess, Joshua A. Tucker, and John T. Jost).
We used supervised machine-learning techniques to examine ideological asymmetries in online rumor transmission. Although liberals were more likely than conservatives to communicate in general about the 2013 Boston Marathon bombings (Study 1, N = 26,422) and 2020 death of the sex trafficker Jeffrey Epstein (Study 2, N = 141,670), conservatives were more likely to share rumors. Rumor-spreading decreased among liberals following official correction, but it increased among conservatives. Marathon rumors were spread twice as often by conservatives pre-correction, and nearly 10 times more often post-correction. Epstein rumors were spread twice as often by conservatives pre-correction, and nearly eight times more often post-correction. With respect to ideologically congenial rumors, conservatives circulated the rumor that the Clinton family was involved in Epstein’s death 18.6 times more often than liberals circulated the rumor that the Trump family was involved. More than 96% of all fake news domains were shared by conservative Twitter users.

Emotion May Predict Susceptibility to Fake News but Emotion Regulation Does Not Seem to Help. Cognition and Emotion. 2022. (with Bence Bago, Leah Rosenzweig, and David G. Rand).
Misinformation is a serious concern for societies across the globe. To design effective interventions to combat the belief in and spread of misinformation, we must understand which psychological processes influence susceptibility to misinformation. This paper tests the widely assumed — but largely untested — claim that people are worse at identifying true versus false headlines when the headlines are emotionally provocative. Consistent with this proposal, we found correlational evidence that overall emotional response at the headline level is associated with diminished truth discernment, except for experienced anger which was associated with increased truth discernment. A second set of studies tested a popular emotion regulation intervention where people were asked to apply either emotional suppression or emotion reappraisal techniques when considering the veracity of several headlines. In contrast to the correlation results, we found no evidence that emotion regulation helped people distinguish false from true news headlines.

Happiness and Surprise Are Associated with Worse Truth Discernment of COVID-19 Headlines Among Social Media Users in Nigeria. Harvard Kennedy School Misinformation Review. 2021. (with Bence Bago, Leah Rosenzweig, and David G. Rand).
Do emotions we experience after reading headlines help us discern true from false information or cloud our judgement? Understanding whether emotions are associated with distinguishing truth from fiction and sharing information has implications for interventions designed to curb the spread of misinformation. Among 1,341 Facebook users in Nigeria, we find that emotions—specifically happiness and surprise—are associated with greater belief in and sharing of false, relative to true, COVID-19 headlines. Respondents who are older, more reflective, and do not support the ruling party are better at discerning true from false COVID-19 information.

Exploring Lightweight Interventions at Posting Time to Reduce the Sharing of Misinformation on Social Media. Proceedings of the 24th ACM Conference on Computer-Supported Cooperative Work and Social Computing. 2021. (with Farnaz Jahanbakhsh, Amy X. Zhang, Gordon Pennycook, David G. Rand, and David R. Karger).
When users on social media share content without considering its veracity, they may unwittingly be spreading misinformation. In this work, we investigate the design of lightweight interventions that nudge users to assess the accuracy of information as they share it. Such assessment may deter users from posting misinformation in the first place, and their assessments may also provide useful guidance to friends aiming to assess those posts themselves.
In support of lightweight assessment, we first develop a taxonomy of the reasons why people believe a news claim is or is not true; this taxonomy yields a checklist that can be used at posting time. We conduct evaluations to demonstrate that the checklist is an accurate and comprehensive encapsulation of people’s free-response rationales.
In a second experiment, we study the effects of three behavioral nudges—1) checkboxes indicating whether headlines are accurate, 2) tagging reasons (from our taxonomy) that a post is accurate via a checklist, and 3) providing free-text rationales for why a headline is or is not accurate—on people’s intention to share the headline on social media. From an experiment with 1,668 participants, we find that both providing an accuracy assessment and providing a rationale reduce the sharing of false content. They also reduce the sharing of true content, but to a lesser degree, yielding an overall decrease in the fraction of shared content that is false.
Our findings have implications for designing social media and news sharing platforms that draw from richer signals of content credibility contributed by users. In addition, our validated taxonomy can be used by platforms and researchers as a way to gather rationales in an easier fashion than free-response.

Developing an Accuracy-Prompt Toolkit to Reduce COVID-19 Misinformation Online. Harvard Kennedy School Misinformation Review. 2021. (with Ziv Epstein, Rocky Cole, Andrew Gully, Gordon Pennycook, and David G. Rand).
Recent research suggests that shifting users’ attention to accuracy increases the quality of news they subsequently share online. Here we help develop this initial observation into a suite of deployable interventions for practitioners. We ask (i) how prior results generalize to other approaches for prompting users to consider accuracy, and (ii) for whom these prompts are more versus less effective. In a large survey experiment examining participants’ intentions to share true and false headlines about COVID-19, we identify a variety of different accuracy prompts that successfully increase sharing discernment across a wide range of demographic subgroups while maintaining user autonomy. 

Timing Matters When Correcting Fake News. Proceedings of the National Academy of Sciences. 2021.
Countering misinformation can reduce belief in the moment, but corrective messages quickly fade from memory. We tested whether the longer-term impact of fact-checks depends on when people receive them. In two experiments (total N = 2,683), participants read true and false headlines taken from social media. In the treatment conditions, “true” and “false” tags appeared before, during, or after participants read each headline. Participants in a control condition received no information about veracity. One week later, participants in all conditions rated the same headlines’ accuracy. Providing fact-checks after headlines (debunking) improved subsequent truth discernment more than providing the same information during (labeling) or before (prebunking) exposure. This finding informs the cognitive science of belief revision and has practical implications for social media platform designers.

They Might Be a Liar but They’re My Liar: Source Evaluation and the Prevalence of Misinformation. Political Psychology. 2020. (with Briony Swire-Thompson, Ullrich Ecker, and Stephen Lewandowsky).
Even if people acknowledge that misinformation is incorrect after a correction has been presented, their feelings towards the source of the misinformation can remain unchanged. The current study investigated whether participants reduce their support of Republican and Democratic politicians when the prevalence of misinformation disseminated by the politicians appears to be high in comparison to the prevalence of their factual statements. We presented U.S. participants either with (1) equal numbers of false and factual statements from political candidates or (2) disproportionately more false than factual statements. Participants received fact-checks as to whether items were true or false, then rerated both their belief in the statements as well as their feelings towards the candidate. Results indicated that when corrected misinformation was presented alongside equal presentations of affirmed factual statements, participants reduced their belief in the misinformation but did not reduce their feelings towards the politician. However, if there was considerably more misinformation retracted than factual statements affirmed, feelings towards both Republican and Democratic figures were reduced—although the observed effect size was extremely small.

Does Truth Matter to Voters? The Effects of Correcting Political Misinformation in an Australian Sample. Royal Society Open Science. 2018. (with Michael Aird, Briony Swire-Thompson, Ullrich Ecker, and Stephen Lewandowsky).
In the ‘post-truth era’, political fact-checking has become an issue of considerable significance. A recent study in the context of the 2016 US election found that fact-checks of statements by Donald Trump changed participants’ beliefs about those statements—regardless of whether participants supported Trump—but not their feelings towards Trump or voting intentions. However, the study balanced corrections of inaccurate statements with an equal number of affirmations of accurate statements. Therefore, the null effect of fact-checks on participants’ voting intentions and feelings may have arisen because of this artificially created balance. Moreover, Trump’s statements were not contrasted with statements from an opposing politician, and Trump’s perceived veracity was not measured. The present study (N = 370) examined the issue further, manipulating the ratio of corrections to affirmations, and using Australian politicians (and Australian participants) from both sides of the political spectrum. We hypothesized that fact-checks would correct beliefs and that fact-checks would affect voters’ support (i.e. voting intentions, feelings and perceptions of veracity), but only when corrections outnumbered affirmations. Both hypotheses were supported, suggesting that a politician’s veracity does sometimes matter to voters. The effects of fact-checking were similar on both sides of the political spectrum, suggesting little motivated reasoning in the processing of fact-checks.

The Science of Fake News. Science. 2018. 359(6380): 1094-1096. (with David M. J. Lazer, Matthew A. Baum, Yochai Benkler, Kelly M. Greenhill, Filippo Menczer, Brendan Nyhan, Miriam J. Metzger, Gordon Pennycook, David Rothschild, Michael Schudson, Steven A. Sloman, Cass R. Sunstein, Emily A. Thorson, Duncan J. Watts, and Jonathan L. Zittrain).
The rise of fake news highlights the erosion of long-standing institutional bulwarks against misinformation in the internet age. Concern over the problem is global. However, much remains unknown regarding the vulnerabilities of individuals, institutions, and society to manipulations by malicious actors. A new system of safeguards is needed. Below, we discuss extant social and computer science research regarding belief in fake news and the mechanisms by which it spreads. Fake news has a long history, but we focus on unanswered scientific questions raised by the proliferation of its most recent, politically oriented incarnation. Beyond selected references in the text, suggested further reading can be found in the supplementary materials.

Telling the Truth about Believing the Lies? The Prevalence of Expressive Responding in Surveys. Journal of Politics. 2018. 80(1): 211-224.
Large numbers of Americans endorse political rumors on surveys. But do they truly believe what they say? In this paper, I assess the extent to which subscription to political rumors represents genuine beliefs as opposed to expressive responses— rumor endorsements designed to express opposition to politicians and policies rather than genuine belief in false information. I ran several experiments, each designed to reduce expressive responding on two topics: among Republicans on the question of whether Barack Obama is a Muslim and among Democrats on whether members of the federal government had advance knowledge about 9/11. The null results of all experiments lead to the same conclusion: the incidence of expressive responding is very small, though somewhat larger for Democrats than Republicans. These results suggest that survey responses serve as a window into the underlying beliefs and true preferences of the mass public.

Processing Political Misinformation: Comprehending the Trump Phenomenon. Royal Society Open Science. 2017. 1-21. (with Briony Swire, Stephan Lewandowsky, and Ullrich K.H. Ecker).
This study investigated the cognitive processing of true and false political information. Specifically, it examined the impact of source credibility on the assessment of veracity when information comes from a polarizing source (Experiment 1), and effectiveness of explanations when they come from one’s own political party or an opposition party (Experiment 2). These experiments were conducted prior to the 2016 Presidential election. Participants rated their belief in factual and incorrect statements that President Trump made on the campaign trail; facts were subsequently affirmed and misinformation retracted. Participants then re-rated their belief immediately or after a delay. Experiment 1 found that (i) if information was attributed to Trump, Republican supporters of Trump believed it more than if it was presented without attribution, whereas the opposite was true for Democrats and (ii) although Trump supporters reduced their belief in misinformation items following a correction, they did not change their voting preferences. Experiment 2 revealed that the explanation’s source had relatively little impact, and belief updating was more influenced by perceived credibility of the individual initially purporting the information. These findings suggest that people use political figures as a heuristic to guide evaluation of what is true or false, yet do not necessarily insist on veracity as a prerequisite for supporting political candidates.

Rumors and Health Care Reform: Experiments in Political Misinformation. British Journal of Political Science. 2017. 47(2): 241-262.
This article explores belief in political rumors surrounding the health care reforms enacted by Congress in 2010. Refuting rumors with statements from unlikely sources can, under certain circumstances, increase the willingness of citizens to reject rumors regardless of their own political predilections. Such source credibility effects, while well known in the political persuasion literature, have not been applied to the study of rumor. Though source credibility appears to be an effective tool for debunking political rumors, risks remain. Drawing upon research from psychology on ‘fluency’ – the ease of information recall – this article argues that rumors acquire power through familiarity. Attempting to quash rumors through direct refutation may facilitate their diffusion by increasing fluency. The empirical results find that merely repeating a rumor increases its power.

Media and Political Persuasion

Media Measurement Matters: Estimating the Persuasive Effects of Partisan Media with Survey and Behavioral Data. Journal of Politics. 2023. (with Chloe Wittenberg, Matthew Baum, Justin de Benedictis-Kessner, and Teppei Yamamoto).
To what extent do partisan media influence political attitudes and behavior? Although recent methodological advancements have improved scholars’ ability to identify the persuasiveness of partisan media, past studies typically rely on self-reported measures of media preferences, which may deviate from real-world news consumption. Integrating individual-level web-browsing data with a survey experiment, we contrast survey-based indicators of stated preferences with behavioral measures of revealed preferences, based on the relative volume and slant of news individuals consume. Overall, we find that these measurement strategies generate differing conclusions regarding heterogeneity in partisan media’s persuasive impact. Whereas our stated preference measure raises the possibility of persuasion by cross-cutting sources, our revealed preference measures suggest that, among consumers with more polarized media diets, partisan media exposure results in limited attitude change, with any observed effects driven primarily by politically concordant sources. Together, these findings underscore the importance of careful measurement for research on media persuasion.

Quantifying the Potential Persuasive Returns to Political Microtargeting. Proceedings of the National Academy of Sciences. 2023. (with Chloe Wittenberg, Luke Hewitt, Ben M. Tappin, and David G. Rand).
Much concern has been raised about the power of political microtargeting to sway voters’ opinions, influence elections, and undermine democracy. Yet little research has directly estimated the persuasive advantage of microtargeting over alternative campaign strategies. Here, we do so using two studies focused on U.S. policy issue advertising. To implement a microtargeting strategy, we combined machine learning with message pretesting to determine which advertisements to show to which individuals to maximize persuasive impact. Using survey experiments, we then compared the performance of this microtargeting strategy against two other messaging strategies. Overall, we estimate that our microtargeting strategy outperformed these strategies by an average of 70% or more in a context where all of the messages aimed to influence the same policy attitude (Study 1). Notably, however, we found no evidence that targeting messages by more than one covariate yielded additional persuasive gains, and the performance advantage of microtargeting was primarily visible for one of the two policy issues under study. Moreover, when microtargeting was used instead to identify which policy attitudes to target with messaging (Study 2), its advantage was more limited. Taken together, these results suggest that the use of microtargeting—combining message pretesting with machine learning—can potentially increase campaigns’ persuasive influence and may not require the collection of vast amounts of personal data to uncover complex interactions between audience characteristics and political messaging. However, the extent to which this approach confers a persuasive advantage over alternative strategies likely depends heavily on context.

Partisans’ Receptivity to Persuasive Messaging is Undiminished by Countervailing Party Leader Cues. Nature Human Behaviour. 2023. (with Ben M. Tappin and David G. Rand).
It is widely assumed that party identification and loyalty can distort partisans’ information processing, diminishing their receptivity to counter-partisan arguments and evidence. Here we empirically evaluate this assumption. We test whether American partisans’ receptivity to arguments and evidence is diminished by countervailing cues from in-party leaders (Donald Trump or Joe Biden), using a survey experiment with 24 contemporary policy issues and 48 persuasive messages containing arguments and evidence (N = 4,531; 22,499 observations). We find that, while in-party leader cues influenced partisans’ attitudes, often more strongly than the persuasive messages, there was no evidence that the cues meaningfully diminished partisans’ receptivity to the messages—despite them directly contradicting the messages. Rather, persuasive messages and countervailing leader cues were integrated as independent pieces of information. These results generalized across policy issues, demographic subgroups and cue environments, and challenge existing assumptions about the extent to which party identification and loyalty distort partisans’ information processing.

The (Minimal) Persuasive Advantage of Political Video over Text. Proceedings of the National Academy of Sciences. 2021. (with Chloe Wittenberg, Ben M. Tappin, and David G. Rand).
Concerns about video-based political persuasion are prevalent in both popular and academic circles, predicated on the assumption that video is more compelling than text. To date, however, this assumption remains largely untested in the political domain. Here, we provide such a test. We begin by drawing a theoretical distinction between two dimensions for which video might be more efficacious than text: 1) one’s belief that a depicted event actually occurred and 2) the extent to which one’s attitudes and behavior are changed. We test this model across two high-powered survey experiments varying exposure to politically persuasive messaging (total n = 7,609 Americans; 26,584 observations). Respondents were shown a selection of persuasive messages drawn from a diverse sample of 72 clips. For each message, they were randomly assigned to one of three conditions: a short video, a detailed transcript of the video, or a control condition. Overall, we find that individuals are more likely to believe an event occurred when it is presented in video versus textual form, but the impact on attitudes and behavioral intentions is much smaller. Importantly, for both dimensions, these effects are highly stable across messages and respondent subgroups. Moreover, when it comes to attitudes and engagement, the difference between the video and text conditions is comparable to, if not smaller than, the difference between the text and control conditions. Taken together, these results call into question widely held assumptions about the unique persuasive power of political video over text.

Design, Identification, and Sensitivity Analysis for Patient Preference Trials. Journal of the American Statistical Association. 2019. (with Dean Knox, Matthew Baum, and Teppei Yamamoto).
Social and medical scientists are often concerned that the external validity of experimental results may be compromised because of heterogeneous treatment effects. If a treatment has different effects on those who would choose to take it and those who would not, the average treatment effect estimated in a standard randomized controlled trial (RCT) may give a misleading picture of its impact outside of the study sample. Patient preference trials (PPTs), where participants’ preferences over treatment options are incorporated in the study design, provide a possible solution. In this paper, we provide a systematic analysis of PPTs based on the potential outcomes framework of causal inference. We propose a general design for PPTs with multi-valued treatments, where participants state their preferred treatments and are then randomized into either a standard RCT or a self-selection condition. We derive nonparametric sharp bounds on the average causal effects among each choice-based subpopulation of participants under the proposed design. We also propose a sensitivity analysis for the violation of the key ignorability assumption sufficient for identifying the target causal quantity. The proposed design and methodology are illustrated with an original study of partisan news media and its behavioral impact. Supplementary materials for this article, including a standardized description of the materials available for reproducing the work, are available as an online supplement.

Persuading the Enemy: Estimating the Persuasive Effects of Partisan Media with the Preference-Incorporating Choice and Assignment Design. American Political Science Review. 2019. 113(4): 902-916. (with Justin de Benedictis-Kessner, Matthew Baum, and Teppei Yamamoto).
Does media choice cause polarization, or merely reflect it? We investigate a critical aspect of this puzzle: how partisan media contribute to attitude polarization among different groups of media consumers. We implement a new experimental design, called the Preference-Incorporating Choice and Assignment (PICA) design, that incorporates both free choice and forced exposure. We estimate jointly the degree of polarization caused by selective exposure and the persuasive effect of partisan media. Our design also enables us to conduct sensitivity analyses accounting for discrepancies between stated preferences and actual choice, a potential source of bias ignored in previous studies using similar designs. We find that partisan media can polarize both its regular consumers and inadvertent audiences who would otherwise not consume it, but ideologically opposing media potentially also can ameliorate the existing polarization between consumers. Taken together, these results deepen our understanding of when and how media polarize individuals.

Making Sense of Issues through Media Frames: Understanding The Kosovo Crisis. Journal of Politics. 2006. 68(3): 640-656. (with Donald Kinder).
How do people make sense of politics? Integrating empirical results in communication studies on framing with models of comprehension in cognitive psychology, we argue that people understand complicated event sequences by organizing information in a manner that conforms to the structure of a good story. To test this claim, we carried out a pair of experiments. In each, we presented people with news reports on the 1999 Kosovo crisis that were framed in story form, either to promote or prevent U.S. intervention. Consistent with expectations, we found that framing news about the crisis as a story affected what people remembered, how they structured what they remembered, and the opinions they expressed on the actions government should take.

The Politics of Immigration and Race

How Social Context Affects Immigration Attitudes. Journal of Politics. 2023. (with Chris Karpowitz, Chris Peng, Jonathan Rodden, and Cara Wong).
Selection bias represents a persistent challenge to understanding the effects of social context on political attitudes. We attempt to overcome this challenge by focusing on a unique sample of individuals who were assigned to a new social context for an extended period, without control over the location they were sent: missionaries for the Church of Jesus Christ of Latter-day Saints. We interviewed a sample of 1,804 young people before and after their mission service in a diverse set of locations around the world and find strong evidence that the policy views of respondents became more tolerant toward undocumented immigrants when respondents were assigned to places where contact with immigrants was more likely. Within the United States, missionaries who served in communities with larger Hispanic populations, and those assigned to speak a language other than English, experienced the largest increases in pro-immigrant attitudes.

The Effect of Associative Racial Cues in Elections. Political Communication. 2020. (with Justin de Benedictis-Kessner, Megan Goldberg, and Michele Margolis).
How do racial signals associating a candidate with minority supporters change voters’ perceptions about a candidate and their support for a candidate? Given the presence of competing information in any campaign or the absence of information in low-salience campaigns, voters may rely on heuristics—such as race—to make the process of voting easier. The information communicated by these signals may be so strong that they cause voters to ignore other, perhaps more politically relevant, information. In this paper, we test how associative racial cues sway voters’ perceptions of and support for candidates using two experiments that harness real-world print and audio campaign advertisements. We find that the signals in these ads can sometimes overwhelm cues about policy positions when the two are present together. Moreover, we find that such signals have limited effects on candidate support among black voters but that they risk substantial backlash of up to eight percentage points in reported vote intention among white voters. Our results highlight how voters gather and use information in low-information elections and demonstrate the power of campaign communication strategies that use racial associations.

Attribute Affinity: U.S. Natives’ Attitudes Towards Immigrants. Political Behavior. 2020. (with Tesalia Rizzo, Leah Rosenzweig, and Elisha Heaps).
We examine the extent to which relevant social identity traits shared between two individuals—what we term “attribute affinity”—can moderate out-group hostility. We argue that in-group affinity is a powerful force in shaping preferences over potential immigrants. We focus on two closely related, yet distinct, dimensions of identity: religion and religiosity. Using evidence from three surveys that included two embedded experiments, we show that sharing strength in religious practice can diminish strong aversion to immigrants of different religious affiliations. We find that, among highly religious U.S. natives, anti-Muslim bias is lower toward very religious Muslims, compared to non-religious Muslims. This attenuating effect of attribute affinity with respect to religiosity on anti-Muslim bias presents the strongest evidence supporting our argument.

Sex and Race: Are Black Candidates More Likely to be Disadvantaged by Sex Scandals? Political Behavior. 2011. 33(2): 179-202 (with Vincent Hutchings, Tali Mendelberg, Lee Shaker, and Nicholas Valentino).
A growing body of work suggests that exposure to subtle racial cues prompts white voters to penalize black candidates, and that the effects of these cues may influence outcomes indirectly via perceptions of candidate ideology. We test hypotheses related to these ideas using two experiments based on national samples. In one experiment, we manipulated the race of a candidate (Barack Obama vs. John Edwards) accused of sexual impropriety. We found that while both candidates suffered from the accusation, the scandal led respondents to view Obama as more liberal than Edwards, especially among resentful and engaged whites. Second, overall evaluations of Obama declined more sharply than for Edwards. In the other experiment, we manipulated the explicitness of the scandal, and found that implicit cues were more damaging for Obama than explicit ones.

The Indirect Effects of Discredited Stereotypes: Social and Political Traits in Judgments of Jewish Leaders. American Journal of Political Science. 2005. 49(4): 845-864 (with Tali Mendelberg).
We hypothesize that a stereotype can have an indirect impact on judgment even if some of its pieces are rejected. We test this proposition with a national survey-experiment that describes a hypothetical candidate either as “Jewish”, “Jewish” and “Shady”, or neither “Jewish” nor “Shady”. We find that once cued, a social stereotype trait (“Shady”), even though it is rejected as illegitimate, can activate another, more acceptable political trait (“liberal”) that historically has been linked with “Shady”. In turn, voters cued with the social trait give more weight to the acceptable political trait in evaluating the candidate, and the candidate’s support suffers, especially among conservative voters. This indirect influence of discredited stereotypes has implications for our understanding of the way stereotypes influence political judgments and for the ability of groups to overcome a legacy of discrimination.

War and Public Opinion

Facial Dominance and Electoral Success in Times of War and Peace. Journal of Politics. 2019. 81(3): 1096-1100. (with Sara Chatfield and Gabriel Lenz).
Do voters prefer dominant-looking candidates in times of war? By replicating previous survey experiments, we find that respondents do prefer candidates with dominant facial features when war is salient. We then investigate whether these survey results generalize to the real world. Examining US Senate elections from 1990 to 2006, we test whether voters prefer candidates with dominant facial features in wartime elections more than in peacetime elections. In contrast with the survey studies, we find that dominant-looking candidates appear to gain a slight advantage in all elections but have no special advantage in wartime contexts. We discuss possible explanations for the discrepancy between the findings and conduct additional experiments to investigate one possible explanation: additional information about candidates may rapidly erode the wartime preference for dominant-looking candidates. Overall, our findings suggest that the dominance-war findings may not generalize to the real world.

Assuming the Costs of War: Events, Elites, and American Public Support for Military Conflict. Journal of Politics. 2007. 69(4): 975-997.
Many political scientists and policymakers argue that unmediated events—the successes and failures on the battlefield—determine whether the mass public will support military excursions. The public supports war, the story goes, if the benefits of action outweigh the costs of conflict. Other scholars contend that the balance of elite discourse influences public support for war. I draw upon survey evidence from World War II and the current war in Iraq to come to a common conclusion regarding public support for international interventions. I find little evidence that citizens make complex cost/benefit calculations when evaluating military action. Instead, I find that patterns of elite conflict shape opinion concerning war. When political elites disagree as to the wisdom of intervention, the public divides as well. But when elites come to a common interpretation of a political reality, the public gives them great latitude to wage war.

Public Opinion Research and Support for The Iraq War. Public Opinion Quarterly. 2007. 71(1): 126-141 (with James Druckman).
Professors Peter Feaver, Christopher Gelpi, and Jason Reifler’s theory of the determinants of public support for war has received a great deal of attention among academics, journalists, and policymakers. They argue that support for war hinges on initial support for military action and the belief in the success of the war. In this review, we take a critical and constructive view of their work, focusing on methodological concerns. We discuss the dependent variable used by the authors—individual casualty tolerance—and argue that it is an insufficient measure of war support. We also make the case that their independent variables of interest—initial support for war and evaluation of war success—may, in fact, be best understood as indicators of latent support for the war more generally. Finally, we discuss the need for more research into the determinants of support for war, focusing on core values and elite rhetoric as potential variables for continued and future study.

Political Participation

Education and Political Participation: Exploring the Causal Link. Political Behavior. 2011. 33(3): 357-373 (with Gabriel Lenz).
One of the most consistently documented relationships in the field of political behavior is the close association between educational attainment and political participation. Although most research assumes that this association arises because education causes participation, it could also arise because education proxies for the factors that lead to political engagement: the kinds of people who participate in politics may be the kinds of people who tend to stay in school. To test for a causal effect of education, we exploit the rise in education levels among males induced by the Vietnam draft. We find little reliable evidence that education induced by the draft significantly increases participation rates.

The Perverse Consequences of Electoral Reform in the United States. American Politics Research. 2005. 33(3): 471-491.
A number of electoral reforms have been enacted in the United States in the last three decades. These reforms include: day-of-election registration, “motor voter” registration laws, voting-by-mail (VBM), early voting, and the relaxing of stringent absentee balloting procedures. Such reforms are designed to increase turnout by easing restrictions on voter registration and/or ballot casting. Both proponents and opponents of electoral reforms agree that these reforms should increase the demographic representativeness of the electorate by reducing the direct costs of voting, thereby increasing turnout among less-privileged groups. In fact, these reforms have been greatly contested because both major political parties believe that increasing turnout among less-privileged groups will benefit Democratic politicians (though see Wolfinger and Rosenstone 1980). In practice, these electoral reforms have increased turnout slightly, but have not had the hypothesized partisan effects. What has not been widely recognized, however, is that this wave of reforms has exacerbated the socioeconomic biases of the electorate. This result is surprising only because many politicians and scholars have focused on tangible barriers to voting such as difficult registration procedures and the process of casting a ballot. By this logic, lowering or erasing the barriers to voting should enable all citizens to cast a vote, regardless of their personal circumstances. However, the direct costs of registration and getting to the ballot box are only part of the story. The more important costs – and the roots of the persistent compositional bias in the electorate – are the cognitive costs of becoming engaged with and informed about the political world. Reforms designed to make voting “easier” exacerbate the existing biases in the composition of the electorate by ensuring that those citizens who are most engaged with the political world continue to participate.  
That is, voting reforms encourage the retention of likely voters from election to election rather than encouraging new voters to enter the electorate. Thus, no matter how low the direct barriers to casting a ballot are set, the only way to increase turnout and eliminate socioeconomic biases of the voting population is to increase the engagement of the broader mass public with the political world. Political information and interest, not the high tangible costs of the act of voting, are the real barriers to a truly democratic voting public.

Who Votes by Mail? A Dynamic Model of the Individual-Level Consequences of Voting-by-Mail Systems. Public Opinion Quarterly. 2001. 65: 178-197 (with Nancy Burns and Michael W. Traugott).
Election administrators and public officials often consider changes in electoral laws, hoping that these changes will increase voter turnout and make the electorate more reflective of the voting-age population. The most recent of these innovations is voting-by-mail (VBM), a procedure by which ballots are sent to an address for every registered voter. Over the last 2 decades, VBM has spread across the United States, unaccompanied by much empirical evaluation of its impact on either voter turnout or the stratification of the electorate. In this study, we fill this gap in our knowledge by assessing the impact of VBM in one state, Oregon. We carry out this assessment at the individual level, using data over a range of elections. We argue that VBM does increase voter turnout in the long run, primarily by making it easier for current voters to continue to participate, rather than by mobilizing nonvoters into the electorate. These effects, however, are not uniform across all groups in the electorate. Although VBM in Oregon does not exert any influence on the partisan composition of the electorate, VBM increases, rather than diminishes, the resource stratification of the electorate. Contrary to the expectations of many reformers, VBM advantages the resource-rich by keeping them in the electorate, and VBM does little to change the behavior of the resource-poor. In short, VBM increases turnout, but it does so without making the electorate more descriptively representative of the voting-age population.

Measuring Public Opinion

Measuring Public Opinion with Surveys. Annual Review of Political Science. 2017. 20: 309-329.
How can we best gauge the political opinions of the citizenry? Since their emergence in the 1930s, opinion polls, or surveys, have become the dominant way to assess the public will. But even given the long history of polling, there is no agreement among political scientists on how to best measure public opinion through polls. This article is a call for political scientists to be more self-conscious about the choices we make when we attempt to measure public opinion with surveys in two realms. I first take up the question of whom to interview, discussing the major challenges survey researchers face when sampling respondents from the population of interest. I then discuss the level of specificity with which we can properly collect information about the political preferences of individuals. I focus on the types of question wording and item aggregation strategies researchers can use to accurately measure public opinion.

Missing Voices: Polling and Health Care. Journal of Health Politics, Policy and Law. 2011. 36(6):975-987 (with Michele Margolis).
Examining data on the recent health care legislation, we demonstrate that public opinion polls on health care should be treated with caution because of item nonresponse — or “don’t know” answers — on survey questions. Far from being the great equalizer, opinion polls can actually misrepresent the attitudes of the population. First, we show that respondents with lower levels of socioeconomic resources are systematically more likely to give a “don’t know” response when asked their opinion about health care legislation. Second, these same individuals are more likely to back health care reform. The result is an incomplete portrait of public opinion on the issue of health care in the United States.

“Don’t Knows” and Public Opinion Towards Economic Reform: Evidence from Russia. Communist and Post-Communist Studies. 2006. 39(1): 73-99. (with Joshua Tucker).
As market reform has spread throughout the globe, both scholars and policy makers have become increasingly interested in measuring public opinion towards economic changes. However, recent research from American politics suggests that special care must be paid to how surveys treat non-respondents to these types of questions. We extend this line of inquiry to a well-known case of large-scale economic reform, Russia in the mid-1990s. Our major finding is that Russians who fail to answer survey questions tend to be consistently less “liberal” than their counterparts who are able to answer such questions. This finding has implications both for our understanding of Russian public opinion in the 1990s, as well as for measuring attitudes towards economic reform more generally.

Political Context and the Survey Response: The Dynamics of Racial Policy Opinion. Journal of Politics. 2002. 64: 567-584.
Several recent studies suggest that the social dynamics at work in the survey interview may play a significant role in determining the answers individuals give to survey questions, most notably on questions relating to racial policies. In this paper, I reexamine and extend the conclusions of “The Two Faces of Public Opinion” (Berinsky 1999), which found that present-day opinion polls overstate support for policies designed to promote racial equality. I use data from the early 1970s to show that the strong social desirability effects I find in the 1990s do not characterize opinion in earlier eras. The analyses reported here indicate that while we need to pay attention to and account for the social context surrounding sensitive issues when gauging public opinion, we must also pay attention to changes in that context over time.

Silent Voices: Social Welfare Policy Opinions and Political Equality in America. American Journal of Political Science. 2002. 46: 276-288.
I demonstrate that both inequalities in politically relevant resources and the larger political culture surrounding social welfare policy issues disadvantage those groups who are natural supporters of the welfare state. These supporters—the economically disadvantaged and those who support principles of political equality—are less easily able to form coherent and consistent opinions on such policies than those well endowed with politically relevant resources. Those predisposed to champion the maintenance and expansion of welfare state programs are, as a result, less likely to articulate opinions on surveys. Thus, public opinion on social welfare policy controversies gives disproportionate weight to respondents opposed to expanding the government’s role in the economy. This “exclusion bias”—a phenomenon to this point ignored in the political science literature—is a notable source of bias in public opinion: the “voice” of those who abstain from the social welfare policy questions is different from those who respond to such items. This result mirrors the patterns of inequality found in traditional forms of political participation. Opinion polls may therefore reinforce, not correct, the inegalitarian shortcomings of traditional forms of political participation.

The Two Faces of Public Opinion. American Journal of Political Science. 1999. 43: 1209-1230.
Public opinion polls appear to be a more inclusive form of representation than traditional forms of political participation. However, under certain circumstances, aggregate public opinion may be a poor reflection of collective public sentiment. I argue that it may be difficult to gauge true aggregate public sentiment on certain socially sensitive issues. My analysis of NES data from 1992 reveals that public opinion polls overstate support for government efforts to integrate schools. Specifically, selection bias models reveal that some individuals who harbor anti-integrationist sentiments are likely to hide their socially unacceptable opinions behind a “don’t know” response. As an independent confirmation of the selection bias correction technique, I find that the same methods which predict that opinion polls understate opposition to school integration also predict the results of the 1989 New York City mayoral election more accurately than the marginals of pre-election tracking polls.

Historical Public Opinion

An Empirical Justification for the Use of Draft Lottery Numbers as a Random Treatment in Political Science Research. Political Analysis. 2015. 23(3): 449-454. (with Sara Chatfield).
Over the past several years, there has been growing use of the draft lottery instrument to study political attitudes and behaviors (see, e.g., Bergan 2009; Erikson and Stoker 2011; Henderson 2012; Davenport 2015). Draft lotteries, held in the United States from 1969 to 1972, provide a potentially powerful design; in theory, they should provide true randomization for the “treatment” of military service or behavioral reactions to the threat of such service. However, the first draft lottery conducted in 1969 was not conducted in a random manner, giving those citizens born in the fourth quarter of the year disproportionately higher chances of being drafted. In this note, we describe the randomization failure and discuss how this failure could in theory compromise the use of draft lottery numbers as an instrumental variable. We then use American National Election Studies data to provide support for the conclusion that individuals most affected by the randomization failure (those born in the fourth quarter of the year) largely do not look statistically distinct from those born at other times of the year. With some caveats, researchers should be able to treat the 1969 draft numbers as if they were assigned at random. We also discuss broader lessons to draw from this example, both for scholars interested in using the draft lottery as an instrumental variable, and for researchers leveraging other instruments with randomization failures. Specifically, we suggest that scholars should pay particular attention to the sources of randomization failure, sample attrition, treatment and dependent variable selection, and possible failure of the exclusion restriction, and we outline ways in which these problems may apply to the draft lottery instrument and other natural experiments.

Red Scare? Revisiting Joe McCarthy’s Influence on 1950s Elections. Public Opinion Quarterly. 2014. 78(2): 369-391. (with Gabriel Lenz).
In the early 1950s, politicians apparently allowed themselves to be spectators to the anticommunist witch hunt of Senator Joe McCarthy and his supporters, leading to a particularly grim chapter in American politics. In part, they did so because they thought the public supported McCarthy. Although politicians lacked contemporary public opinion data, they apparently inferred McCarthy’s support from key Senate race outcomes. The few senators who initially stood up to McCarthy lost their reelections when McCarthy campaigned against them. In this article, we revisit the case of McCarthy’s influence and investigate whether politicians fundamentally misinterpreted support for McCarthy. Using county- and state-level election data from across the twentieth century, we develop plausible counterfactual measures of normal electoral support to assess McCarthy’s influence on electoral outcomes. We adopt a variety of analytic strategies that lead to a single conclusion: There is little evidence that McCarthy’s attacks mattered to the election outcomes. Our results imply that politicians can greatly err when interpreting the meaning of elections, and point to the importance of research on elections to help prevent such errors.

Revisiting Public Opinion in the 1930s and 1940s. PS: Political Science & Politics. 2011. 44(2): 515-520 (with Eleanor Neff Powell, Eric Schickler, and Ian Brett Yohai).
Studies of mass political attitudes and behavior before the 1950s have been limited by a lack of high-quality, individual-level data. Fortunately, data from public opinion polls conducted during the late New Deal and World War II periods are available, although the many difficulties of working with these data have left them largely untouched for over 60 years. We compiled and produced readily usable computer files for over 400 public opinion polls undertaken between 1936 and 1945 by the four major survey organizations active during that period. We also developed a series of weights to ameliorate the problems introduced by the quota-sampling procedures employed at the time. The corrected data files and weights were released in May 2011. In this article, we briefly discuss the data and weighting procedures and then present selected time series constructed from questions that were repeated on 10 or more surveys. The time series provide considerable leverage for understanding the dynamics of public opinion in one of the most volatile—and pivotal—eras in American history.

American Public Opinion in the 1930s and 1940s: The Analysis of Quota-Controlled Sample Survey Data. Public Opinion Quarterly. 2006. 70(4): 499-529.
The 1930s saw the birth of mass survey research in America. Large public polling companies, such as Gallup and Roper, began surveying the public about a variety of important issues on a monthly basis. These polls contain information on public opinion questions of central importance to political scientists, historians, and policymakers, yet these data have been largely overlooked by modern researchers due to problems arising from the data collection methods. In this article I provide a strategy to properly analyze the public opinion data of the 1930s and 1940s. I first describe the quota-control methods of survey research prevalent during this time. I then detail the problems introduced through the use of quota-control techniques. Next, I describe specific strategies that researchers can employ to ameliorate these problems in data analysis at both the aggregate and individual levels. Finally, I use examples from several public opinion studies in the early 1940s to show how the methods of analysis laid out in this article enable us to utilize historical public opinion data.

Survey Data Collection and Quality

Measuring Attentiveness in Self-Administered Surveys. Public Opinion Quarterly. 2024. (with Alejandro Frydman, Michele Margolis, Michael Sances, and Diana Camilla Valerio).
The surge in online self-administered surveys has given rise to an extensive body of literature on respondent inattention, also known as careless or insufficient effort responding. This burgeoning literature has outlined the consequences of inattention and made important strides in developing effective methods to identify inattentive respondents. However, differences in terminology, as well as a multiplicity of different methods for measuring and correcting for inattention, have made this literature unwieldy. We present an overview of the current state of this literature, highlighting commonalities, emphasizing key debates, and outlining open questions deserving of future research. Additionally, we emphasize the key considerations that survey researchers should take into account when measuring attention.

Racing the Clock: Using Response Time as a Proxy for Attentiveness on Self-Administered Surveys. Political Analysis. 2022. (with Blair Read and Lukas Wolters).
Internet-based surveys have expanded public opinion data collection at the expense of monitoring respondent attentiveness, potentially compromising data quality. Researchers now have to evaluate attentiveness ex post. We propose a new proxy for attentiveness—response-time attentiveness clustering (RTAC)—that uses dimension reduction and an unsupervised clustering algorithm to leverage variation in response time between respondents and across questions. We advance the literature theoretically, arguing that the existing dichotomous classification of respondents as fast or attentive is insufficient and neglects slow and inattentive respondents. We validate our theoretical classification and empirical strategy against commonly used proxies for survey attentiveness. In contrast to other methods for capturing attentiveness, RTAC allows researchers to collect attentiveness data unobtrusively without sacrificing space on the survey instrument.

Using Screeners to Measure Respondent Attention on Self-Administered Surveys: Which Items and How Many? Political Science Research and Methods. 2021. (with Michele Margolis, Michael Sances, and Christopher Warshaw).
Inattentive respondents introduce noise into data sets, weakening correlations between items and increasing the likelihood of null findings. “Screeners” have been proposed as a way to identify inattentive respondents, but questions remain regarding their implementation. First, what is the optimal number of Screeners for identifying inattentive respondents? Second, what types of Screener questions best capture inattention? In this paper, we address both of these questions. Using item-response theory to aggregate individual Screeners, we find that four Screeners are sufficient to identify inattentive respondents. Moreover, two grid and two multiple choice questions work well. Our findings have relevance for applied survey research in political science and other disciplines. Most importantly, our recommendations enable the standardization of Screeners on future surveys.

Achieving Efficiency without Losing Accuracy: Strategies for Scale Reduction with an Application to Risk Attitudes and Racial Resentment. Social Science Quarterly. 2018. 99(2): 563-582. (with Krista Loose and Yue Hou).
Objectives. Researchers often employ lengthy survey instruments to tap underlying phenomena of interest. However, concerns about the cost of fielding longer surveys and respondent fatigue can lead scholars to look for abbreviated, yet accurate, variations of longer, validated scales. In this article, we provide a template to aid in scale reduction. Methods. The template we develop walks researchers through a procedure for using existing data to consider all possible subscales along several reliability and validity criteria. We apply our method to two commonly used scales: the seven-item Risk Attitudes Scale and the six-item Racial Resentment Scale. Results. After applying the template, we find a four-item Risk Attitudes Scale that maintains nearly the same reliability and validity as the full scale and a three-item Racial Resentment subscale that outperforms the two-item subscale currently used in a major congressional survey. Conclusions. Our general template should be of use to a broad range of scholars seeking to achieve efficiency without losing accuracy when reducing lengthy scales. The code to implement our procedures is available as an R package, ScaleReduce.

Can We Turn Shirkers into Workers? Journal of Experimental Social Psychology. 2016. 20-28. (with Michele Margolis and Michael Sances).
Survey researchers increasingly employ attention checks to identify inattentive respondents and reduce noise. Once inattentive respondents are identified, however, researchers must decide whether to drop such respondents, thus threatening external validity, or keep such respondents, thus threatening internal validity. In this article, we ask whether there is a third way: can inattentive respondents be induced to pay attention? Using three different strategies across three studies, we show that while such inducements increase attention check passage, they do not reduce noise in descriptive or experimental survey items. In addition, the inducements cause some respondents to drop out of the survey. These results have important implications for applied research. While scholars should continue to measure inattention via attention checks, increasing the attentiveness of “shirker” respondents is not as easy as previously thought.

Separating the Shirkers from the Workers? Making Sure Respondents Pay Attention on Self-Administered Surveys. American Journal of Political Science. 2014. 58(3): 739-753 (with Michele Margolis and Michael Sances).
Good survey and experimental research requires subjects to pay attention to questions and treatments, but many subjects do not. In this article, we discuss “Screeners” as a potential solution to this problem. We first demonstrate Screeners’ power to reveal inattentive respondents and reduce noise. We then examine important but understudied questions about Screeners. We show that using a single Screener is not the most effective way to improve data quality. Instead, we recommend using multiple items to measure attention. We also show that Screener passage correlates with politically relevant characteristics, which limits the generalizability of studies that exclude failers. We conclude that attention is best measured using multiple Screener questions and that studies using Screeners can balance the goals of internal and external validity by presenting results conditional on different levels of attention.

Evaluating Online Labor Markets for Experimental Research: Amazon.com’s Mechanical Turk. Political Analysis. 2012. 20(3): 351-368 (with Gregory Huber and Gabriel Lenz).
We examine the trade-offs associated with using Amazon.com’s Mechanical Turk (MTurk) interface for subject recruitment. We first describe MTurk and its promise as a vehicle for performing low-cost and easy-to-field experiments. We then assess the internal and external validity of experiments performed using MTurk, employing a framework that can be used to evaluate other subject pools. We investigate the characteristics of samples drawn from the MTurk population. We show that respondents recruited in this manner are often more representative of the U.S. population than in-person convenience samples—the modal sample in published experimental political science—but less representative than subjects in Internet-based panels or national probability samples. Finally, we replicate important published experimental work using MTurk samples.

Can We Talk? Self-Presentation and the Survey Response. Political Psychology. 2004. 25(4): 643-659.
In recent years, there has been a movement among scholars of public opinion to consider more fully the effect of the social forces at work in the survey interview. These authors recognize that the survey interview is a “conversation at random” (Converse and Schuman 1974) and acknowledge that, as a result, the interview will be governed by many of the same dynamics as everyday conversations, such as social desirability concerns. In some cases, these effects may play a large role in determining the answers individuals give to survey questions. In this paper, I extend this work and study how the personality characteristics of individuals may affect the answers they give to questions on controversial political topics. In April-May 2000, I conducted a random-digit-dial survey of 500 respondents in the continental United States. This survey asked the respondent to give their opinion on a number of sensitive topics, such as feelings towards blacks and homosexuals and spending on popular programs, such as schools and the environment. The survey also included question batteries measuring two psychological concepts related to self-presentation: Self-Monitoring (Snyder 1986) and Impression Management (Paulhus 1991). These batteries have never before been asked on a national-sample survey. I first discuss the importance of attending to self-presentation concerns in social surveys, drawing on work in social psychology and sociolinguistics. I next describe two scales developed by psychologists to measure individual variation in self-presentation concerns. I then move to an empirical examination of my survey. I begin by analyzing the two self-presentation scales to see if they are appropriate for use in surveys. I then ascertain how the answers respondents give to the survey questions vary as a function of their self-presentation personality characteristics.
I conclude with a suggestion of how the self-presentation measures can be used to better understand the effects of social dynamics in the survey interview on respondents’ answers to opinion questions about sensitive topics.

Other Papers

Publication Biases in Replication Studies. Political Analysis. 2021. (with James Druckman and Teppei Yamamoto).
One of the strongest findings across the sciences is that publication bias occurs. Of particular note is a “file drawer bias” where statistically significant results are privileged over nonsignificant results. Recognition of this bias, along with increased calls for “open science,” has led to an emphasis on replication studies. Yet, few have explored publication bias and its consequences in replication studies. We offer a model of the publication process involving an initial study and a replication. We use the model to describe three types of publication biases: (1) file drawer bias, (2) a “repeat study” bias against the publication of replication studies, and (3) a “gotcha bias” where replication results that run contrary to a prior study are more likely to be published. We estimate the model’s parameters with a vignette experiment conducted with political science professors teaching at Ph.D.-granting institutions in the United States. We find evidence of all three types of bias, although those explicitly involving replication studies are notably smaller. This bodes well for the replication movement. That said, the aggregation of all of the biases increases the number of false positives in a literature. We conclude by discussing a path for future work on publication biases.

Mistrust in Science — A Threat to the Patient–Physician Relationship. The New England Journal of Medicine. 2019. 381: 182-185. (with Richard J. Baron).
Trust is the foundation of relationships between patients and the health care system. But assumptions about how trust is created and maintained have to be reexamined on the basis of an understanding of evolving roles of facts, expertise, and authority in our society.

Stress-Testing the Affect Misattribution Procedure: Heterogeneous Control of AMP Effects under Incentives. British Journal of Social Psychology. 2018. 57: 61-74. (with Chad Hazlett).
The affect misattribution procedure (AMP) is widely used to measure sensitive attitudes towards classes of stimuli, by estimating the effect that affectively charged prime images have on subsequent judgements of neutral target images. We test its resistance to efforts to conceal one’s attitudes, by replicating the standard AMP design while offering small incentives to conceal attitudes towards the prime images. We find that although the average AMP effect remains positive, it decreases significantly in magnitude. Moreover, this reduction in the mean AMP effect under incentives masks large heterogeneity: one subset of individuals continues to experience the ‘full’ AMP effect, while another reduces their effect to approximately zero. The AMP thus appears to be resistant to efforts to conceal one’s attitudes for some individuals but is highly controllable for others. We further find that those individuals with high self-reported effort to avoid the influence of the prime are more often able to eliminate their AMP effect. We conclude by discussing possible mechanisms.

An Estimate of Risk Aversion in the U.S. Electorate. Quarterly Journal of Political Science. 2007. 2(2): 139-154. (with Jeffrey Lewis).
Recent work in political science has taken up the question of issue voting under conditions of uncertainty – situations in which voters have imperfect information about the policy positions of candidates. We move beyond the assumption of a particular spatial utility function and develop a model to estimate voters’ preferences for risk. Contrary to the maintained hypothesis in the literature, voters do not appear to have the strongly risk-averse preferences implied by quadratic preferences.

Transitional Winners and Losers: Attitudes Toward EU Membership in Post-Communist Countries. American Journal of Political Science. 2002. 46: 557-571. (with Joshua Tucker and Alexander Pacek).
We present a model of citizen support for EU membership designed explicitly for post-communist countries. We posit that membership in the EU can function as an implicit guarantee that the economic reforms undertaken since the end of communism will not be reversed. On this basis, we predict that “winners” who have benefited from the transition are more likely to support EU membership for their country than “losers” who have been hurt by the transition. We also predict that supporters of the free market will be more likely to support EU membership than those who oppose it. We predict that these effects will be present even after controlling for demographic factors. We test these propositions using survey data from ten post-communist countries that have applied for membership in the EU and find strong support for our hypotheses. The article concludes by speculating about the role attitudes towards EU membership may play in the development of partisan preferences.

Transitional Survey Analysis: Measuring Bias in Russian Public Opinion. Communist and Post-Communist Studies. 2006. 39(1): 73-99. (with Joshua Tucker).
For scholars, an exciting feature of the transitions occurring in the former Soviet Union and Eastern Europe has been the opportunity to study these transitions firsthand using modern social scientific tools. Perhaps no technique has been adopted as enthusiastically as survey research. Of course, the analysis of survey data has long been popular in more established democracies, such as the United States and Western Europe. In fields where the use of survey data has long been the norm, researchers have devoted considerable effort to dealing with the methodological complexities of survey data analysis. One particular concern involves the treatment of “don’t know” respondents. In this paper, therefore, we extend this line of research to survey data from the Russian Federation. Our major finding is that Russians who fail to answer survey questions tend to be less “liberal” than their counterparts who are able to answer survey questions. We use liberal here in the classical sense of the word as it has been applied to post-communist politics: pro-market reform, opposed to redistributive policies, pro-civil rights, and pro-Western foreign policy. Of these issue areas, the results are most clearly seen in the area of market reform, where it seems that Russia’s silent voices were largely apprehensive of change.