Current Projects

Here I list my current projects (with abstracts). Articles should be downloaded for personal use only. If you click on a paper title, you will be taken to a PDF version of the manuscript. You may need Adobe Acrobat, available free from Adobe, to read the files.


Misinformation

Examining Partisan Asymmetries in Fake News Sharing and the Efficacy of Accuracy Prompt Interventions. (with Brian Guay, Gordon Pennycook, and David Rand).
The spread of misinformation has become a central concern in American politics. Recent studies of social media sharing suggest that Republicans are considerably more likely to share fake news than Democrats. However, such inferences are confounded by the far greater supply of right-leaning fake news—Republicans may indeed be more prone to sharing fake news, or they may simply be more exposed to it. This article disentangles these competing explanations by examining sharing intentions in a balanced information environment. Using a large national survey of YouGov respondents, we show that Republicans are indeed more prone to sharing ideologically concordant fake news than Democrats, but that this gap is not large enough to explain differences in sharing observed online. Encouragingly, however, we also find that accuracy prompt interventions that reduce the spread of fake news are equally effective across parties, suggesting that fake news sharing among Republicans is not an intractable problem.

Reducing misinformation sharing at scale using digital accuracy prompt ads. (with Hause Lin, Haritz Garro, Nils Wernerfelt, Jesse Conan Shore, Adam Hughes, Daniel Deisenroth, Nathaniel Barr, Dean Eckles, Gordon Pennycook, and David Rand).
Interventions to reduce misinformation sharing have been a major focus in recent years. Developing “content-neutral” interventions that do not require specific fact-checks or warnings related to individual false claims is particularly important for building scalable solutions. Here, we provide the first evaluations of a content-neutral intervention to reduce misinformation sharing conducted at scale in the field. Specifically, across two on-platform randomized controlled trials, one on Meta’s Facebook (N=33,043,471) and the other on Twitter (N=75,763), we find that simple messages reminding people to think about accuracy—delivered to large numbers of users via digital advertisements—reduce misinformation sharing, with effect sizes on par with what is typically observed in digital advertising experiments. On Facebook, in the hour after receiving an accuracy prompt ad, we found a 2.6% reduction in the probability of being a misinformation sharer among users who had shared misinformation in the week prior to the experiment. On Twitter, over more than a week of receiving three accuracy prompt ads per day, we similarly found a 3.7% to 6.3% decrease in the probability of sharing low-quality content among active users who shared misinformation pre-treatment. These findings suggest that content-neutral interventions that prompt users to consider accuracy have the potential to complement existing content-specific interventions in reducing the spread of misinformation online.
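For readers who want a concrete sense of how the headline effect sizes above are expressed, here is a minimal sketch of a relative-reduction calculation. All probabilities and names are hypothetical illustrations, not the studies' actual data or analysis code.

```python
# A minimal sketch of the relative-reduction calculation behind the effect
# sizes reported above (e.g., a 2.6% reduction in the probability of being a
# misinformation sharer). All probabilities here are made up for illustration.

def relative_reduction(p_control: float, p_treatment: float) -> float:
    """Percent reduction in sharing probability relative to the control group."""
    return (p_control - p_treatment) / p_control * 100

# Hypothetical example: if 5.00% of control users and 4.87% of treated users
# shared misinformation in the hour after ad delivery, the relative reduction
# is about 2.6%, matching the scale of the effect reported on Facebook.
print(round(relative_reduction(0.0500, 0.0487), 1))  # 2.6
```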

Partisan consensus and divisions on content moderation of misinformation. (with Cameron Martel, Paul Resnick, Amy Xian Zhang, and David Rand).
Debates on how tech companies ought to oversee the circulation of content on their platforms are increasingly pressing. In the U.S., questions surrounding what, if any, action should be taken by social media companies to moderate harmfully misleading content on topics such as vaccine safety and election integrity are now being hashed out from corporate boardrooms to federal courtrooms. But where does the American public stand on these issues? Here we discuss the findings of a recent nationally representative poll of Americans’ views on content moderation of harmfully misleading content.

Understanding Americans’ perceived legitimacy of harmful misinformation moderation by expert and layperson juries. (with Cameron Martel, David Rand, Amy Xian Zhang, and Paul Resnick).
Content moderation is a critical aspect of platform governance on social media and is of particular relevance to addressing the belief in and spread of misinformation. However, current content moderation practices have been criticized as unjust. This raises an important question – whom do Americans want deciding whether online content is harmfully misleading? We conducted a nationally representative conjoint survey experiment (N=3,000) in which U.S. participants evaluated the legitimacy of hypothetical content moderation juries tasked with determining whether online content was harmfully misleading. These moderation juries varied in whether they were described as consisting of experts (e.g., domain experts), laypeople (e.g., social media users), or non-juries (e.g., a computer algorithm). We also randomized features of jury composition (size, necessary qualifications) and whether juries engaged in discussion during content evaluation. Overall, participants evaluated expert juries as more legitimate than layperson juries or a computer algorithm. However, modifying layperson jury features helped increase legitimacy perceptions – nationally representative or politically balanced composition enhanced legitimacy, as did increased size, individual juror knowledge qualifications, and enabling juror discussion. The most legitimate layperson jury configurations were perceived as comparably legitimate to expert panels. Republicans perceived expert juries as less legitimate than Democrats did, but still as more legitimate than baseline layperson juries. Meanwhile, larger lay juries with news knowledge qualifications that engaged in discussion were perceived as more legitimate across the political spectrum. Our findings shed light on the foundations of procedural legitimacy in content moderation and have implications for the design of online moderation systems.
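To make the conjoint design more concrete, here is a minimal sketch of how randomized jury profiles might be generated. The attribute names and levels are simplified, illustrative stand-ins, not the study's actual materials.

```python
import random

# Illustrative sketch of randomizing conjoint profiles of hypothetical content
# moderation juries. Attribute names and levels are simplified stand-ins for
# the design described above, not the study's actual materials.
ATTRIBUTES = {
    "decision_maker": ["domain experts", "social media users", "computer algorithm"],
    "size": ["3 members", "30 members", "3,000 members"],
    "qualification": ["no qualification required", "passed a news knowledge quiz"],
    "process": ["jurors decide independently", "jurors discuss before deciding"],
}

def random_profile(rng: random.Random) -> dict:
    """Draw one jury profile by sampling each attribute level uniformly at random."""
    return {attribute: rng.choice(levels) for attribute, levels in ATTRIBUTES.items()}

# Each respondent might rate the perceived legitimacy of several such profiles.
rng = random.Random(0)
for _ in range(3):
    print(random_profile(rng))
```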

Combating misinformation: A megastudy of nine interventions designed to reduce the sharing of and belief in false and misleading headlines. (with Lisa Fazio, David Rand, Stephan Lewandowsky, Mark Susmann, Andrew Guess, Panayiota Kendeou, Benjamin Lyons, Joanne Miller, Eryn Newman, Gordon Pennycook, and Briony Swire-Thompson).
Researchers have tested a variety of interventions to combat misinformation on social media (e.g., accuracy nudges, digital literacy tips, inoculation, debunking). These interventions work via different psychological mechanisms, but all share the goals of increasing recipients’ ability to distinguish between true and false information and/or increasing the veracity of news shared on social media. The current megastudy, with 33,233 US-based participants, tests nine prominent misinformation interventions in an identical setting using true, false, and misleading health and political news headlines. We find that a wide variety of interventions can improve discernment between true and false or misleading information in both accuracy and sharing judgments. Reducing misinformation belief and sharing is a goal that can be accomplished through multiple strategies targeting different psychological mechanisms.
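As a rough illustration of the discernment outcome described above, here is a minimal sketch that computes the gap between responses to true and false headlines under a control and an intervention condition. The ratings are fabricated placeholders, not the megastudy's data or analysis code.

```python
from statistics import mean

# A minimal sketch of a discernment-style outcome: the gap between responses to
# true headlines and responses to false or misleading ones, compared across a
# control and an intervention condition. Ratings are fabricated placeholders.

def discernment(ratings: list[tuple[bool, float]]) -> float:
    """Mean rating for true headlines minus mean rating for false/misleading ones."""
    true_scores = [score for is_true, score in ratings if is_true]
    false_scores = [score for is_true, score in ratings if not is_true]
    return mean(true_scores) - mean(false_scores)

# Each tuple is (headline is true?, perceived accuracy on a 1-4 scale).
control = [(True, 2.9), (True, 3.1), (False, 2.2), (False, 2.4)]
treated = [(True, 3.0), (True, 3.2), (False, 1.8), (False, 2.0)]

# A larger gap under the intervention indicates improved truth discernment.
print(discernment(control), discernment(treated))
```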

Understanding the Impact of AI-Generated Content

Labeling AI-Generated Media Online. (with Chloe Wittenberg, Ziv Epstein, Gabrielle Peloquin-Skulski, and David Rand).
Recent advancements in generative artificial intelligence (AI) have raised widespread concern about the use of this technology to spread audio and visual misinformation. In response, there has been a major push among policymakers and technology companies to label AI-generated media appearing online. It remains unclear, however, what labels are most effective for this purpose. Using two pre-registered survey experiments (total N = 7,579 Americans), we evaluate the consequences of different labeling strategies for viewers’ beliefs and behavior. Overall, we find that all the labels we tested significantly decreased participants’ belief in the presented claims. When it comes to engagement intentions, however, labels that merely informed participants that content was AI-generated tended to have limited impact, whereas labels emphasizing the content’s potential to mislead more strongly influenced self-reported behavior, especially in the first study. Together, these results shed light on the relative advantages and disadvantages of different approaches to labeling AI-generated media.

Data Collection and Quality

Representativeness versus Response Quality: Assessing Nine Opt-In Online Survey Samples. (with Michael Nicholas Stagnaro, James Druckman, Antonio Alonso Arechar, Robb Willer, and David Rand).
Social scientists rely heavily on data collected from human participants via surveys or experiments. To obtain these data, many social scientists recruit participants from opt-in online panels that provide access to large numbers of people willing to complete tasks for modest compensation. In a large study (total N=13,053), we explore nine opt-in non-probability samples of American respondents drawn from panels widely used in social science research, comparing them on three dimensions: response quality (attention, effort, honesty, speeding, and attrition), representativeness (observable demographics, measured attitude typicality, and responsiveness to experimental treatments), and professionalism (number of studies taken, frequency of taking studies, and type of device on which the study is taken). We document substantial variation across these samples on each dimension. Most notably, we observe a clear tradeoff between sample representativeness and response quality (particularly attention), such that samples with more attentive respondents tend to be less representative, and vice versa. Even so, we find that this tension can be largely eliminated for some of the more representative samples by adding modest attention filters. These and other insights enable us to provide a guide to help researchers decide which online opt-in sample is optimal given their research question and constraints.
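To illustrate the kind of representativeness-versus-quality tradeoff analysis described above, here is a minimal sketch that applies a simple attention filter and compares a sample's demographic composition with a population benchmark. The field names, attention criterion, and all numbers are hypothetical.

```python
# An illustrative sketch: apply a modest attention filter to a sample, then
# check how its demographic composition compares with a population benchmark.
# Field names, the attention criterion, and all numbers are hypothetical.

def attention_filter(respondents: list[dict], min_checks_passed: int = 1) -> list[dict]:
    """Keep respondents who passed at least `min_checks_passed` attention checks."""
    return [r for r in respondents if r["attention_checks_passed"] >= min_checks_passed]

def demographic_gap(sample: list[dict], benchmark: dict[str, float]) -> float:
    """Mean absolute difference between sample shares and benchmark population shares."""
    gaps = []
    for group, target_share in benchmark.items():
        share = sum(r["age_group"] == group for r in sample) / len(sample)
        gaps.append(abs(share - target_share))
    return sum(gaps) / len(gaps)

respondents = [
    {"age_group": "18-29", "attention_checks_passed": 2},
    {"age_group": "30-44", "attention_checks_passed": 0},
    {"age_group": "45-64", "attention_checks_passed": 1},
    {"age_group": "65+", "attention_checks_passed": 2},
]
benchmark = {"18-29": 0.21, "30-44": 0.25, "45-64": 0.33, "65+": 0.21}

# Filtering can improve response quality while shifting the demographic mix.
filtered = attention_filter(respondents)
print(demographic_gap(respondents, benchmark), demographic_gap(filtered, benchmark))
```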