How to Address an Infodemic: Experiments on (Dis)Information

28 April, London and online

PERITIA organised the workshop ‘How to Address an Infodemic: Experiments on (Dis)Information’, led by PERITIA members Carlo Martini (UniSR – Milan) as programme organiser and Liam Delaney (LSE, London) and Bobby Duffy (The Policy Institute – King’s College London) as local organisers.

The event brought together international experts working on experiments and disinformation, including John Cook (Monash Climate Change Communication Research Hub), Anastasia Kozyreva (Max Planck Institute for Human Development, Berlin) and Myrto Pantazi (Université libre de Bruxelles, Brussels), among others. The full-day programme comprised 10 presentations and a final panel discussion with invited speakers and members of the PERITIA project.

The workshop took place in a hybrid format at the London School of Economics in London on 28 April, from 9 AM (London time).

You can access the full list of video recordings here.

Programme (London time)

9.00 – 9.10 | Introduction

Carlo Martini (UniSR – Milan and University of Helsinki)

9.10 – 9.45 | Presentation 1: Fact-check or logic-check? Exploring different ways to counter misinformation

John Cook (Monash Climate Change Communication Research Hub)

Misinformation damages society in many ways. Inoculation theory offers a framework for building public resilience against misinformation by exposing people to a weakened form of it. Inoculation can be applied in a variety of ways. In this presentation, John Cook will explore two complementary approaches – facts and logic. Factual explanations can show how misconceptions are false, while logical explanations of the rhetorical techniques used to mislead can also reduce the effectiveness of misinformation. Both approaches have been shown to be effective, but there are subtle differences in their impact. This presentation will explore research into these approaches and how the findings were applied in the critical thinking smartphone game Cranky Uncle.

9.45 – 10.20 | Presentation 2: Toolbox of interventions against online misinformation and manipulation

Anastasia Kozyreva (Max Planck Institute for Human Development)

The spread of false and misleading information in online social networks is a global problem in need of urgent solutions. It is also a policy problem, because misinformation can harm public health and lead to adverse consequences for democracies. One of the key policy questions is how to prevent the spread of misinformation without resorting to censorship. Behavioural and cognitive interventions aim to address this challenge. In the present project, we conducted an expert review of such tools and interventions. The review focuses on two points of interest: (1) a conceptual overview of the interventions and (2) an overview of the empirical evidence supporting them and of the methods used to test them. The review is a collaborative effort of an international group of experts and covers the following interventions: accuracy prompts; debunking; inoculation; lateral reading in education and online; media literacy tips; rebuttals of science denialism; self-reflection tools; social norms; and warning and fact-checking labels.

10.20 – 10.55 | Presentation 3: The truth bias in a post-truth world: The bad and the good news

Myrto Pantazi (Université libre de Bruxelles)

We arguably live in an era of misinformation and “fake news”. A crucial question is what people do in the face of misinformation they encounter: do they tend to believe it, or are they capable of evaluating and rejecting it? I will present experimental research suggesting that humans have a strong but reversible tendency to believe the information they process. In several experiments we presented participants with fictional crime reports containing information tagged as true or false. We measured whether people tend to believe this information by testing both whether they would accurately remember the falsity of false statements and whether their judgements about the crime would be affected by the false statements. We consistently find that people tend to misremember false statements as true (more so than they misremember true statements as false) and that their judgements are contaminated by the false information. These patterns of results strongly support the existence of a truth bias in the processing of linguistically communicated information, that is, a default tendency to believe information we hear and read. Our studies also suggest that the truth bias is particularly resistant to factors that could reasonably be expected to mitigate it (such as accountability and professional experience in discerning truth from falsehood), but provide evidence that high motivation can be a mitigating factor. Interestingly, increased motivation leads people to process information in a more vigilant way but cannot “correct” for misinformation that has already been encountered. I will conclude by discussing the implications of these findings in view of the contemporary media landscape.

10.55 – 11.25 | Break 1

11.25 – 12.00 | Presentation 4: Measuring and reducing misinformation susceptibility

Sander van der Linden & Jon Roozenbeek (Cambridge Social Decision-Making Lab)

In this talk, we address two pertinent questions in misinformation research. First, although the study of misinformation has become increasingly popular in recent years, individual misinformation susceptibility has often been measured using ad-hoc item sets, without psychometric validation. This lack of standardisation makes it difficult to compare results across studies. Using the Misinformation Susceptibility Test (MIST), a psychometrically validated set of true and false news headlines that provides a standardised measure of misinformation susceptibility, we show that the use of different question framings (reliability, accuracy, etc.) and response modes (binary, 6-point and 7-point scales) does not result in major variations in response patterns, allowing for cross-study comparability. We also find that myside bias (i.e., open-mindedness) and political partisanship are more robust predictors of misinformation susceptibility than analytical thinking. Second, although psychological “inoculation” against common misinformation techniques is a promising avenue for reducing individual misinformation susceptibility, its scalability has been elusive. To address this issue, we designed five short, entertaining “inoculation” videos, each covering a different misinformation technique. We show that these videos are highly effective at conferring psychological resistance against future manipulation attempts, including in an ecologically valid field study on YouTube.
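
As a purely illustrative sketch (hypothetical column names and values, not the published MIST scoring procedure), a susceptibility measure of this kind can be derived from binary real/fake judgements of validated headlines, separating the ability to accept real news from the ability to reject fake news:

    import pandas as pd

    # Hypothetical ratings: one row per participant-headline pair.
    # "is_real" marks validated real headlines; "judged_real" is the
    # participant's binary response ("Do you think this headline is real?").
    ratings = pd.DataFrame({
        "participant": [1, 1, 1, 1, 2, 2, 2, 2],
        "is_real":     [1, 1, 0, 0, 1, 1, 0, 0],
        "judged_real": [1, 0, 0, 1, 1, 1, 0, 0],
    })

    def discernment(df):
        real = df.loc[df["is_real"] == 1, "judged_real"].mean()  # hit rate on real news
        fake = df.loc[df["is_real"] == 0, "judged_real"].mean()  # false alarms on fake news
        return pd.Series({
            "real_news_detection": real,
            "fake_news_detection": 1 - fake,
            "veracity_discernment": real - fake,  # higher = less susceptible
        })

    scores = ratings.groupby("participant").apply(discernment)
    print(scores)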

12.00 – 12.35 | Presentation 5: “I think this news is accurate”: Endorsing accuracy decreases the sharing of fake news, increases the sharing of real news, and keeps overall engagement constant

Tatiana Celadin (University of Bologna)

Accuracy prompts, nudges that make accuracy salient, are receiving considerable attention as a tool for fighting the spread of fake news, but they have one major limitation: they decrease overall sharing. This limits their applicability, because overall engagement represents one of the main profit motives for social media companies. Here, we overcome this limitation. We report four preregistered studies showing that an “endorsing accuracy” prompt (“I think this news is accurate”), placed on the sharing button, decreases fake news sharing, increases real news sharing, and keeps overall engagement constant. We also explore the mechanism through which the intervention works. The key results are specific to endorsing accuracy, rather than accuracy salience, and endorsing accuracy does not simply make participants apply a “source heuristic”. Finally, we use Pennycook et al.’s limited-attention model to argue that endorsing accuracy may work by strengthening the link between accuracy beliefs and sharing decisions.
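
The proposed mechanism can be pictured with a toy sharing rule (a sketch for intuition only, not Pennycook et al.’s formal model; all weights and values are hypothetical): sharing depends on a weighted blend of perceived accuracy and social appeal, and the endorsement prompt is assumed to raise the weight on accuracy.

    import numpy as np

    def share_probability(perceived_accuracy, social_appeal, accuracy_weight):
        """Toy sharing rule: a logistic blend of accuracy beliefs and
        social motives. All parameters are illustrative, not fitted."""
        utility = (accuracy_weight * perceived_accuracy
                   + (1 - accuracy_weight) * social_appeal)
        return 1 / (1 + np.exp(-6 * (utility - 0.5)))

    # Fake news: low perceived accuracy, high social appeal; real news
    # the reverse (purely hypothetical values).
    for label, acc, appeal in [("fake", 0.3, 0.8), ("real", 0.8, 0.5)]:
        baseline = share_probability(acc, appeal, accuracy_weight=0.3)
        endorse = share_probability(acc, appeal, accuracy_weight=0.7)
        print(f"{label}: {baseline:.2f} -> {endorse:.2f}")
    # Output: fake sharing drops (0.71 -> 0.43), real sharing rises (0.63 -> 0.78)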

12.35 – 14.00 | Lunch Break

14.00 – 14.35 | Presentation 6: Understanding and Reducing Online Misinformation Across 16 Countries on Six Continents

David Rand (Sloan School & Brain and Cognitive Sciences, MIT)

The spread of misinformation online is a global problem that requires global solutions. To that end, we conducted an experiment in 16 countries across 6 continents (N = 33,480) to investigate predictors of susceptibility to misinformation and interventions to combat misinformation. In every country, participants with a more analytic cognitive style and stronger accuracy-related motivations were better at discerning truth from falsehood; valuing democracy was also associated with greater truth discernment, whereas political conservatism was negatively associated with truth discernment in most countries. Subtly prompting people to think about accuracy was broadly effective at improving the veracity of news that people were willing to share, as were minimal digital literacy tips. Finally, crowdsourced accuracy evaluation was able to differentiate true from false headlines with high accuracy in all countries. The consistent patterns we observe suggest that the psychological factors underlying the misinformation challenge are similar across the globe, and that similar solutions may be broadly effective. For details, see the preprint at https://psyarxiv.com/a9frz or a Twitter thread summary at https://twitter.com/DG_Rand/status/1493946312353619981
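
The crowdsourcing result can be illustrated with a minimal aggregation sketch (hypothetical ratings, not the study’s data): average each headline’s crowd ratings, then check how well the averages separate true from false items.

    import numpy as np

    # Hypothetical crowd ratings: rows = raters, columns = headlines,
    # values = perceived accuracy on a 1-7 scale.
    ratings = np.array([
        [6, 2, 5, 1, 7, 3],
        [5, 3, 6, 2, 6, 2],
        [7, 1, 5, 2, 6, 4],
    ])
    truth = np.array([1, 0, 1, 0, 1, 0])  # 1 = true headline, 0 = false

    crowd_score = ratings.mean(axis=0)  # aggregate judgement per headline

    # Rank-based separation (an AUC, ignoring ties): how often a randomly
    # chosen true headline outscores a randomly chosen false one.
    true_scores = crowd_score[truth == 1]
    false_scores = crowd_score[truth == 0]
    auc = (true_scores[:, None] > false_scores[None, :]).mean()
    print(f"crowd AUC = {auc:.2f}")  # 1.00 here: the crowd fully separates them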

14.35 – 15.10 | Presentation 7: A simple intervention to inoculate citizens against microtargeting

Stefan Herzog (Max Planck Institute for Human Development)

Online platforms’ data give advertisers the ability to “microtarget” recipients’ personal vulnerabilities by tailoring different messages for the same thing, such as a product or a political candidate. One possible response is to raise awareness of, and resilience against, such manipulative strategies through psychological inoculation. Two online experiments (total N = 828) demonstrated that a short, simple intervention prompting participants to reflect on an attribute of their own personality—by completing a short personality questionnaire—boosted their ability to accurately identify ads that were targeted at them by up to 26 percentage points. Accuracy increased even without personalised feedback, but merely providing a description of the targeted personality dimension did not improve accuracy. I argue that such a “boosting approach”, which here aims to improve people’s competence to detect manipulative strategies themselves, should be part of a policy mix aiming to increase platforms’ transparency and user autonomy. Furthermore, I will suggest that this approach—inoculating people against strategies that try to exploit their personal vulnerabilities—can also be applied in the domain of politics and misinformation more broadly.

15.10 – 15.45 | Presentation 8: The effects of repetition on belief in naturalistic settings

Lisa Fazio (Vanderbilt University)

Repetition increases belief in false statements. This illusory truth effect occurs with many different types of statements (e.g., trivia facts, news headlines, advertisements), and even occurs when the false statement contradicts participants’ prior knowledge. New naturalistic studies show that these effects occur in real-world settings and with real-world misinformation. However, the effects are not inevitable; I will also discuss situations where repetition has only a minimal effect on belief.

15.45 – 16.15 | Break 2

16.15 – 16.50 | Presentation 9: How incentives increase recognition of scientific (mis)information 

Folco Panizza (IMT School for Advanced Studies Lucca) & Piero Ronzani (International Security and Development Center, Berlin)

The accuracy of information shared online often goes unnoticed as the architecture of social media diverts attention to other characteristics such as surprise and novelty. Recent research has shown that increasing deliberation is a cost-effective intervention to make users more aware of the content they share. Monetary incentives have been shown to be quite effective in this regard, but are not a scalable intervention. In this study, we compared monetary remuneration with two types of non-monetary incentives, leveraging social responsibility and self-image.

16.50 – 17.25 | Presentation 10: Lateral Reading on the Open Internet: A District-Wide Field Study in High School Government Classes 

Joel Breakstone (Stanford History Education Group)

In a study conducted across an urban school district, we tested a classroom-based intervention in which students were taught online evaluation strategies drawn from research with professional fact checkers. Students practiced the heuristic of lateral reading: leaving an unfamiliar website to search the open Web before investing attention in the site at hand. Professional development was provided to high school teachers who then implemented six 50-minute lessons in a district-mandated government course. Using a matched control design, students in treatment classrooms (n = 271) were compared to peers (n = 228) in regular classrooms. A multilevel linear mixed model showed that students in experimental classrooms grew significantly in their ability to judge the credibility of digital content. These findings inform efforts to prepare young people to make wise decisions about the information that darts across their screens.
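
For readers unfamiliar with this design, the sketch below shows the general shape of such a multilevel analysis on synthetic data (all variable names and numbers are hypothetical, not the study’s): a random intercept per classroom, a classroom-level treatment indicator, and a student-level pre-test covariate.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic stand-in for the study's data: students nested in
    # classrooms, with half of the classrooms receiving the lessons.
    rng = np.random.default_rng(42)
    classroom = np.repeat(np.arange(20), 25)        # 20 classrooms x 25 students
    treatment = (classroom < 10).astype(int)        # first 10 classrooms treated
    class_effect = rng.normal(0, 1, 20)[classroom]  # classroom-level variation
    pretest = rng.normal(50, 10, classroom.size)
    posttest = pretest + 5 * treatment + class_effect + rng.normal(0, 5, classroom.size)
    data = pd.DataFrame({"classroom": classroom, "treatment": treatment,
                         "pretest": pretest, "posttest": posttest})

    # A random intercept for classroom accounts for students being nested
    # within classrooms; the treatment coefficient is the adjusted gain.
    model = smf.mixedlm("posttest ~ treatment + pretest", data, groups=data["classroom"])
    print(model.fit().summary())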

17.25 – 18.25 | Panel Discussion

The H2020 PERITIA Project and invited speakers