How to Address an Infodemic: Experiments on (Dis)Information

28 April, London and online

PERITIA is organising the workshop ‘How to Address an Infodemic: Experiments on (Dis)Information’, led by PERITIA member Carlo Martini (UniSR – Milan) as programme organiser, with Liam Delaney (LSE, London) and Bobby Duffy (The Policy Institute – King’s College London) as local organisers.

The event will bring together international experts working on experiments and disinformation, including John Cook (Monash Climate Change Communication Research Hub), Anastasia Kozyreva (Max Planck Institute for Human Development, Berlin) and Myrto Pantazi (Université libre de Bruxelles, Brussels), among others. The full-day programme comprises ten presentations and a closing panel discussion with invited speakers and members of the PERITIA project.

The workshop will take place in a hybrid format at the London School of Economics in London on 28 April, starting at 9 AM (London time). The in-person event is fully booked. Online participants can register via this Zoom link.

Programme (London time)

9:00 – 9:10 | Introduction

Carlo Martini (UniSR – Milan and University of Helsinki)

9:10 – 9:45 | Presentation 1: Fact-check or logic-check? Exploring different ways to counter misinformation

John Cook (Monash Climate Change Communication Research Hub)

9:45 – 10:20 | Presentation 2: Toolbox of interventions against online misinformation and manipulation

Anastasia Kozyreva (Max Planck Institute for Human Development)

10:20 – 10:55 | Presentation 3

Myrto Pantazi (Université libre de Bruxelles)

10:55 – 11:25 | Break 1

11:25 – 12:00 | Presentation 4

Sander van der Linden & Jon Roozenbeek (Cambridge Social Decision-Making Lab)

12:00 – 12:35 | Presentation 5: “I think this news is accurate”: Endorsing accuracy decreases the sharing of fake news, increases the sharing of real news, and keeps overall engagement constant

Tatiana Celadin (University of Bologna)

12:35 – 14:00 | Lunch Break

14:00 – 14:35 | Presentation 6: Understanding and Reducing Online Misinformation Across 16 Countries on Six Continents

David Rand (Sloan School & Brain and Cognitive Sciences, MIT)

14:35 – 15:10 | Presentation 7: A simple intervention to inoculate citizens against microtargeting

Stefan Herzog (Max Planck Institute for Human Development)

15:10 – 15:45 | Presentation 8: The effects of repetition on belief in naturalistic settings

Lisa Fazio (Vanderbilt University)

15:45 – 16:15 | Break 2

16:15 – 16:50 | Presentation 9: How incentives increase recognition of scientific (mis)information

Folco Panizza (IMT School for Advanced Studies Lucca) & Piero Ronzani (International Security and Development Center, Berlin)

16:50 – 17:25 | Presentation 10: Lateral Reading on the Open Internet: A District-Wide Field Study in High School Government Classes

Joel Breakstone (Stanford History Education Group)

17:25 – 18:25 | Panel Discussion

Members of the H2020 PERITIA project and invited speakers

Abstracts

John Cook – Fact-check or logic-check? Exploring different ways to counter misinformation

Misinformation damages society in many ways. Inoculation theory offers a framework for building public resilience against misinformation by exposing people to a weakened form of it. Inoculation can be applied in a variety of ways. In this presentation, John Cook will explore two complementary approaches – facts and logic. Factual explanations can show how misconceptions are false, while logical explanations of the rhetorical techniques used to mislead can also reduce the effectiveness of misinformation. Both approaches have been shown to be effective, but there are subtle differences in the impact of each method. He will review research into these approaches and show how the findings were applied in the critical-thinking smartphone game Cranky Uncle.

Anastasia Kozyreva – Toolbox of interventions against online misinformation and manipulation

The spread of false and misleading information in online social networks is a global problem in need of urgent solutions. It is also a policy problem, because misinformation can harm public health and lead to adverse consequences for democracies. A key policy question is how to prevent the spread of misinformation without resorting to censorship. Behavioral and cognitive interventions aim to address this challenge. In the present project, we conducted an expert review of such tools and interventions. The review focuses on two points of interest: (1) a conceptual overview of the interventions and (2) an overview of the empirical evidence supporting them and the methods used to test them. The review is a collaborative effort by an international group of experts and covers the following interventions: accuracy prompts; debunking; inoculation; lateral reading in education and online; media literacy tips; rebuttals of science denialism; self-reflection tools; social norms; and warning and fact-checking labels.

Myrto Pantazi – TBA

Jon Roozenbeek – TBA

Tatiana Celadin – “I think this news is accurate”: Endorsing accuracy decreases the sharing of fake news, increases the sharing of real news, and keeps overall engagement constant

Accuracy prompts, nudges that make accuracy salient, are receiving considerable attention as a tool for fighting the spread of fake news, but they have one major limitation: they decrease overall sharing. This limits their applicability, because overall engagement represents one of the main profit motives for social media companies. Here, we overcome this limitation. We report four preregistered studies showing that an “endorsing accuracy” prompt (“I think this news is accurate”), placed on the sharing button, decreases fake news sharing, increases real news sharing, and keeps overall engagement constant. We also explore the mechanism through which the intervention works. The key results are specific to endorsing accuracy, rather than accuracy salience, and endorsing accuracy does not simply make participants apply a “source heuristic”. Finally, we use Pennycook et al.’s limited-attention model to argue that endorsing accuracy may work by strengthening the link between accuracy beliefs and sharing decisions.

David Rand – Understanding and Reducing Online Misinformation Across 16 Countries on Six Continents

The spread of misinformation online is a global problem that requires global solutions. To that end, we conducted an experiment in 16 countries across six continents (N = 33,480) to investigate predictors of susceptibility to misinformation and interventions to combat it. In every country, participants with a more analytic cognitive style and stronger accuracy-related motivations were better at discerning truth from falsehood; valuing democracy was also associated with greater truth discernment, whereas political conservatism was negatively associated with truth discernment in most countries. Subtly prompting people to think about accuracy was broadly effective at improving the veracity of news that people were willing to share, as were minimal digital literacy tips. Finally, crowdsourced accuracy evaluation was able to differentiate true from false headlines with high accuracy in all countries. The consistent patterns we observe suggest that the psychological factors underlying the misinformation challenge are similar across the globe, and that similar solutions may be broadly effective. For details, see the preprint at https://psyarxiv.com/a9frz or a Twitter thread summary at https://twitter.com/DG_Rand/status/1493946312353619981.

Stefan Herzog – A simple intervention to inoculate citizens against microtargeting

Online platforms’ data give advertisers the ability to “microtarget” recipients’ personal vulnerabilities by tailoring different messages for the same product or political candidate. One possible response is to raise awareness of, and resilience against, such manipulative strategies through psychological inoculation. Two online experiments (total N = 828) demonstrated that a short, simple intervention prompting participants to reflect on an attribute of their own personality (by completing a short personality questionnaire) boosted their ability to accurately identify ads that were targeted at them by up to 26 percentage points. Accuracy increased even without personalized feedback, but merely providing a description of the targeted personality dimension did not improve accuracy. I argue that such a “boosting” approach, which here aims to improve people’s competence to detect manipulative strategies themselves, should be part of a policy mix aiming to increase platforms’ transparency and user autonomy. Furthermore, I will suggest that this approach, inoculating people against strategies that try to exploit their personal vulnerabilities, can also be applied to politics and misinformation more broadly.

Lisa Fazio – The effects of repetition on belief in naturalistic settings

TBA

Folco Panizza – How incentives increase recognition of scientific (mis)information 

TBA

Joel Breakstone – Lateral Reading on the Open Internet: A District-Wide Field Study in High School Government Classes 

In a study conducted across an urban school district, we tested a classroom-based intervention in which students were taught online evaluation strategies drawn from research with professional fact checkers. Students practiced the heuristic of lateral reading: leaving an unfamiliar website to search the open Web before investing attention in the site at hand. Professional development was provided to high school teachers who then implemented six 50-minute lessons in a district-mandated government course. Using a matched control design, students in treatment classrooms (n = 271) were compared to peers (n = 228) in regular classrooms. A multilevel linear mixed model showed that students in experimental classrooms grew significantly in their ability to judge the credibility of digital content. These findings inform efforts to prepare young people to make wise decisions about the information that darts across their screens.