Rethinking Policy, Expertise and Trust
International Conference | University College Dublin | 23 – 25 March 2023
See below for the detailed programme, including abstracts, or download the PDF programme.
Just over a month before PERITIA’s closing event at the European Parliament in Brussels, the UCD O’Brien Centre for Science hosted the project’s final academic conference, ‘Rethinking Policy, Expertise and Trust’. Scholars from a wide range of disciplines examined the breadth of topics encompassed by PERITIA’s investigation of public trust in expertise. Researchers from philosophy, sociology, political science, media studies, public policy and science communication presented their most recent findings on key themes of the project, including:
• The scope and nature of public (dis)trust
• Factors contributing to (dis)trust toward experts
• Critical assessments of the role of experts in policy decisions
• The role of social media in establishing (dis)trust
• The key indicators of experts’ trustworthiness
• Psychological, affective, and cognitive mechanisms relevant to (dis)trust
• Empirical studies of trust and trustworthiness
Highlights
Over three days, participants had the opportunity to attend over 50 paper presentations and 13 panel sessions. The conference kicked off with the launch of PERITIA’s new podcast, ‘The Trust Race’, hosted by Shane Bergin, in part based on the PERITIA lecture series [Un]Truths: Trust in an Age of Disinformation. Other highlights included opening remarks by PERITIA Project Leader Maria Baghramian and a roundtable discussion on strategic crisis management in the EU co-organised by SAPEA and the Royal Irish Academy: ‘Why do we need public trust in times of crisis?’ This session was filmed for those who couldn’t be present in Dublin, but for the lucky in-person attendees, a reception followed directly afterward. On Saturday afternoon, the conference closed with a panel comparing perspectives from three Horizon 2020 projects (PERITIA, EnTrust, and TiGRE) as well as concluding remarks by Professor Onora O’Neill.
Download the final programme (without abstracts).
For questions regarding the conference, please send an email to [email protected].
Programme
8.30 – 9.00 | On-Campus Registration | UCD O’Brien Centre for Science
9.00 – 9.30 | Welcome & PERITIA Podcast Launch – ‘The Trust Race’ | Moore Auditorium
Maria Baghramian | University College Dublin
Shane Bergin | University College Dublin
Shaun O’Boyle | Podcast Producer
9.30 – 11.00 | Paper Presentations: Parallel Sessions Day 1
Room: SCIE 1.19
Chair: Silvia Ivani | University College Dublin
9.30 | Jilted Experts and Shifting Standards
Zara Anwarzai | Indiana University
When non-experts openly disavow expert judgement or reject expert guidance, we tend to think something is amiss – perhaps with the rational deliberation of the non-expert, the expert herself, or both. This diagnosis overlooks the extent to which the standards of expertise are context-dependent and group-determined. I take experts to be skilled actors with relatively superior competence to other skilled actors in a domain. What counts as superior competence varies across contexts and need not track a performer’s skill. Whether a violinist is the best performer may depend on whether she evokes an emotional response from the crowd but not on whether she is very good at playing in tune or in tempo. What counts as a better performance varies with the evaluator, the standards of evaluation, the values and interests of the evaluators, and the role a performance serves. Understanding why standards of expertise shift and what those standards are is critical for understanding why non-experts don’t follow experts. I argue that the main concern is not that non-experts don’t defer to or trust experts. It is that non-experts are not relying on the right experts, or at least, the right aspects of a skilled actor’s performance. When someone takes their physician’s advice about avoiding a COVID-19 vaccine based on their physician’s political alignment, they are still deferring to an expert. However, the standards of expertise differ from those that justify deferring to a White House COVID-19 expert. If the standards of expertise shift, then our justification for following expert guidance must shift accordingly. And if our standards of expertise are unrepresentative of an actor’s level of skill, then it is worth rethinking either those standards or the relation we ought to bear to those experts.
10.00 | Expertise and the Service Conception of Authority
Armin Mašala | University of Zurich
Joseph Raz’s service conception of authority has been the subject of much debate in political philosophy. I add to this debate by arguing that Raz’s service conception is insufficient to explain a particular case of illegitimate authority, exemplified by a case study from the history of Western European deaf education. I propose that we ought to add a further condition to the service conception to remedy this insufficiency: the improvement condition. It states that an authority is legitimate only if it makes efforts to improve its legitimacy. Furthermore, I argue that the improvement condition allows us to capture important intuitions about the role of expertise in legitimizing practical authority, by showing the importance of expertise in authoritative policy decisions. At the same time, the improvement condition limits the role of experts in authoritative deliberation.
10.30 | Non-Epistemic Values and Expert Testimony
Jörn Wiengarn | Technical University Darmstadt
The role of non-epistemic values in science remains a controversial topic in the philosophy of science. More recently, scholars such as Gregor Betz and Tobias Henschen have voiced concern about the influence of non-epistemic values in science and put forward new arguments in defence of the value-free ideal. The aim of this paper is to respond to their concern while drawing some lessons from this debate about the nature of expert testimony. To this end, I first reconstruct the more recent developments within the debate on values in science. More precisely, I highlight that the core of the debate revolves around the question of whether scientists, in their role as policy advisors who provide expert testimony, are irreducibly responsible for non-epistemic values. In a second step, I turn to the phenomenon of expert testimony, which has so far received little attention in the philosophical literature. Roughly speaking, I argue for an understanding of expert testimony as a form of complexity reduction. This provides the background for responding to the defenders of the value-free ideal. Accordingly, I aim to show in a third step where the challenges lie in thinking of expert testimony as a value-neutral form of communication.
Room: SCIH 1.12
Chair: Silvia Caprioglio Panizza | University of Pardubice
9.30 | Can It Be Rational to Distrust Experts?
Leon Assaad | LMU Munich
Why do so many people not believe in anthropogenic climate change, and why has trust in government institutions eroded during the pandemic? It is tempting to believe that distrust in expert sources (both institutions and individuals) is grounded in irrationality. This paper argues that even rational agents can come to distrust expert sources under certain conditions. To this end, a series of Bayesian models (based on Bovens and Hartmann 2003 and Duijf 2021) are developed and analyzed. These models illuminate a paradigmatic situation in which a rational layperson receives a statement about a hypothesis from a single expert or a group of experts. The models show that trust can easily erode under three conditions: a. Inconsistency of the source, b. Perception of a common motivation among multiple sources, and c. Contradiction of the agent’s firmly held beliefs. First, if the source repeatedly disseminates inconsistent information, rational agents will slowly but surely lose trust – a finding consistent with an observed loss of trust in institutions disseminating information about COVID-19 (Magarini et al. 2021). Second, if a rational layperson receives testimony about a hypothesis from multiple experts, they are more likely to lose trust if they perceive the sources to share a common incentive or motivation. This finding is consistent with polls about distrust in climate scientists, who are often perceived to operate on shared political convictions (Funk and Kennedy 2016). Third, if a layperson’s firmly held beliefs are contradicted by an expert source, then they have reason to decrease their trust in said source. To reconstruct this phenomenon, one need not assume that the layperson is biased, or otherwise irrational. These models show that distrust in experts need not be the result of irrationality, and therefore highlight the importance of remedying the identified problems in science communication, while urging us to reevaluate common opinions about the origins of anti-scientific beliefs.
References
Bovens, Luc, and Stephan Hartmann. 2003. Bayesian Epistemology. Oxford: Oxford University Press.
10.00 | Hermeneutical Delusions as a Source of Distrust
Gonçalo Marcelo | CECH, Universidade de Coimbra
This paper aims to analyze contemporary sources of intersubjective and institutional distrust, such as weird beliefs, conspiracy theories and fake news, proposing to conceptualize them as “hermeneutical delusions”. I propose to define these delusions as being interpretive (rather than merely cognitive) in nature, that is, led by a certain framework of interpretation. They are fostered by epistemically unsound environments and often tied to reasons related to ideology and identity. The paper starts with an account of trust and recognition as a precondition for the intersubjective establishment of the social bond, arguing that a default position of distrust in others is ultimately a case of misrecognition, whereas the same attitude towards institutions entails the dissolution of the legitimacy granted to a given social order. Looking for the sources of these phenomena in contemporary Western democracies, the paper moves to an account of ideology as a specific case of a collective system of beliefs. Distinguishing, following Paul Ricœur, between ideology as a constitutive function of our social imaginaries, and pathological ideologies as a source of distortion, the paper argues that seemingly disparate phenomena such as science denialism, political polarization and conspiracy theories all have roots in hermeneutical delusions. Finally, the paper will provide some more detail on the epistemic account of these phenomena, namely the adhesion to core beliefs (which might variously be described as “hinge propositions” in a Wittgensteinian vein) and their resistance to revision in light of evidence or expert opinion, and elaborate on the possibility of non-paternalistic democratic dialogue to address the problems that these phenomena pose to public policy and democratic governance alike.
10.30 | Contagions of Distrust: The Epistemic Vulnerability of Misinformation Groups
Cayla Clinkenbeard | New School for Social Research
How should we understand the epistemic vulnerability of Q-anon, climate change deniers, and anti-vaccination groups? The echo chamber is the dominant metaphor to describe epistemically pernicious communities. An echo chamber is an insular community in which only members’ voices reverberate, giving them the impression that their testimonies are epistemically stronger than they really are. Nguyen (2020) characterizes the perniciousness of echo chambers as a manipulation of trust: they isolate members from relevant information by systematically discrediting outsiders and inflating members’ credibility. However, this diagnosis fails to distinguish epistemically healthy and unhealthy communities and invites a misleading conception of the boundaries of such groups. Instead, I argue that contagion is a more accurate metaphor for understanding the vulnerability to misinformation that motivates the echo chamber view, and that epistemic nihilism is the best criterion for identifying perniciousness. An epistemic community functions like a network of trust, with members who rely on each other to assess the credibility of information and informants. So long as at least one member belongs to at least one other network, endorsements can trigger chain reactions of influence between other trust networks, like a contagion. Pernicious networks, such as political and corporate interest groups, design misinformation for the purpose of destroying reasonable beliefs rather than necessarily engendering false beliefs. Epistemic nihilism abandons the standards with which we evaluate purported knowledge and offers no positive alternative. Nihilistic networks mislead by presenting themselves as offering knowledge or reasonable criticism when they really offer only doubt and distrust. One advantage of the nihilistic contagion view is that it makes it easier to see how pernicious communities are guilty of abusing trust rather than stubbornly holding false beliefs. I also suggest how the nihilistic contagion view calls for different strategies for combatting such groups than the echo chamber view implies.
Nguyen, C. Thi. 2020. “Echo Chambers and Epistemic Bubbles.” Episteme 17(2): 141–61.
Room: SCIH 0.12
Chair: Melanie Altanian | University College Dublin
9.30 | Epistemic Democracy and Competence
Arshak Balayan | American University of Armenia
Optimists about epistemic democracy have argued that, despite the public’s well-recorded ignorance, democratic decision making is not only morally but also epistemically better than the alternatives: as a decision-making mechanism, majority vote through free and fair elections outperforms decisions by expert bodies. There are a number of theoretical explanations of why this might be the case. This paper discusses the explanation based on the so-called Diversity Trumps Ability theorem proved by Lu Hong and Scott E. Page. This explanation claims, in essence, that if a problem is complex enough, then a large number of diverse and ‘smart’ problem solvers outperforms a small number of individually best problem solvers. The paper closely analyzes Page’s writings and the literature they have triggered, and assesses their implications for democracy. One conclusion this exploration leads to is that more inclusiveness, and hence greater diversity of decision makers, will hardly suffice. Democracies still need to rely heavily on experts. This is because, in addition to the well-known problems of low citizen motivation, the aggregation of beliefs and preferences, etc., we have no reason to assume that the general public is ‘smart’ in the way Page’s model requires.
10.00 | Trustworthiness is Political
Matt Bennett | University of Essex
It is common for trustworthiness to be treated as a virtue that some public institutions have, and others do not, solely in virtue of other characteristics of the institution. But exclusive attention to the institutional properties that support trustworthiness occludes another significant feature of trustworthiness: trustworthiness is agent-relative, in the sense that whether an institution is trustworthy depends on contingent facts about potential trusters. Sometimes an institution is trustworthy for some but not for others; sometimes, changes to a person can change whether an institution merits their trust. The goal of this paper will be to make the case for thinking that institutional trustworthiness is agent-relative, and that some of its agent-relative features render trustworthiness political. I will argue that the contingencies about trusters that affect an institution’s trustworthiness include whether their values align with those of the institution, and the person’s social status, power, and vulnerability to the institution. I will argue further that these two factors of value-alignment and vulnerability make trustworthiness political, in the sense both that trustworthiness requires sufficient agreement between truster and trusted on what is most important, and that an institution will sometimes fail to be trustworthy because of the social position of the truster.
10.30 | The Uneasy Alliance of Populism and Technocracy: The Case of ANO Movement
Giuseppe Ballacci | University of Minho
Jan Bíba | Charles University
Recently a number of scholars have questioned the traditional opposition between technocracy and populism, arguing that a new form of technocratic populism has emerged in many democracies. Bickerton and Invernizzi Accetti’s recent book Technopopulism represents the most compelling example of this perspective. According to them, a new ‘technopopulist’ logic has become structural in contemporary democracies, displacing traditional ideological politics and its pluralist and mediated view of society. In this paper we engage critically with this work to reaffirm the differences between technocracy and populism. Even if Bickerton and Invernizzi Accetti are right in identifying significant analogies between them, we think they overemphasize their proximity due to an underdeveloped analysis of both populism and technocracy, oblivious to the links between their ideational and stylistic/performative profiles. Our analysis, instead, assesses populism and technocracy by taking this link as central. While technocracy bases its legitimacy claims on trust in the rulers’ knowledge and, thus, on the distance between leaders and the people, populism proclaims responsiveness based on the identity of the leader and the people. We suggest that these different legitimacy claims result in different performative styles, making any articulation between populism and technocracy inevitably unstable. To prove our argument, we focus on the case of the ANO movement, a successful Czech technopopulist party, and its leader Andrej Babiš. During the pandemic, ANO faced a reputational threat due to its inability to provide expert solutions. We show that ANO chose to counter this crisis by abandoning the technocratic style that emphasizes the epistemic difference between leaders and people, and instead strengthening the populist emphasis on identity and responsiveness to the people. This case, we argue, shows how in moments of crisis technocracy and populism point in opposite directions, which implies that any combination of the two is intrinsically unstable.
Room: SCIH 1.51
Chair: Maria Baghramian | University College Dublin
9.30 | Youth Activism and the Climate Emergency: The Public Sphere and Vulnerability
Paul Giladi | Manchester Metropolitan University
For Jürgen Habermas (1987, 1989), late capitalism – particularly its unfettered market iterations – sees instrumental reason and action dominate communicative reason and action, to the extent that communicative practices risk being replaced by instrumental vocabularies and activities. The public sphere, defined by Habermas as the principal modern institution of the lifeworld, is in danger of being decomposed (and eventually colonised) in late capitalism.
My aim in this paper is to argue that youth activism about the Climate Emergency is significant for at least two reasons. First, at the macrosociological level, this extraordinary activism may challenge Habermas’s famous worry that late capitalism brings about the ‘decomposition of the public sphere’: for, while it is reasonable to think Habermas’s pessimism in 1962 is prescient about the bourgeois public sphere, youth activism about the Climate Emergency seems to be a paradigm of communicative action, so much so that youth activism either represents an entirely different public sphere free from colonisation, or is a way in which the bourgeois public sphere itself can re-compose and de-feudalise. Second, at the epistemic-social level, this extraordinary activism is important because it provides significant conceptual resources to radically re-think the epistemic status of children, as well as to radically question long-standing assumptions about vulnerability and power: youth activism is a paradigmatic example of Arendtian power and epistemic authoritativeness, in that it is geared towards public trust in expert inquiry, so much so that children are seen and heard as agents occupying an important and vulnerable position in the space of reasons. Children’s vulnerability, however, is not ‘weakness’ or ‘fragility’, but a multidimensional democratic disposition expressing the need for wider networks of public trust in expert inquiry and care.
10.00 | Feasible & Neutral Science Advice: The Intergovernmental Panel on Climate Change & the 1.5°C Target
Torbjørn Gundersen | University of Oslo
In this paper, I examine how expert bodies can provide feasible policy recommendations in a neutral manner. To do this, I examine the case of the Intergovernmental Panel on Climate Change’s (IPCC) special report on how governments can limit global warming to 1.5°C above pre-industrial levels. According to its mandate, the IPCC is designed to provide expert assessments to governments and other stakeholders in an objective and neutral manner. I will argue that the special report highlights some central challenges for the IPCC in dealing explicitly with political aims and policies in a neutral manner, pertaining to the experts’ risk of tacitly approving unrealistic aims and creating public misperceptions of what can be done.
10.30 | Integrating Localized Expert Judgment for Climate Change Policy Making in the Face of Extreme Events
Mason Majszak | University of Bern
This paper investigates a means of providing local information to policy makers for adapting to regional climate change, given concerns about extreme events. Local adaptation poses unique problems, as global circulation models often use a grid size too coarse for local needs, and while there are efforts to produce regional models, these are not widespread. Additionally, with an increase in extreme events (Rahmstorf and Coumou 2011), public policy may be concerned not only with the projected average climate future but also with possible future extremes. Climate extremes, or the tails of the distribution, have also been subject to model biases, affecting their representation within the projection (Brown et al. 2014). Thus, practical decision making is left with high uncertainty regarding these extremes. I argue that due to this high epistemic uncertainty, utilizing expert judgment within the climate science and policy interaction is warranted. Expertise is uniquely positioned to make claims and provide judgments under high uncertainty, overcoming this decision-making problem at the local level. I argue that domain experts for specific regions possess tacit knowledge that is difficult to articulate but relevant to complex decision making and forecasting for the specific area. However, subjective aspects of the expert, such as the values an expert uses, can enter implicitly into the creation of a judgment and are not guaranteed to be democratically endorsed, thus making their use potentially illegitimate (Intemann 2015). I argue that expert elicitation protocols can be used to access this dense tacit knowledge while managing the subjectivity of the experts (Lam and Majszak 2022). I conclude that the aggregation of local domain expert judgments should be used to overcome the high uncertainty in local policy making, and I suggest a proper protocol for managing the non-epistemic values present in the judgments.
Room: SCIH 2.12
Chair: Tiffany Morisseau | LaPEA University Paris Cité
9.30 | Trust as a Cultural Characteristic: The Case of Sweden
Orlaith Rice | University College Dublin
Sweden is a high-trust state. Levels of both interpersonal and institutional trust are high by global standards. As seen during the Covid-19 pandemic, Swedes not only trusted their government but also placed very high levels of trust in the public health experts who were largely in charge of pandemic policy and recommendations. Based on qualitative, empirical data (focus group discussions and semi-structured elite interviews conducted with people in Sweden), this sociolegal conference paper explores the extent to which trust can be embedded in an entire culture or nation. It will also examine the foundations of this trust.
10.00 | Legitimate Distrust and Values in Public Health
Elena Popa | Jagiellonian University
Discussions of distrust in the context of science and expertise have singled out, among other things, that unjust institutions or participation in past patterns of oppression warrant distrust, particularly among disadvantaged groups (Scheman 2001; Grasswick 2017). This is in line with work connecting trust with acting as justice requires and exploring the democratic value of distrust (Krishnamurthy 2015). Baghramian and Caprioglio Panizza’s (2022) introduction of the ‘climate of trust’ as a way of countering legitimate distrust and skepticism aims to address these problems by allowing for a fair share of costs and benefits and including all groups concerned. This paper will explore how practices within science itself, of which experts are representatives, can contribute towards a climate of trust. I will argue that in addition to the transparency and deliberation about values advocated by Baghramian and Caprioglio Panizza (2022), a commitment to justice can make science more trustworthy. This is necessary because various patterns of oppression, which I will place under the broader label of epistemic injustice, lead scientists to neglect the perspectives and interests of disadvantaged groups, generating worse outcomes and justified distrust. I will illustrate this with a case regarding Native American women of reproductive age in areas affected by mercury pollution where fish is the main source of food (Mansfield 2012). Public health advice that focuses on preventing neurological conditions in newborns through the mothers’ individual responsibility to remove contaminated fish from their diet, while overlooking the role of the main pollutants, can lead to justified distrust. By contrast, taking a value-promoting stance (Russo 2021) and counting social causes as proximate can help pin down the socio-economic circumstances affecting one’s ability to avoid or replace certain types of food sources, provide alternatives as needed, and support calls for environmental regulation in the long term.
10.30 | Protecting and Building Public Confidence in Statistics and Evidence
Mary Cowan | Office for Statistics Regulation
The Office for Statistics Regulation (OSR) is responsible for regulating the UK’s official statistics and we have a vision that statistics will serve the public good. We created and maintain the Code of Practice for Statistics, which sets the standards that people who produce official statistics should commit to. The Code contains three pillars: Trustworthiness, Quality and Value. In this talk, we will expand on the Trustworthiness pillar and discuss why we believe that trustworthiness, rather than trust, is a highly valuable construct which helps statistics producers to focus on what matters and build public confidence in their statistics. We will also discuss how this relates to one of our key roles: challenging the misrepresentation of statistics and other forms of evidence in public debate. The issues we investigate are often raised by members of the public and provide insight into factors that promote mistrust and erode public confidence, in addition to how trust and public confidence can be strengthened. In our last point, we will discuss what our research has told us about how public trust in statistics can be improved, for example, through the universal adoption of best practice data safeguarding. This finding comes from our most recent project which explored public perceptions of the public good use of data for research and statistics. We used a public dialogue method to explore this topic with 68 members of the public. This project was part of our wider Research Programme which is dedicated to developing the understanding of public good. This work makes a significant contribution to the evidence base on public good and contains important lessons on maintaining and increasing public trust which are highly relevant to the research and statistical system.
11.00 – 11.30 | Coffee Break | UCD O’Brien Centre for Science
11.30 – 13.00 | Panel A
Room: Moore Auditorium
Chair: Bobby Duffy | King’s College London
James Wright | King’s College London
Kirstie Hewlett | King’s College London
Liam Delaney | London School of Economics
13.00 – 14.00 | Lunch | UCD O’Brien Centre for Science
14.00 – 15.30 | Panel B
Room: Moore Auditorium
Chair: Suzanne Hall | King’s College London
Emma Fletcher-Barnes | University College Dublin
Kirstie Hewlett & Zara Regan | King’s College London
Annie Howitt | Sense About Science
Daniel Kaiser | ALLEA
Liana Simonyan | American University of Armenia
15.30 – 16.00 | Coffee Break | UCD O’Brien Centre for Science
16.00 – 17.30 | Panels C-D: Parallel Sessions Day 1
Room: Moore Auditorium
Chair: Luke Drury | ALLEA
Siobhán O’Sullivan | Royal Irish Academy
Tony Ross-Hellauer | Technical University Graz
Mara Almeida | Enlightenment 2.0, Joint Research Centre (JRC), European Commission
Room: SCIH 1.51
Adam Kelly | University College Dublin
Matt Prout | University of Bristol
Katerina Pavlidi | University College Dublin
17.45 – 18.00 | Welcome Address | Moore Auditorium
Professor Orla Feely | Vice President for Research, Innovation and Impact | Incoming UCD President
18.00 – 19.00 | Roundtable [Online Live Stream]
In cooperation with SAPEA Science Advice for Policy & Royal Irish Academy
Room: Moore Auditorium
Moderator: Alison O’Connor | Journalist & Broadcaster
Dominika Latusek-Jurczak | SAPEA Working Group
Alberto Melloni | Chief Scientific Advisor to the European Commission
Pete Lunn | Economic and Social Research Institute, Ireland
Denis Naughten | Teachta Dála, Galway/Roscommon
19.00 – 20.00 | Reception | UCD O’Brien Centre for Science
Hosted by SAPEA and the Royal Irish Academy
9.00 – 11.00 | Paper Presentations: Parallel Sessions Day 2
Room: SCIE 1.19
Chair: Arshak Balayan | American University of Armenia
9.00 | Taking Advice or ‘Sticking to Their Guns?’: Future Policy Makers’ Consideration of Expert Advice in Policy Scenario Exercises
Laura Prokic & Laura Simonyan | American University of Armenia
Does the next generation of leaders and policymakers trust experts? Using a sample of 21 undergraduate and graduate students enrolled in political science programs at the American University of Armenia, this study examines whether students with a special interest in public decision making consider advice from experts when making hypothetical policy decisions in the Armenian context. Participants were first surveyed about their positions on key policy issues. Then, using carefully constructed policy scenario exercises, participants were exposed to scenarios related to these issues with written advice from experts and non-experts and asked to make hypothetical policy decisions. Participants then explained how they reached their policy decisions to determine whether the expert advice was considered. Focus groups were later conducted to gain insights into participants’ decision-making processes and their sentiments towards experts and expert advice. Results indicate that participants rarely considered expert advice when making policy decisions, especially when the expert-recommended course of action did not match their personal beliefs. Participants focused on normative concerns and value judgments, balancing between competing interests, the development of their country, and pragmatism when making hypothetical policy decisions. Understanding these students’ current level of trust towards expert advice offers important insight into how future generations may approach the policymaking process in Armenia. Results may also identify opportunities to strengthen students’ consideration and incorporation of expert advice in policy decisions. The policy scenario exercises are also useful for examining the decision-making strategies of policy makers.
9.30 | Disagreement and Epistemic Injustice
Melanie Altanian | University College Dublin
When it comes to issues relevant for public policy, such as social injustice, anthropogenic climate change, or public health, those who do not believe in some expert consensus view on the issues at hand often seek to frame them as matters of valid disagreement. Those who defy the consensus view usually seem to support their contrarian beliefs by giving comparatively excessive credibility to minority expert testimony – or at least, to the testimony of those who are perceived to have expertise – while deflating the credibility of testimony supporting the consensus view. By creating a “false balance”, the issue at hand can be framed as proper testimonial disagreement, as an expression of two different, valid opinions on a subject matter. However, such differential assessments of credibility, or epistemically deficient treatments of testimony more generally, can have deeper social structural roots, which matter especially when it comes to policy issues concerning social justice. They can then give rise to distinctly epistemic wrongs. In this paper, I address the relationship between disagreement and epistemic injustice and investigate the conditions under which disagreement perpetuates and maintains epistemic injustice. I focus on one form of epistemic injustice in particular, namely willful hermeneutical ignorance, and hence on cases of harmful disagreement manufactured in order to obscure expert consensus or entrench confusion so as to maintain an unjust status quo.
10.00 | Trust, Moral Expertise, and Sports Policy
John William Devine | Swansea University
Sporting competition is founded on a bedrock of trust. From doping and match-fixing to competition classification (e.g. according to age, gender, disability), the integrity of competition rests on trust-based rules which can be flouted with relative impunity by athletes. This poses a challenge to public trust in sport: can the public rationally believe in the achievements of their sporting icons? The existence of trust-based rules places a premium on the trustworthiness of athletes. Indeed, the existence of such rules implies that the integrity of sporting competition hinges on the trustworthiness of athletes. The moral significance of trust-based rules has not yet been reflected in the growing applied ethics literature on sports policy. I contend that efforts to secure compliance with trust-based rules should shift from an incentive-based model to one that aims to cultivate the relevant kind of trustworthiness among those in whom trust is placed in sport. The trustworthiness required to sustain the integrity of sporting competition has motivational and competency components. First, athletes must internalise norms that motivate them to act in ways that reliably sustain competition integrity. Second, athletes must acquire expertise – including a specific form of moral expertise – to achieve the competency necessary for trustworthiness in sport. I argue that policymakers in sport should recognise the need for, and institute mechanisms by which to cultivate, the moral expertise made necessary by sport’s dependence on trust-based rules. Such policy intervention is necessary if athletes are to acquire and sustain the trustworthiness that sport requires.
10.30 | Epistemic Gatekeeping for Public Debates: A Dynamic Approach
Maurizio Mascitti & Luca Ausili | University Vita-Salute San Raffaele
Unrestrained and sloppy selection of trustworthy contributors to public debates can have unpalatable consequences, e.g. by validating potentially dangerous beliefs and behaviors in contexts of high emotional saliency. However, in democratic societies, public debate management lacks distinct criteria capable of cutting down those risks. In this work, we propose a set of dynamic epistemic criteria to select experts and make the most of their advice. Our gatekeeping design is dynamic since those who are in charge of applying the aforementioned criteria – the gatekeepers – should constantly refer to them to modulate experts’ participation in debates of high socio-epistemic relevance. First, we refer to standard criteria for the attribution of expertise in the literature (Origgi 2022). Since expertise can either be ‘eroded’ or refined over time (Collins & Evans 2007), inclusion in and exclusion from specific debates are not definitive: experts can flexibly enter and exit public arenas depending on the evolution both of their competence and of the state of affairs. Second, our criteria are conceived as a tool for gatekeepers to manage the subsequent contribution of selected experts throughout the evolution of the debate. For instance, one of our criteria requires addressing domain-specific questions to domain-specific experts to reduce the suboptimal use of competences and the risk of epistemic trespassing (Ballantyne, 2019), while a separate set of criteria demands that gatekeepers spot and avoid faulty uses of evidence in experts’ argumentation. We believe that public communication should be conceived as expert communication only when we have good reasons, encoded in our criteria, to consider its source sufficiently reliable in that field. Our proposal can find practical implementation in guidelines for editors and journalists, in training programs for specialized mediators between experts and the public, and in training for experts themselves to improve their epistemic contribution to public debates (Martini et al., 2022).
Room: SCIH 0.12
Chair: Torbjørn Gundersen | University of Oslo
9.00 | Relying on the Majority
Paride Del Grosso | Antwerp University
The purpose of this paper is to provide a possible answer to the question “When experts disagree with each other, how should non-experts decide who to believe?”. I will consider a simplified model in which a layperson C is in a situation of epistemic impasse, i.e. C has to decide whether to believe statement p (e.g. “cure x is effective at fighting cancer”), which is supported by expert A, or non-p (e.g. “cure x is not effective at fighting cancer”), which is supported by expert B. Moreover, both A and B claim that there is enough evidence to support their opposite positions. I will argue that, if C is an active member of a well-established democracy (prerequisite I) and if C can be influenced by both epistemic and extra-epistemic factors (prerequisite II), then it is reasonable for C to take into consideration the choice of her peers in order to decide which expert to believe. In particular, I will argue that the most rational choice for C is to rely on the decision made by the majority of her peers. In fact, in a well-established democracy, the majority: (I) embodies collective wisdom; (II) guarantees the intrinsic quality of the decision; (III) represents a set of shared values that offer the right path to follow when C is in a situation of epistemic impasse. I will challenge my argument by presenting an objection. This objection rejects the idea that relying on the majority is a solution to the epistemic impasse and states that relying on the majority is dangerous for democracy itself. In conclusion, I will reply to this objection and show that it does not undermine my argument.
Keywords: Expert disagreement, Epistemic impasse, Majority, Democracy, Values
9.30 | In Their Best Interest – Expert-Involvement, Ignorance and Division of Epistemic Labor
Sonja Riegler | University of Vienna
Epistemic paternalism (EP) concerns practices through which agents interfere with other agents’ inquiry for their epistemic good (Ahlstrom-Vij, 2013a). In this paper, I defend expert-involvement in political decision-making processes as a permissible form of EP. And yet, I also address pitfalls arising in the context of this form of EP and propose three solutions. My argument consists of two parts. First, I identify two social processes leading up to EP, namely (1) the division of epistemic labor and the accompanying (2) attributions of knowledge and ignorance. Doing so shows why epistemic interference should be understood as a natural feature of our epistemic lives. People are in different places at different times and consequently have access to different forms of information. Their ignorance is not simply a lack of knowledge or competence due to failings in inquiry. Rather, people often lack the time or (epistemic) resources to pursue certain forms of inquiry in the first place. Many laypeople thus rely on the relevant experts to interfere in their inquiry. And yet, the division of epistemic labor often mirrors unequal power structures. In the second part of my paper, I therefore clarify how to prevent pitfalls arising in the context of EP through expert-involvement. I propose three broad solutions: (1) We need to make sure that a division of labor and the related EP is politically and democratically just: everybody should have access to the relevant epistemic resources. Moreover, (2) we need to communicate that people have expertise due to systematic training (e.g., learning to discern real from fake experts). Finally, (3) we need to recognize that people have expertise through experience and ensure that EP is also based on interference from those who are normally not regarded as experts. My paper therefore closes with a section on how to integrate revised accounts of expertise, such as what I label “standpoint expertise”, into forms of EP. I believe that doing so a) ameliorates epistemically paternalistic practices and b) enhances public trust in expert-involvement in political decision-making processes.
10.00 | On the Conditions of AI-Expertise
W. Jared Parmer, Hendrik Kempt & Nils Freyer | RWTH Aachen University
Explainability of the decision-making of an autonomous algorithm (AI) has gained the attention of philosophers in a number of applied ethical fields, as well as in decision theory and law. One rarely discussed but more implicit assumption is the positive role of explainability for the trustworthiness of such algorithms. Our thesis is that, while explainability can be helpful in increasing the trustworthiness of an AI, it is by no means necessary. We can appropriately trust unexplainable AI. To argue for this thesis, we first analyze why we are justified in trusting human experts’ judgments above our own. One of the main features is the recognition of experts as experts in some relevant domain. This recognition is justified in virtue of either their having an excellent track record, i.e., by their functional reliability, or in virtue of their being recognized by a network of experts and their organizations, i.e., by their institutional recognition. Second, we contrast this with some of the common arguments for explainability as necessary for trustworthy AI. Third, having carved out the difference, we ask why AI cannot be granted the same trustworthiness we grant human experts based on their functional reliability and institutional recognition. We argue that this additional requirement for AI, explainability, is an unjustified double standard. We do not require human experts to explain their judgments to us, nor do we expect them to be able to – after all, understanding their judgment and its basis often requires the very expertise we lack. Rather, they are trustworthy in virtue of their successful track-record and recognition by other experts as such. In this sense, those of whom we do not require explanations are the ones we trust. It appears that AI, if both functionally reliable and institutionally recognized, need not be explainable in the demanding sense currently discussed to be appropriately trusted – nor, indeed, to be recognized as an expert.
10.30 | Trust, Trustworthiness, and the Directed Responsibilities of Expert Testifiers
Benjamin Clarke | University of Edinburgh
Most philosophers of trust now hold that whenever we trust a person to do something, we take the participant stance towards that person: we treat her as someone who is responsible for her actions, and we are prepared to hold her accountable through various accountability-seeking responses if she fails to do as trusted. Typical examples of such trust-related, accountability-seeking responses include: demands for personal apology (i.e., demands that the trusted person apologise not just to anyone for failing to do as trusted, but specifically to the trustor herself), and resentment (i.e., the reactive attitude expressing the trustor’s belief that the trusted person has specifically wronged her, the trustor, by failing to do as trusted, and not just anyone). Since personal accountability-seeking responses of this kind always seem appropriate in cases of misplaced trust, I argue that trust must involve the belief that the trusted party has a directed responsibility to the trustor to do something, and thus that the trustor is entitled to hold the trusted party personally accountable for failing to fulfil that responsibility. Accordingly, I argue that a person is trustworthy in a given situation only if she fulfils her directed responsibilities to others in that situation. After motivating this directed-responsibility theory of trustworthiness, I explore the directed responsibilities of expert testifiers: speakers who attempt to tell audiences things that require specialist training and knowledge in order to determine their truth. More precisely, I explore the responsibilities that expert testifiers have towards their non-expert audiences. I argue that these responsibilities include: (i) Sincerity: aiming to tell the truth, (ii) Competence: exercising their expertise appropriately when determining the truth of their telling, and (iii) Clarity: telling things in a way that a layperson will understand. I conclude that a trustworthy expert testifier is one who fulfils these directed responsibilities.
Room: SCIH 1.51
Chair: Christopher Cowley | University College Dublin
9.00 | Conspiracy Theories, Distrust, and One Possibility Too Many
Silvia Caprioglio Panizza | University of Pardubice
When trust is present, something else is absent: recurrent doubt or continuous checking for signs that the other will do as expected are signs of a lack of trust. Focusing on this aspect of trust, C. Thi Nguyen has proposed to understand it as an ‘unquestioning attitude’. In this paper, I draw on Nguyen’s contribution to extend the understanding of trust in negative terms – in terms of what we do not do – and apply this account of trust to conspiracy theorists, who on this view express distrust in expert-endorsed facts and positions by entertaining ‘one possibility too many’. In the first part of this paper I consider the meaning and role of ‘possibility’ that is relevant for trust, its first-personal nature, and the moral psychology of possibility in the moral domain. I then go on to apply the concept developed to different instances of trust, private and public, and observe how in all cases trust excludes, to varying degrees, certain possibilities from playing a role in the mind of the trustor. In the final section I consider the moral psychology of conspiracy theorists by looking at how the erosion of public trust operates through the introduction of heretofore unconsidered possibilities and the proliferation of ungrounded possibilities.
9.30 | Distrusting Consensus
Jaron Harambam | University of Amsterdam
The corona pandemic brought about great challenges for institutions and citizens alike. These go beyond public health issues alone and relate to the information flows about what is going on as well. Experts in media, politics and science rapidly produced and disseminated reliable knowledge about the virus and how to curb its spread. While the conventional “institutional model” of science communication did its job, and those public institutions garnered higher levels of trust following “rally-around-the-flag” dynamics, other people grew distrustful of these institutions and the emerging orthodox corona narrative. Such distrusts are often framed as conspiracy theories, and today’s globalized media eco-systems enable their proliferation. This so-called “infodemic” became a prime object of concern. In this article I agnostically study these distrusts from a cultural sociological perspective to better understand how and why people (came to) disbelieve official knowledge and its producers. To do so, I draw on my ongoing ethnographic fieldwork in the off- and online worlds of people labeled as conspiracy theorists in the Netherlands, including the media they consume. Based on an inductive analysis of people’s own explanations and sense-making, I present three dominant reasons why they started to distrust the emerging corona narrative: the media’s panicky narrative of fear and mayhem; governments’ sole focus on lockdowns and vaccines; and the exclusion of heterodox scientific perspectives from the public sphere. Each of these reasons problematizes a perceived orthodoxy in media, politics and science, and this uniformity bred suspicion about possible conspiracies between these public institutions. These findings have important policy implications for the way public institutions communicate about complex and controversial societal problems: keeping things simple and clear is important, but downplaying uncertainty, critique and diversity may have undesired, unanticipated consequences in the form of societal opposition, as (too much) consensus is easily distrusted.
10.00 | The Ethics of Expert Communication
Hugh Desmond | Leibniz University Hannover
What does it mean for a scientific expert to communicate honestly? Does honest communication mean an exhaustive chronicling of the extant scientific studies, including those that conflict with each other? Or does honest communication entail focusing on the “main message”, such as the areas of consensus? In enquiring about the meaning of “honesty” in expert communication, we cannot ignore the question of what information to foreground and what to background in a communication. This is the issue of framing (see e.g., Bolsen & Druckman, 2018; Chinn & Hart, 2021). In this paper I seek to demonstrate the ethical nature of expert communication by bringing attention to the ethical dimension of framing. Framing can be done in a misleading, biased, or even manipulative way. Based on such considerations I argue that expert communication is an intrinsically ethical activity. By an “ethical activity” I mean that the act of expert communication is the outcome of a process of individual deliberation which is (and must be) guided by ethical values. And by “intrinsically ethical” I mean that the act of expert communication is ethical in virtue of the nature of expert communication, and so is always an ethical activity. In other words, expert communication is not a value-neutral activity. In the full paper I argue for these various claims. An important part of the significance of the contribution lies in how current documents on scientific integrity (e.g. ALLEA, 2017; Desmond & Dierickx, 2021) typically offer little to no acknowledgment of the difficulties of deciding on framing, and of how this complicates what “honest” communication means. Expert communication is largely viewed as a technical, value-neutral activity lacking any ethical or moral-normative valence (Gerken 2018). However, by not acknowledging the ethical nature of expert communication, such documents fail to give proper guidance to researchers on how to communicate with industry or policymakers. The paper thus ends by connecting the philosophical analysis of expert communication to practical recommendations.
10.30 | An Analysis of Bias and Distrust in Social Hinge Epistemology
Anna Pederneschi | University of California Irvine
The pervasiveness of trust in our everyday lives makes it a collective “bonding agent” that allows us to participate in epistemic practices. Distrust, on the other hand, can be quite damaging. The philosophical literature has focused on trust, treating distrust as a mere afterthought. However, I think understanding the rationality of distrust is crucial for our testimonial practices. While the discussion of this topic is still in its early stages, it poses interesting questions in the domain of social epistemology. In other words, we can ask: what happens to our epistemic practices when an attitude of distrust sets in? When is a distrusting attitude warranted and rational? Moreover, what consequences would that have for a subject who adopts such a stance? My general aim is to show that motivated distrust is irrational and based on negative identity bias. Firstly, I will adopt Annalisa Coliva’s account of social hinge epistemology and of hinge trust as the basic stance for our epistemic practices. Secondly, I will focus on how distrust based on negative identity bias can spread across other domains of interaction and jeopardize the practice itself. Thirdly, I will adopt an account of bias as a defeater to support the claim that motivated distrust is irrational. Finally, I will give a tentative solution to block the extension of motivated distrust: I will propose that instances of unmotivated distrust in testimonial practices carry the right rational standing insofar as they make the bias evident to the hearer. By sketching this possible solution, I will also highlight the main difference between motivated and unmotivated distrust in testimonial practices, such that unmotivated distrust is only allowed within a particular domain of interaction where no testimonial knowledge or justified belief is transmitted in the first place.
Room: SCIH 1.12
Chair: Maria Baghramian | University College Dublin
9.00 | One Foot in the EULAs: Consent and Trust in Tech Amongst Older Users
Sara Singleton and Phil J Ryan | TASC & SMARTlab UCD
Taking the implementation of the General Data Protection Regulation (GDPR) since 2018 as its starting point, this paper examines the development of the concept of consent, and by extension trust, in online services and linked tech devices amongst older cohorts. It proposes that consent as currently understood involves an asymmetry of power favouring technology companies: a static agreement that, once entered, is in practice adaptable only by the company. This is embodied in End User License Agreements (EULAs), which consistently create arguably coerced trust in users, who ignore the exact terms of their agreement with the company.
While a move in a welcome direction, GDPR should be understood as a first step in a larger debate around better design of policy for technologies that have outstripped common understandings of the societies they affect. Focusing on the concept of trust in the relationship between older users and technology, the difference between the understanding and the realities of EULAs is highlighted. Through analysis of the literature, typologies of trust particular to older users’ experiences of technology are proposed. Of particular interest is the interaction of these trusts with GDPR and user consent. Using Inclusive Design methodologies, an argument is made that a life cycle of the capability to consent exists, and that a dynamic consent built on better understandings of the nature of trust in technology could lead to improved user experiences for all.
9.30 | Echo Chambers, Empirical Underdetermination, and Epistemic Panic
Deven Burks | University of Luxembourg
Researchers, politicians, and citizens alike share the sentiment that echo chambers are a potent driver of negative societal outcomes like polarization and distrust. Echo chambers differ from related psychological phenomena and social-epistemic structures insofar as they distort interpersonal trust. On one prominent rendering, the echo chambers’ core property is precisely the manipulation of in-group and out-group epistemic credentials and trustworthiness via a “disagreement-reinforcement mechanism” (Nguyen 2020: 147). However persuasive and worrying this rendering may be, it is an open question whether there exists an empirical proxy for this mechanism. I argue that, until a proxy has been identified, the political psychology underlying echo chambers remains empirically underdetermined, meaning that researchers, politicians, and citizens alike ought temporarily to suspend judgment on the causal connection between echo chambers and societal outcomes in order to avert a deepening epistemic panic with its own consequences for trust. My argument proceeds in four steps. First, I break down the argumentative logic connecting echo chambers and societal outcomes into its component empirical and normative claims. Notably, the adequacy of the normative claim does not permit reliable inference to the adequacy of the empirical claim or vice versa. Second, I restate the outlines of Nguyen’s (2020) account of echo chambers and the disagreement-reinforcement mechanism. While this account may establish the normative claim necessary for postulating a causal connection between echo chambers and societal outcomes, it does not establish the necessary empirical claim. Third, I consider three candidates for the disagreement-reinforcement mechanism’s empirical proxy which might suffice for the empirical claim: backfire effects, false flags, depth of treatment. Ultimately, I suggest that each proves an unsatisfactory proxy for the mechanism. Fourth, I advise that we suspend judgment on the matter lest current discourse around echo chambers stoke an epistemic panic wherein mistaken public perceptions produce unexpected ethical-epistemic harms.
10.00 | The Normative Need for Trust in AI
Sara Blanco | University of Tübingen
Trusting relationships are a necessary part of everyday life. Generally, trust is placed in people to whom we believe we can delegate something. More recently, both the general public and part of academia also talk about trust and distrust in artificial intelligence (AI). However, it is unclear whether the concept of trust can suitably be extended to describe relationships between humans and other entities. In this article, I argue that trust relationships imply moral responsibility, and it is precisely this implication that makes trust a normative goal for human-AI relationships. Nowadays, AI is used for tasks that involve high-stakes situations that deeply affect people's lives, such as medical diagnosis or sentencing decisions. In these cases, only systems that are worthy of trust should be used. Nevertheless, some authors consider that trust can be directed exclusively towards humans and conclude that AI cannot be trusted, but can only be relied on. Indeed, we rely on others when we expect them to behave as we want, regardless of what motivates them. But if reliance were all there is between humans and AI, the emphasis should rather be put on the technical capacities and the success rate of an AI. In contrast, I defend that there are other factors besides predictive success that should shape human-AI relationships, such as responsibility or transparency. Therefore, reliance fails to capture the relationship between humans and artificial systems. In this article, I analyse trust as a relational concept that implies moral responsibility. Trust refers to a relationship in which a trustor willingly makes themselves vulnerable towards a trustee and accepts the risk of being betrayed. I argue that willingly making oneself vulnerable towards someone implies placing moral responsibility on that someone. Conversely, committing not to abuse the trustor's vulnerability implies that the trustee assumes moral responsibility. This kind of responsibility attribution becomes normatively necessary when an AI is used in high-stakes situations. On this basis, I conjecture that we should aim for trust, understood in all its moral richness, when shaping human-AI relationships.
Room: SCIH 2.12
Chair: Tiffany Morisseau | LaPEA University Paris Cité
9.00 | ‘Follow the Science’: Popular Trust in Scientific Experts during the Coronavirus Pandemic
Ben Seyd | University of Kent
The coronavirus pandemic increased the public and policy roles played by scientific advisers. This has stimulated scholarly and policy attention on the reasons for public trust in scientists. While previous studies have explored a variety of factors associated with people's trust in scientists, few have explored the effects of multiple factors considered alongside each other. Moreover, scientists' prominence during the coronavirus pandemic suggests that public trust might rest not only on scientists' general features or traits, but also on the way they behave and on their role within the decision-making process. In this study, we explore the determinants of people's trust in scientists across multiple criteria, encompassing scientists' traits, behaviours and roles. Our analysis draws on data from specifically designed surveys and conjoint experiments fielded on samples of the US and British populations.
9.30 | Deliberative Engagement and Trust(Worthiness): From Vaccines to Genome Editing and Back Again
Oliver Feeney | University of Tübingen
The rise in anti-vaccine beliefs is embedded in a notable erosion of trust in established sources of science and medical expertise in recent years (ALLEA 2018). Conversely, in the context of human genome editing, there seems to be more promising potential for fostering trust and trustworthiness, with an emphasis on the involvement of wider society through 'public empowerment' approaches (WHO 2021; Baylis 2019). I explore the generation and rebuilding of trust and trustworthiness in relation to these two important contemporary societal phenomena – namely human germline (or heritable) genome editing technologies and anti-vaccine beliefs – to see whether lessons can be learned from each (Feeney 2022). In the course of this exploration, I argue that, on the face of it, discussions of genome editing can be described as dealing with the generation of trust, while discussions of vaccines and vaccine hesitancy can be described as dealing with the rebuilding of trust. There is thus a pro tanto distinction between the generation of trust in genome editing and the rebuilding of trust in countering vaccine hesitancy or anti-vaccine beliefs. This may suggest that the form of deliberative engagement suitable for one context (genome editing) might not be suitable for the other (vaccines), and vice versa. I consider how strong this distinction is, with some critical comments regarding the ideal versus the reality of public engagement in such contexts of (re)building trust. To highlight this, I argue that deliberative engagement in the context of genome editing (including the focus of WHO 2021) takes insufficient account of the increasing social and political polarisation and tribalism currently facing science (Collins 2021), something more evident in the context of vaccines. I conclude that some lessons can be learned from comparing both contexts.
10.00 | How Can Novices Trust Science When Scientific Research Is Conducted in Large Epistemically Distributed Collectives?
Elinor Clark | Leibniz University Hannover
Public trust in science is vital for a well-functioning democratic society. But how do we work out whom to trust in the modern scientific context? I argue that, rather than the one-one case of one novice assessing one scientist, as often seems to be assumed in the current literature on second-order criteria, the reality is much messier, involving large collectives of scientists working in complex and interweaving collaborations (Kukla 2012; Huebner et al. 2017; Dang 2019). I therefore hope to shift the question away from the assessment of individual scientists to: how can a non-expert assess whether to trust the statements of groups of scientists and identify trustworthy scientific collectives? To do this, I outline ways already developed for novices to determine whether to trust individual scientists, and consider whether these methods could also be applied to groups of scientists. Surveying two sets of second-order criteria proposed by Anderson (2011) and Goldman (2001), I combine insights from these lists and their ensuing criticisms to compile a set of criteria for warranted trust. I then consider more generally what it means to trust a group, arguing that there seems to be no in-principle reason why we cannot apply this concept to scientific collectives as well as to individual scientists. With this in mind, I work through ways these criteria may be applied to scientific collectives as well as individuals, drawing on work about collective knowledge to inform these proposals. However, each of these approaches faces serious challenges that limit the ability of the novice to apply them successfully. I end by considering some suggestions for how the novice may be assisted in this challenging but important task of picking out trustworthy scientific collectives, situating the novice, as well as the expert, within their wider epistemic environment.
10.30 | An Infrastructural Account of Public Trust in Science
Gabriele Contessa | Carleton University
Over the last couple of decades, academics and commentators have become increasingly concerned about the (allegedly) low level of public trust in science. In order to address this concern, we need a satisfactory account of public trust in science. Unfortunately, the dominant individualistic approach to public trust in science is woefully inadequate. In this paper, I briefly summarize the main shortcomings of the individualistic approach and sketch a version of the collectivist approach, which I call the infrastructural account. While the individualistic approach maintains that it is primarily individuals who (dis)trust science and that social groups and societies only (dis)trust science insofar as their members do, the collectivist approach, which I favour, claims that it is primarily societies that (dis)trust science. On this approach, a society trusts science to the extent to which it collectively defers to the relevant scientific communities on scientific issues. However, while, in theory, complete collective deference to science would enable a perfectly efficient division of epistemic labour, there are limits to the extent to which absolute collective trust in science is either attainable or desirable in our current circumstances. According to the infrastructural account, low trust in science is not so much a result of individual epistemic failings as of a dysfunctional socio-epistemic infrastructure (i.e., the system of social norms, practices, and institutions that promote the reliable production, transmission, reception, and uptake of information). In particular, I argue that the socio-epistemic infrastructure of liberal democracies is both poorly designed and poorly maintained, and that the resulting mistrust is often practical rather than, as is usually assumed, epistemic. In order to improve public trust in science, we therefore need to map the relevant aspects of the socio-epistemic infrastructure of society and improve them (or even redesign them where needed).
11.00 – 11.30 | Coffee Break | UCD O’Brien Centre for Science
11.30 – 13.00 | Panels E–F: Parallel Sessions Day 2
Room: Moore Auditorium
Chair: David Farrell | University College Dublin
Suzanne Hall | King’s College London
Jane Suiter | Dublin City University
Diarmuid Torney | Dublin City University
Room: SCIH 1.51
Chair: Eleonora Montuschi | University of Venice Ca’ Foscari
Roberto Gronda | University of Pisa
Magda Roszczynska | University of Warsaw
Michael O’Grady | University College Dublin
13.00 – 14.00 | Lunch | UCD O’Brien Centre for Science
14.00 – 15.30 | Panels G–H: Parallel Sessions Day 2
Room: Moore Auditorium
Susan Owens | University of Cambridge
Cathrine Holst | University of Oslo
Miranda Schreurs | University of Munich
Room: SCIH 1.51
Rowland Stout | University College Dublin
Danielle Petherbridge | University College Dublin
Ian James Kidd | University of Nottingham
Silvia Ivani | University College Dublin
15.30 – 16.00 | Coffee Break | UCD O’Brien Centre for Science
16.00 – 17.00 | Panel I
Room: Moore Auditorium
Torsten Wilholt | Leibniz University Hannover
Hanna Hilligardt | Leibniz University Hannover
17.30 | Bar Open | Common Room, UCD Club
19.30 – 21.30 | Conference Dinner | UCD Club
9.00 – 11.00 | Paper Presentations: Parallel Sessions Day 3 | UCD O’Brien Centre for Science
Room: SCIE 1.19
Chair: Melanie Altanian | University College Dublin
9.00 | Trust, Reliability, and Meliorism
Manuel Liz & Margarita Vázquez | University of La Laguna
Epistemic trust in people and institutions is difficult to understand. Beyond assumptions such as sincerity or adequate transmission of justification, we defend the importance of a factor that can be described as (M): meliorist assumptions in contexts of asymmetric personal relations, consisting of expectations of obtaining better content. Our argument is the following. 1. The failure of trust in people and institutions produces a kind of deception that cannot be explained by simple reliabilism. Perhaps a sophisticated reliabilism like Sosa's two-level virtue epistemology could explain it by appealing to the role of reflection or, in a telic version, to the notions of aptness and meta-aptness. Nevertheless, other important problems arise. 2. Sosa's approach takes us from a purely Pyrrhonian problematic to an Anscombean one. It forces us to consider 1) problems of double effects, 2) problems of perverse effects, and 3) problems generated by essential by-products of action. 3. We discuss some concrete cases: for 1), the tolerance to counterevidence that trust generates; for 2), the dispensability of trust when, in order to recognize an external authority, we need to strengthen our self-confidence; for 3), the failure of intentional strategies of quality control intended to justify our trust in institutions. 4. What allows trust to be maintained in those situations is the meliorist assumption M: somehow we continue to expect better content. 5. M offers good solutions to the above-mentioned problems. M is very important when dealing with experts, public institutions, and social media, but not when dealing with cognitive faculties or instruments. Differences in the personal relations involved in M allow us to distinguish between trust in people and trust in institutions. Furthermore, the relevant kind of deception can readily be understood as a consequence of the failure of M.
9.30 | Trust and Emotions in Polarized Communicative Contexts
Tiffany Morisseau | LaPEA University Paris Cité
People are more likely to invest cognitive efforts in assessing complex information when it is relevant to them: engaging in such an investigation is indeed only worthwhile if the stakes of knowing the truth are high enough from an individual point of view. At the same time, communication can serve other purposes than the transmission of information as such (e.g., the defence or promotion of a political stance), and in many cases the truth value of a statement is not the most important aspect for achieving these purposes. Consequently, the recognition of a practice (or of a scientific fact) in the common ground of a conversation not only depends on its moral or functional relevance, but also on what such recognition implies in terms of power relations between different social groups. In this paper, I review the literature and show how these pragmatic mechanisms can account for the reinforcement of polarization, and the key role of interpersonal trust. In particular, I examine the hypothesis that more polarized people have less confidence in others’ ability and/or willingness to truly consider the value of their position. Finally, I present the results of an experiment designed to test whether a “safe” conversational context (in which, e.g., common values and objectives are made salient) can decrease opinion polarization and make individuals more accuracy-driven. The practical consequences of these results are discussed, together with the extent to which they can translate into concrete ways to improve how people assess information on a day-to-day basis.
10.00 | The Conditions of Trust
Michael Pope | Boston College
In living socially, trust is crucial for much of what we know and do. As Annette Baier (1994) puts it, trust is like the air we breathe: something both essential and taken for granted, until it is polluted. Yet consensus about how best to understand trust remains elusive. Amidst predictive, affective, and normative approaches to trust, Onora O'Neill argues that any account of trust is inseparable from evidence of a trustee's trustworthiness – of their honesty, competence, reliability, and so on. As Baier (2013) suggests, however, one's understanding of trustworthiness is contingent on one's understanding of trust. To this point, I develop a pluralistic approach on which trust is a psychological disposition to rely on another for some end or goal within some domain(s). I argue that while trust's role or function across contexts is to dispose people to rely, the conditions of trust can differ significantly. Drawing on resources from social psychology and risk analysis, I argue that conditions of value similarity, confidence, competence, and care differentially affect trust – that is, how trust is conceived within a particular domain. This unites insights from influential affective, predictive, and normative approaches to trust, while highlighting sources of persistent disagreement in cases like climate science communication and vaccine hesitancy. Moreover, the pluralistic view I develop reveals that trust itself is value-laden inasmuch as it is sensitive to what one trusts for. In other words, the way one becomes disposed to rely on another can vary across people and contexts, depending on the conditions one takes as indicative of trustworthiness. I conclude by arguing that determining the appropriate conditions of trust is an essential part of an ethics of trust.
10.30 | Trustworthiness and Vulnerability
Elizabeth Stewart | Howard University
We trust others because we cannot meet all of our own needs. We cannot be everywhere or do everything. Our finite, embodied nature renders us inherently vulnerable. So we reach out to others to do or be what we cannot. That is, we often reach out to experts. When we reach out, however, we become vulnerable in a narrower sense – vulnerable to the actions or inactions of those to whom we reach. Vulnerability thus plays into trust relationships in two different ways: on the one hand, vulnerabilities precede trust relationships; on the other, vulnerabilities follow from specific trust relationships. Typically, when considering trustworthiness, the focus is on responding to the vulnerabilities that follow from the trust relationship rather than those that precede it. For example, in ALLEA (2018), the authors list several features needed for trustworthiness which focus on scientists' ability to keep commitments made in response to trust. This list includes the following features: agents should acknowledge their motivations and the environment in which they are operating, they should have self-knowledge of their limitations, they should avoid making commitments that they are unable to fulfill, and they should have a good sense of what is expected of them. In this paper, I add an item to this list: trustworthy agents should understand the context of vulnerability that precedes the trust placed in them. Understanding the vulnerabilities that precede trust relationships has two important consequences for experts and policy-makers. First, it helps make sense of unjustified distrust: trust is more likely to be broken when a truster's vulnerabilities render them unable to form appropriate initial expectations of experts. Second, it motivates experts and policy-makers to understand and engage with the needs that individuals and communities have but cannot meet on their own.
Room: SCIH 1.51
Chair: Torbjørn Gundersen | University of Oslo
9.00 | Gatekeeping Knowledge
Ísak Andri Ólafsson | University of Glasgow
There is a distinct difference between acquiring knowledge and distributing it. Employing different social norms for knowledge acquisition on the one hand and distribution on the other enables epistemic groups to spend the appropriate resources on getting things right in each case. Groups composed of experts may set up gatekeeping processes to help facilitate knowledge transmission within the group without diminishing the quality of the knowledge being transmitted. Unfortunately, gatekeeping can be problematic, and two challenges arise that need to be addressed. Firstly, the negative consequences of believing falsehoods are exacerbated through gatekeeping (the poisoned well problem). Secondly, the likelihood of the group missing out on useful knowledge is elevated (the knowledge deprivation problem). One way to try to resolve these problems is to assign more resources to the gatekeeping process. Although this approach is more costly, it is still able to facilitate knowledge transmissions that would otherwise be too laborious. However, fortifying the gatekeeping process elevates the level of trust placed in the process, and if the improved gatekeeping process fails then the results could be catastrophic. For example, if misinformation about climate change were erroneously approved by a respected gatekeeping process, the potential harm would be irreparable. Even so, gatekeeping can still be beneficial in cases where a level of belief uniformity within a group is so valuable that the risk of poisoned wells and knowledge deprivation becomes acceptable. These cases include the promotion of policy changes in an international setting and the conduct of large-scale scientific research. Gatekeeping processes can simplify how knowledge is curated through policies, both within and between groups. This kind of implementation can induce a risk of reduced individual autonomy, but if the group has reason to trust the gatekeepers, then it is not inherently problematic.
9.30 | Uncritical Trust and Unconcern for the Truth
Pablo Vera Vega | University of La Laguna
There are many ways to think about trust. We can think of it as directed towards an object (trust in the Government, trust in experts, etc.) or without such a direction, focusing instead on the way in which that trust is given (critical trust, interested trust, etc.). I analyze a certain kind of trust according to its modality: trust without criticism. What does it mean to trust uncritically? To isolate this concept, I use the reflections developed by Frankfurt in On Bullshit, where bullshit is defined as speech that hides the speaker's unconcern for the truth. Even though the phenomenon that Frankfurt analyzes is tied to linguistic interaction, it is not difficult to extend it: uncritical trust is clearly not a form of speech, and it does not have to be hidden either. We focus, then, on the phenomenon of unconcern for the truth. This 'unconcern for the truth' constitutes, as Cassam has observed, an epistemic vice, and it is natural to think that uncritical trust means a trust that does not evaluate the truth. However, such unconcern is not absolute. Someone can be careless about a truth because she cares about it in the wrong way; that is, someone can trust uncritically by trusting critically but badly. Furthermore, we not only trust truths but also values. Unconcern therefore means, more broadly, neglecting the truths in which we believe or may come to believe and, at the same time, failing to examine correctly the values on which we operate. Consequently, uncritical trust presents itself as the neglect of a set of rules that we must apply to ourselves: precisely those that govern the optimal consumption of information and those that optimally direct the rational action of trust.
10.00 | Regulating for Evidence-Based Expert Advice: Which Prescriptions for Non-epistemic Value Judgments?
Silje Aa. Langvatn | University of Bergen
Cathrine Holst | University of Oslo
Calls for expert advice are abundant in present-day governance, and regulators increasingly seek to regulate such advice in accordance with standards of 'evidence-based policy-making'. Generally, evidence-based policy advice is to be based on 'evidence', and the definition of what constitutes proper evidence inevitably involves judgments of epistemic value, as some types of knowledge and methods will be preferred above others. Yet the experts called upon in such policy advice are typically also involved in non-epistemic value considerations, for instance regarding the ethical and political assumptions of the research under review, and the ethical and political priorities of their principals or of the citizenry. Often, they will also be asked to prioritize and recommend policies when evidence is indeterminate or uncertain, or equally compatible with policies with different ethical and political underpinnings. How do regulations prepare for non-epistemic value judgments and policy choice among expert advisors, and which approach to such judgments is outlined? The paper will, first, map what regulatory documents and guidelines prescribe regarding ethical and political judgments in expert advice, zooming in on Norway, a country where the government increasingly asks experts to provide policy recommendations. Second, the paper will assess the adequacy of these prescriptions, based on criteria derived from normative philosophical literatures on the proper political role of experts. It is argued that the prescriptions provided are in part rudimentary and in part flawed, and it is discussed how this can be so, even in a sophisticated regulatory regime where expertise is held in high regard and expert advice plays a significant role. The paper connects normative philosophical argument with discussions of real-world policy-making and regulation, and focuses on an understudied dimension of expert advice.
10.30 | Evidentialism and Nonevidentialism for a Rational Trust
Matt Jope | University of Edinburgh
On what grounds is it rational to trust someone? A plausible starting point is that rational trust is grounded in evidence: evidence of reliability, competence, trustworthiness. However, many philosophers of trust hold that trust is incompatible with rational reflection on one's evidence; to trust is to make oneself vulnerable to the risk of betrayal, and any attempt to minimise such risk undermines the possibility of a genuine relationship of trust. How can we make sense of these seemingly contradictory thoughts? In this paper I explore different possible answers, taking Thomas Simpson's evidentialist account of trust as a starting point. I attempt to tease out one important insight of the evidentialist framework in terms of a connection between trust and rational action: to trust someone to do something involves acting for certain reasons, and such action is rational only if those reasons are supported by evidence. However, one drawback of evidentialism is that it fails to explain what is distinctive about trust that sets it apart from other cognitive attitudes such as belief. Non-evidentialism, on the other hand, is better suited to explain what is distinctive of trust, but it appears unable to account for the intuitive connection between trust and action. I argue that progress can be made by distinguishing between the act of trusting and the beliefs that are the output of trust. While the former need not be supported by evidence, the latter must be. I end by considering how this account of trust handles the case of trust in experts, where evidence seems crucially important. I argue that where the stakes are high, trust in experts may not be appropriate, and mere reliance grounded in strong evidence may have to suffice.
Room: SCIH 0.12
Chair: Maria Baghramian | University College Dublin
9.00 | How to Feel About Climate Change?
Julia Mosquera & Kirsti Jylhä | Institute for Future Studies, Stockholm | PERITIA Prize Winners 2022
Climate change evokes different emotions in people. Recently, climate emotions have become a matter of normative scrutiny in the public debate. This phenomenon, which we refer to as the normativization of climate emotions, manifests at two levels. At the individual level, people are faced with affective dilemmas, situations where they are genuinely uncertain about what is the right way to feel in the face of climate change. At the collective level, the public debate reflects disagreement about which emotions are appropriate to feel in the climate context. The aim of this paper is to examine the normative reasons in favour of different climate emotions by combining normative criteria from philosophy and psychology, such as rationality-based and consequentialist ones. We conclude that these criteria provide partial reasons for or against different climate emotions and that the suitability of each criterion will depend on various considerations, including the specific object that the emotion is directed to. We suggest that emotional disagreement in climate contexts may generate distrust, potentially hindering cooperation for climate action. We propose that we can ease challenges like this if we come to terms with the complex nature of climate emotions and their normative justification.
9.30 | Expertise and Trespassing on the Ethical Domain
Federico Bina | University of Genova
Davide Battisti | University of Bergamo
Domain-specific experts frequently provide ethical recommendations when brokering their knowledge qua experts to policymakers and laypeople. Categorical recommendations have the form "in light of x, you/we/they should do A", while conditional recommendations have the form "if you/we/they want B, then in light of x you/we/they should do A". In this paper, we argue that experts should avoid unjustified categorical recommendations as well as partial conditional ones (i.e., presenting only some of the available options and ignoring conflicts between considered preferences). Specifically, we argue that this phenomenon is a peculiar and still underexplored form of epistemic trespassing. Although sound ethical judgments require non-moral and domain-specific knowledge and skills, which experts in the domain at stake usually possess, they also require distinctive analytic competencies (Schaefer & Savulescu 2019) which domain-specific experts often lack, and are unaware of lacking (Kruger & Dunning 1999). We claim that this phenomenon is hazardous for the following reasons. First, novices' ignorance and vulnerability can foster misplaced trust on relevant ethical issues, which require distinctive tools to be addressed (other than mere scientific knowledge or deference to those who possess it). Second, laypeople might mistake ethical disagreement among experts for disagreement about data, thus undermining trust in science and experts. Third, trust in experts can also be jeopardized since epistemic trespassing and categorical or partial conditional recommendations are usually characterized by overconfidence (Ballantyne 2019), which can notoriously backfire. Therefore, qua advisors to policymakers and laypeople, experts should either improve their ethical reasoning skills, so as to distinguish their moral from their non-moral claims and provide independent justification for the former, or modestly avoid trespassing on the ethical domain. Finally, our analysis identifies a precise social role for ethical experts and highlights their specific function: that of identifying, analyzing and addressing, with an impartialist conditional approach (Yoder 1998), the main ethical issues at stake in public discourse and decision-making.
10.00 | When Moral Philosophers Are Not Policy Experts
Kian Mintz-Woo | University College Cork
Since Singer (1972) asked whether moral philosophers should be entrusted with normative questions, such as those arising in policy debates, a large literature has sprung up discussing various positions (e.g. Archard, 2011; Birnbacher, 2012; Cowley, 2005, 2011; Gesang, 2010; Singer, 1988). Suppose we take moral expertise in applied contexts to have as its components moral knowledge, non-moral knowledge, and reasoning ability. Contra Singer and some of his followers, my contention is that there are certain normative questions in policy where moral philosophers (those whose competence is primarily knowledge about moral theory) will not be best placed to play the role of moral experts, because the moral theory to be applied is presupposed by the question at hand. For these questions, the expert in morality is not the policy expert, even though these are normative policy questions. For instance, when considering how risk- or inequality-averse society should be, the social welfare functions applied arise only within explicitly consequentialist frameworks. The relative importance of knowing about alternative (i.e. non-consequentialist) moral theories is lessened compared to the importance of knowing the technical facts about these frameworks. Such facts include the convergence of risk aversion with intertemporal inequality aversion for rational agents under standard utilitarian theories; this convergence constrains which judgments can consistently be adopted together. Similarly, when weighing preferences between species, or choosing discount rates across time periods, familiarity with non-consequentialist theory is relatively unimportant. In such domains, I claim that a domain expert is better placed to make these moral judgments than a moral expert, since the non-moral components of expertise dominate. Despite the avowedly normative nature of these policy decisions, moral theory expertise is less valuable than domain expertise. I draw out some conclusions from these claims, pointing out that, in some policy contexts, this could counterintuitively justify deferring to technocratic domain experts.
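The kind of technical fact at issue here can be made concrete with a standard example (an illustrative gloss assuming the isoelastic utility commonly used in climate economics; the abstract itself does not fix a functional form). In a discounted-utilitarian social welfare function,

\[ W = \sum_{t=0}^{T} (1+\delta)^{-t}\, \frac{c_t^{\,1-\eta}}{1-\eta}, \]

the single parameter \( \eta \) fixes both the aversion to consumption inequality across time and, for an expected-utility maximizer, the coefficient of relative risk aversion. Judgments about risk and about intertemporal inequality therefore cannot be set independently within this framework, which is the kind of consistency constraint the abstract describes.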
10.30 | (Counter)Expertise and Crisis Communication: Doing Trust, (Undone) Science, and Disaster
Cécile Stephanie Stehrenberger | IZWT, University of Wuppertal
This presentation examines three different dimensions of the relationship between trust, science, and disaster. It explores who talked about trusting what kind of science in the context of various 20th- and 21st-century disasters, and who produced scientific knowledge as a consequence of mistrust in certain institutions. Its second part deals with the history of science of disaster-related trust research, while part three investigates forms of doing trust at the level of the scientific practices of (Cold War) disaster research. Moreover, the talk reflects on the history of science studies' interest in the topic of 'trust in science'. Focusing on the ideals and practices of disaster-related 'crisis communication' and elaborating on its genealogy, I seek to demonstrate how trust and expertise have been produced in relation to each other and have thereby been shaped by social and global relations of power. I will show how these relations also played a pivotal role in creating epistemic distrust and preventing some (racialized) bodies from becoming trustworthy figures of (toxic) truth on environmental disasters. At the same time, I will demonstrate how, in the analyzed cases, tactics of doing trust have been inextricably linked to articulations of doubt and suspicion and to performances of distrust.
Room: SCIH 1.12
Chair: Silvia Ivani | University College Dublin
9.00 | Trust and Expert Disagreement
Joanna Aleksandra Kreft | University of Tromsø – The Arctic University of Norway
Experts commonly disagree with one another, and their judgments often turn out to be wrong. This lack of consensus and fallibility can create impatience and distrust on the part of the public, leading to wholesale rejection of expert opinion. Michael Gove, former British Lord Chancellor, sought to exploit this distrust during the Brexit campaign: "the people of this country have had enough of experts with organisations with acronyms saying that they know what is best and getting it consistently wrong." I shall argue that this distrust is misguided. Indeed, under ordinary circumstances, we are as a society best served by social epistemological mechanisms that practically guarantee high levels of expert disagreement and error. Using computer simulation, I show that epistemic distrust and the consequent lack of consensus create and maintain cognitive diversity among different teams of experts considering the same matter. This diversity does not reduce the number of incorrect expert judgments, but it does make it much more likely that some team will find the scientific facts or best policies that all of them are searching for, and that society will benefit as a result. Intriguingly, the relevant mechanisms include behaviors that at the individual level have been thought epistemically vicious, such as distrusting testimony from epistemic peers for no other reason than that they happen to be from a different social group. At the collective level, it emerges, such behaviour can be epistemically virtuous.
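The dynamic described here lends itself to a toy demonstration. The sketch below is not Kreft's simulation; it is a minimal illustration in the spirit of Zollman-style bandit models, in which teams of experts repeatedly test one of two rival policies, either pooling everyone's evidence ('full trust') or relying only on their own ('distrust'). The team count, success probabilities and greedy learning rule are all illustrative assumptions.

import random

def run_trial(n_teams=10, rounds=50, pulls=5, p_good=0.55, p_bad=0.50, share=True):
    # stats[i][arm] = [successes, trials] for team i and policy `arm`.
    stats = [[[0, 0], [0, 0]] for _ in range(n_teams)]

    def preferred(team):
        # Greedy choice: estimated success rate, with 0.5 as the prior for untested policies.
        rates = [s / t if t else 0.5 for s, t in team]
        return 0 if rates[0] >= rates[1] else 1

    for _ in range(rounds):
        results = []
        for team in stats:
            arm = preferred(team)                      # each team tests its current favourite
            p = p_good if arm == 0 else p_bad
            wins = sum(random.random() < p for _ in range(pulls))
            results.append((arm, wins, pulls))
        for i, team in enumerate(stats):
            # Full trust: every team updates on everyone's data; distrust: own data only.
            for arm, wins, trials in (results if share else [results[i]]):
                team[arm][0] += wins
                team[arm][1] += trials

    # Policy 0 is objectively better; did at least one team end up endorsing it?
    return any(preferred(team) == 0 for team in stats)

def success_rate(share, trials=300):
    random.seed(0 if share else 1)  # reproducible illustration
    return sum(run_trial(share=share) for _ in range(trials)) / trials

if __name__ == "__main__":
    print("P(at least one team correct), full trust:", success_rate(share=True))
    print("P(at least one team correct), distrust: ", success_rate(share=False))

Because pooled teams always hold the same data and hence make the same choice, an unlucky early sample can lock all of them onto the worse policy at once; independent teams fail independently, so the chance that at least one ends up endorsing the better policy is higher, at the cost of more teams being wrong individually. This matches the qualitative claim in the abstract, though not its specific model.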
9.30 | Fake Experts: A Study of Why They Were (Mis)Trusted
Ammo Joshy | High Performance Computing Center Stuttgart
Information, old or new, scientific or non-scientific, is considered credible when an expert is cited as its source. However, with social media and other networking platforms that enable sharing information, making a distinction between true experts and fake experts has become difficult. In the philosophy of trust and testimony, discussions about epistemic dependence on experts are prevalent, along with the questions of whom to trust and why to trust an expert. The philosopher Alvin Goldman proposes that laypeople seek out academic credentials and past track records, and consider the number of independent experts supporting the expert being evaluated, as strategies for identifying trustworthy expertise. According to him, acquiring access to these epistemic indicators may be a more tedious task than discerning the expertise of experts. On the other hand, social indicators that inform reputation, as claimed by T.Y. Branch, Gloria Origgi and Tiffany Morisseau, can serve as better tools to evaluate expertise. This paper discusses an empirical study, conducted as part of a co-creation workshop between science and art, that investigated the reasons to trust experts. During the experiment, creatives were simultaneously exposed to a group of fake experts, or dishonest signallers, as well as genuine experts. Their reactions and responses to the revelation of the true identity of the presumed experts, recorded through qualitative interviews, will be analysed for reputational cues such as status, epistemic authority, influence and values.
10.00 | Trusting Experts in Spite of Conflicting Testimony: An Epistemic Perspective on Expert Disagreement
Evelina Edfors | Uppsala University
When disagreeing experts seem equally knowledgeable, it is difficult to identify whom, if anyone, to trust. Trying to identify trustworthy experts in situations of peer disagreement is undeniably difficult, but we need not resort to distrust. I offer a 'know-how' analysis of experts to help us identify trustworthy experts in spite of conflicting testimony. I argue that the most reliable and accessible indicators of trustworthy experts can be found in evidence of their know-how. Several analyses of indicators of trustworthy experts have focused on evidence of an expert's know-that or epistemic virtues, but this is sometimes misdirected. On a useful theory of epistemic trust in experts, the justification for trust stems from evidence of expert know-how. A know-how analysis therefore provides a way forward even when other evidence of trustworthiness is inconclusive or inaccessible. This analysis turns to sources of evidence of trustworthiness which are both epistemically justified and practically feasible for laypeople to find. The analysis is targeted at scientific expert disagreement on inter- and multidisciplinary questions, and I discuss this with an applied focus on disagreement concerning policies on climate change and medical treatments. I argue that in these situations we can identify trustworthy experts by analysing how inter- and multidisciplinary research is best conducted. This is because research that correctly combines width and depth of expertise provides an epistemically well-justified way of getting reliable scientific results. While turning to the benefits of inter- and multidisciplinary research, we still need to navigate potential trade-offs between deep and wide expertise. This challenge also benefits from applying the lens of the proposed know-how analysis. By exploring potential results of this proposed analysis, we can responsibly navigate through conflicting expert testimony without having to resort to distrust.
10.30 | On What It Takes to Be a Pseudo-Expert
Michel Croce | University of Genoa
Recent work on the role of expertise in public debate has lamented that we lack an account of how people mimic expertise to achieve legitimacy in the public sphere and be granted epistemic authority (Sorial 2017). Fuhrer and colleagues (2021) attempted to fill this gap by providing a conceptual analysis of the pseudo-expert, namely one that distinguishes this term from other kinds of non-experts, such as trespassers, pundits, con-artists, bad researchers, crackpots, and pseudo-scientists. On their account (2021: 5), someone is a pseudo-expert insofar as they (1) seek the social status typically granted to experts in a domain D; (2) mimic what experts do when they fulfil the novice-oriented function (Croce 2019), namely helping novices achieve epistemic goods in D; (3) lack the required competence to fulfil the novice-oriented function, or are unwilling to do so; and (4) operate in a domain in which there are people with real expertise. This paper has two main goals. The first is to show that, despite its overall contribution to the discussion about (the lack of) expertise, Fuhrer et al.'s account fails to provide a convincing definition of a pseudo-expert. Against (1), we show that for one to be a pseudo-expert it is not necessary that one intentionally aims at achieving the social status of an expert. Against (2), we argue that one can also be a pseudo-expert by attempting to fulfil a research-oriented function, that is, by trying to contribute to the progress of a given domain. Finally, against (3), we argue that there are problems in counting as pseudo-experts people who have acquired true expertise in a domain but start spreading misinformation. The second goal, which we pursue in the constructive part of the paper, is to provide an alternative account of the pseudo-expert that avoids the problems with Fuhrer et al.'s view without losing the benefits of their contribution.
Room: SCIH 2.12
Chair: Arshak Balayan | American University of Armenia
9.00 | Scientific Disagreements in the Spotlight: How Public Controversies Affect Our Trust in Scientists
Mariangela Zoe Cocchiaro | Jagiellonian University
As laypeople, we generally lack access to the relevant scientific evidence and/or the skills required to analyse it, and we are often forced to rely on scientists' public testimony – testimony either given by the scientists themselves or reported by someone who does not necessarily have scientific expertise in the relevant domain, such as journalists. Now, one salient fact about scientific experts is that they often find themselves in disagreement with their peers on theories and hypotheses across all fields. In turn, the ubiquity of disagreement among scientists has led to the ubiquity of disagreement in science communication: not only do scientists disagree among themselves, they are also reported as doing so. In public discourse, this fact is often taken as a reason for laypeople not to trust experts, and empirical studies have provided evidence in support of this claim. In general, laypeople trust scientists less in conditions of disagreement than in conditions of agreement. In particular, public disagreements among scientists about non-politicised issues lead to less attention to and acceptance of particular science issues, as well as broader mistrust of scientists and scientific methods. The question that I ask in this talk is whether this should be the case, i.e. whether it is epistemically rational for laypeople to lower their (extant) trust in scientists upon learning of their disagreement. For these purposes, I will, firstly, outline an operational account of (public) trust in science. Secondly, I will present as a case study a recent public disagreement concerning Covid-19. Thirdly, I will investigate whether the latter undermines the former and, if so, how.
9.30 | Gaming the Numbers: Manipulating Trust with Formal Methods
Niël Conradie | RWTH Aachen University
Mathematical models play an indispensable role in understanding phenomena as diverse as pendulums, prisoner's dilemmas, predator-prey dynamics, and pandemics. But they can also be tools for advancing an agenda by promulgating misinformation. Indeed, because formal methods lend claims an air of 'objectivity', they are a powerful tool for manipulation. There are plausibly many factors that explain their manipulative potential; our concern is whether there is any distinctly normative basis for this potential. As we will argue, the fact that a study utilizes formal methods is a normative reason to believe the results of that study. And so, in a narrow and limited sense, it can be rational to believe even when the numbers have been gamed. The picture we defend is as follows. The audience for a formal study receives two signals from the formalism of the study. The Training Signal indicates that the agents of the study have the relevant expertise because they underwent training, and therefore possess the requisite tacit knowledge to use the formal methods responsibly. The Scrutiny Signal indicates that there are relevant 'peer experts' with overlapping tacit knowledge and facility in those formal methods, who are able to exercise quality control. In brief, the formalism signals expertise in two ways. This gives the audience a reason to trust that the study is the product of experts and of their expertise, and, as a result, a reason to believe the results of the study. The double-signaling power of formalism can withstand exploitation by bad actors here and there, and so gives (misleading) reasons even when the numbers have been gamed. However, this signaling power can be eroded by a general decoupling of formalism from domain-specific expertise. Time permitting, we discuss some perhaps surprising forms this erosion can take: universal higher education and artificial intelligence.
10.00 | Public Trust in Climate Science: A Systematic Meta-narrative Review
Antoinette Fage-Butler | (Mis)trust of Scientific Expertise Project, Aarhus University
Loni Ledderer | (Mis)trust of Scientific Expertise Project, Aarhus University
Public trust in climate science is deemed necessary for ensuring that climate-friendly behaviours and policies are adopted to avert the devastation predicted to result from further global warming. As different conceptualisations of public trust may be associated with different disciplinary fields, our aim was to investigate how various disciplines have conceptualised public trust in climate science. We used Greenhalgh and colleagues' systematic meta-narrative review methodology to investigate narratives of trust in climate science. A literature search in the databases Scopus, Web of Science, EBSCO, ProQuest and Ovid MEDLINE identified 404 peer-reviewed papers on trust relating to climate science, 58 of which were included in the final sample. We identified 10 different academic disciplines in the sample. Most papers used quantitative methods, were conducted in the United States and did not explicitly define trust. However, based on the papers' context and operationalisation of trust, we identified six main narratives of trust: attitudinal trust, cognitive trust, affective trust, contingencies of trust, contextual trust, and communicated trust. Besides narratives, we also explored types of trust. For the 'type of trust' category, we found 'trust in actor' (relating to whom one trusts), 'trust in information' (relating to the trustworthiness of information) and 'trust in action' (referring to trust that a party will deliver climate-friendly actions). The lack of conceptual common ground that we identified in the articles might lead to disagreement on what trust is and how trust works. This could be one (but only one) reason why mitigating concerns about trust in climate science, and science in general, has proven difficult.
10.30 | Expert Consensus as Probability Intervals: Transparency, Robustness and Reassurance
Lee Elkin | Erasmus University Rotterdam
It is often difficult for policymakers to navigate scientific uncertainty, especially from a practical standpoint upon finding themselves in a "damned if you do, damned if you don't" type of situation. However, to prevent future harm arising from some activity, policymakers might intend to err on the side of caution and tend toward some version of the Precautionary Principle in decision-making. But disagreement among experts poses a challenge for policymakers in following through with such an intention, especially if there is disagreement over the effectiveness of cautious policies, e.g., imposing lockdowns or not during a global pandemic. To resolve any conflict, expert judgments might be combined to form consensus opinions under which policy decisions may be made. But what is the shape of these consensus opinions? This paper takes for granted that the relevant expert judgments are probabilistic in nature, given the scientific uncertainty surrounding the matter in question. It then proposes that, in promoting cautious reasoning on behalf of policymakers, the consensus opinions take the form of probability intervals based on the range of judgments given by experts, e.g., C(lockdowns are effective) = [¼, ¾]. I argue that consensus opinions as probability intervals yield three advantages: transparency, robustness, and reassurance. Concerning transparency, probability intervals explicitly showcase the opinions of all experts, unlike alternative aggregated judgments. Concerning robustness, probability intervals can yield more reliable confirmatory assessments under Robust Bayesian Analysis. And concerning reassurance, probability intervals provide some doubt removal for policymakers regarding the expectation of a policy decision in case a generalized version of the Ex Ante Pareto Principle is satisfied. That is, if all experts expect policy A to be strictly better than policy B, the consensus opinion preserves the ordering under probability intervals.
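To make the proposal concrete (a worked gloss on the abstract's own example; the three expert probabilities below are assumed for illustration): suppose experts assign probabilities 1/4, 1/2 and 3/4 to the proposition that lockdowns are effective. The interval consensus is the range spanned by the expert judgments,

\[ C(\text{lockdowns are effective}) = \left[\min_i p_i,\ \max_i p_i\right] = \left[\tfrac{1}{4},\ \tfrac{3}{4}\right], \]

and one way to spell out the reassurance claim uses the linearity of expectation: if every expert strictly prefers policy A to policy B in expectation, then

\[ \mathbb{E}_{p_i}[u(A)] > \mathbb{E}_{p_i}[u(B)] \ \text{for all } i \ \Longrightarrow\ \mathbb{E}_{p}[u(A)] > \mathbb{E}_{p}[u(B)] \ \text{for every } p \in \mathrm{conv}\{p_1,\dots,p_n\}, \]

since a convex combination of strict inequalities is itself strict. On this reading, no opinion consistent with the expert range can reverse a unanimous ex ante ranking.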
11.00 – 11.30 | Coffee Break | UCD O’Brien Centre for Science
11.30 – 13.00 | Panels J–K: Parallel Sessions Day 3
Room: Moore Auditorium
Don Ross | University College Cork
Carlo Martini | Vita-Salute San Raffaele University
Anna Alexandrova | University of Cambridge
Room: SCIH 1.51
Viktor Valgarðsson | University of Southampton
Hannah Bunting | University of Exeter
Daniel Devine | Oxford University, Nuffield College
13.00 – 14.00 | Lunch | UCD O’Brien Centre for Science
14.00 – 15.30 | Panels L–M: Parallel Sessions Day 3
Room: Moore Auditorium
Chair: Carlo Martini | Vita-Salute San Raffaele University
Whom to Trust, What to Believe?
Knowledge, Trust & Trustworthiness in Pandemic Times
Judith Simon | University of Hamburg
Trust and Vulnerability to Betrayal
Adam Carter | Cogito Research Group, University of Glasgow
Climate Communication: How Researchers Navigate Between Scientific Truth and Media Publics
Donya Alinejad & Jose van Dijck | Utrecht University
Room: SCIH 1.51
Stephan Lewandowsky | University of Bristol
Linda Karlsson | University of Turku
Philipp Schmid | University of Erfurt
Dawn Holford | University of Bristol
15.30 – 16.00 | Coffee Break | UCD O’Brien Centre for Science
16.00 – 17.30 | Panel N
Room: Moore Auditorium
Christian Lahusen | University of Siegen | EnTrust (Enlightened Trust in Government)
Martino Maggetti | University of Lausanne | TiGRE (Trust in Governance and Regulation in Europe)
Maria Baghramian | University College Dublin | PERITIA (Policy, Expertise and Trust)
17.45 | Concluding Remarks | UCD O’Brien Centre for Science
Room: Moore Auditorium
Chair: Colin Scott | College Principal UCD College of Social Sciences and Law
Professor Onora O’Neill | University of Cambridge
Close of Conference
This conference has received support from: