Misinformation and trustworthiness: Frenemies in the analysis of public scientific controversies

Author

Dan Hicks

Published

September 8, 2023

Lewandowsky et al. (2022) examine public scientific controversies through two contrasting “lenses” or analytical frameworks, using Covid-19 as their primary case study. The first lens, “science denial,” is pretty explicitly scientistic: in cases such as “the link between AIDS and HIV, climate change, evolution, and other clearly established scientific facts,”

absent new evidence, dissent from the scientifically accepted position cannot be supported by legitimate evidence and theorizing but must necessarily—that is, in virtually all instances—involve misleading or flawed argumentation. (Lewandowsky et al. 2022, 31)

There are some qualifiers in this particular quotation — “absent new evidence” and “virtually all instances” — but in other places these are dropped: “The rules of scientific evidence formation and argumentation are inescapable and cannot be discarded or side-stepped for political expediency” (Lewandowsky et al. 2022, 32). This lens explains public scientific controversies by appealing to a combination of irrationality and disinformation/propaganda.

Lewandowsky et al. (2022) recognize the technocratic implications of the “science denial” lens:

this insistence on quality of argumentation may seemingly curtail the public’s involvement in any scientifically informed debate. After all, members of the public are often nonexperts on topics and issues whose outcomes affect their lives. (Lewandowsky et al. 2022, 32)

Their first “solution” to this problem is to communicate “scientific issues” using “stories or pictures” (Lewandowsky et al. 2022, 32). The second “solution” is a segue into the second lens.


The “trust” lens explains controversies by appealing to the ways that “people differ in how much trust they put on various information sources (e.g., scientists vs. their neighbor on social media)” (Lewandowsky et al. 2022, 33). Further, the lens recognizes that differences in trust can be due to differences in trustworthiness.

Ethnic minorities, for example, have historically been discriminated against in the health care system. Western countries, especially those with colonial histories, have also damaged people’s trust in medical treatments through their previous mistreatment of indigenous populations (Lowes and Montero 2021) and misuse of vaccination centers, for example, by the CIA in its hunt for Osama bin Laden (Reardon 2011). It is unsurprising that people would question scientific evidence communicated by the same institutions that caused them harm or deceived them in the past (Jamison, Quinn, and Freimuth 2019). (Lewandowsky et al. 2022, 33, my emphasis)

To elaborate this a little more, it might be helpful to make a three-way distinction between (a) (occurrent) trust, (b) perceived trustworthiness, and (c) actual trustworthiness. A potential trustee might satisfy criteria for trustworthiness, but be incorrectly perceived to be untrustworthy by the potential trustor. For example, under the influence of politicization campaigns, many US conservatives might incorrectly believe climate scientists to be more interested in their own careers than in the public interest, and thus perceive climate scientists to be untrustworthy.

The “trust” lens implies that “appreciation of why evidence is mistrusted in these communities is essential” and stresses the importance of “regain[ing] trust rather than dismiss[ing] beliefs based on lived experience as simple denialism” (Lewandowsky et al. 2022, 33).

Lewandowsky et al. (2022) attempt to bring these two lenses together. Understanding why a group’s “cultural background or lived experience” undermines the trustworthiness of mainstream institutions “can provide pointers as to why [they] engage[] in (or fall[] for)” irrationality and disinformation (Lewandowsky et al. 2022, 34). In addition (or perhaps “specifically”), understanding the causes of trustworthiness failures “can reveal shortcomings in the scientific process or evidence base … analysis of those arguments can provide valuable pointers to underlying issues—such as lacking representation in medical research—that can be addressed by suitable policies or remedial research” (Lewandowsky et al. 2022, 34).

I would add that the example of incorrectly perceiving climate scientists to be untrustworthy suggests that perceived untrustworthiness and misinformation can be co-producing. Climate propagandists have not only promulgated “first order” misinformation about climate change (“it’s all natural variation”) but also “second order” misinformation about climate scientists (“they’ll say whatever it takes to get published”).


Unfortunately, in their final section, “Recommendations,” Lewandowsky et al. (2022) seem to revert to the “science denial” lens alone. Their two primary recommendations are that “misleading and inappropriate argumentation must be identified” and that “when misleading arguments have been identified, they can be used to ‘inoculate’ the public against their ill effects” (Lewandowsky et al. 2022, 35). The “trust” lens’ emphasis on understanding why scientific institutions might not be (perceived to be) trustworthy has all but disappeared. That lens does get one paragraph, arguing that “Policies that take into account the reasons underlying misleading arguments can be more effective than those agnostic about these reasons” and that emphasizing “‘winning’ an argument” is unlikely to be successful (Lewandowsky et al. 2022, 35). But the final sentence asserts that deliberation “can be achieved” “only when misleading arguments can be identified and rejected” (Lewandowsky et al. 2022, 35), that is, only when “science” wins the argument.


I really like the way this piece contrasts two common frameworks for understanding public scientific controversies. The scientistic and technocratic implications of the “science denial”/misinformation framework are on full display, which is useful for illustrating why I typically dislike this framework and find it, at best, incomplete. And the attempt to synthesize the two frameworks is productive: as you can see, it prompted me to think about how I understand the role of propaganda and misinformation in my own analysis of controversies.

But that “Recommendations” section. To me, the paper reads like it had two sets of authors: one working with the misinfo framework, the other with the trustworthiness framework. The trustworthiness folks made a reasonable effort to integrate trustworthiness and misinformation in their section. But the misinfo folks ignored this, relying on their framework alone to write the concluding section.

References

Lewandowsky, Stephan, Konstantinos Armaos, Hendrik Bruns, Philipp Schmid, Dawn Liu Holford, Ulrike Hahn, Ahmed Al-Rawi, Sunita Sah, and John Cook. 2022. “When Science Becomes Embroiled in Conflict: Recognizing the Public’s Need for Debate While Combating Conspiracies and Misinformation.” The ANNALS of the American Academy of Political and Social Science 700 (1): 26–40. https://doi.org/10.1177/00027162221084663.