Teaching ethics of expertise in Critical Thinking

Author: Dan Hicks
Published: September 25, 2023

I’m teaching expertise and appeals to authority in Critical Thinking in about a month, and I’m starting to wrestle yet again with how to approach it.

The textbook I use — like pretty much all of them? — takes the “believe the impartial sources who have the right credentials” approach. It’s completely out of touch with five decades of feminist philosophy and STS (science and technology studies).

I want to start by talking about multiple sources of expertise (credentials, but also lived experience), then Baier’s conception of trust and subsequent work on the ethics+epistemology of trust. Using Baier’s conception suggests two relatively clear criteria for a trustworthy expert:

competence: On the topic in question, the expert is likely to have true beliefs.

responsiveness: “[T]hey have encapsulated my interests: the trusted party pursues or protects particular interests, at least in part, because they are the trusted party’s interests …. the expectation that the [expert] will be directly and favorably moved by the thought that we are counting on her” (Almassi 2022, 578).

From here we can talk about why different groups of people can reasonably assess the trustworthiness of experts differently. And Almassi (2022) has a really nice argument that “owning the libs” — communicating that one is actively hostile towards an outgroup — isn’t actually evidence that the speaker is interested in promoting the ingroup’s interests, and thus doesn’t make the speaker more trustworthy for members of the ingroup. That will fit nicely into our discussion, over the subsequent two weeks, of the SIFT method for assessing online information sources and then of whether ChatGPT is trustworthy (spoiler alert: it is not).

So class time looks good. Where I’m feeling uncertainty is in assessment.

In this class, the discrete course units are assessed using a weekly quiz. The quiz has a simple format: students are given an argument as a paragraph of prose, and then have to answer a series of short response questions that walk them through analyzing the argument. The short response questions are explicitly tied to “rules” that the textbook gives for assessing that type of argument. In theory, this direct line between assessments and course content will help students study. For example, last year’s course used this question to get at competence:

What is the source’s area of expertise? Is this credentialed or experiential expertise? (Rule 14)

Expectations: 1 sentence characterizing area and source of expertise.

A typical prompt/argument to analyze for this unit goes like this:

Massimiliano Vasile, an aerospace engineer at the University of Glasgow, spent two years comparing nine different technologies that could be used if an asteroid were on a collision course with Earth. Dr. Vasile’s study revealed that it would be a bad idea to blow up an incoming asteroid with nuclear weapons. Thus, blowing up Earth-bound asteroids with nuclear weapons is a bad idea. (Adapted from: Lia Miller, “The Best Way to Deflect an Asteroid,” New York Times Magazine, December 9, 2007, <http://www.nytimes.com/2007/12/09/magazine/09_5_asteroid.html>)

Prompts like this contain no useful information for assessing responsiveness. Such appeals to expertise “seem to treat our communities of inquirers rather generically …. experts’ particular relationship to us … is elided, treated as irrelevant to the question at hand” (Almassi 2022, 578).

Some prompts do give negative information, indications that the expert is not responsive to our interests, in the form of signs that the source has a financial conflict of interest (“four out of five doctors smoke Camels”). That’s something, but I don’t want to reduce responsiveness to a question about conflicts of interest.

I need 7 of these prompts (there’s a new version of the quiz each week, so students can re-take it until they pass). Scrapping them all — or almost all of them — and writing whole new ones would be a lot of work. If I have to go actively looking, it can take 60-90 minutes to find a good example — usually from a piece of science journalism — and distill it into a single, undergrad-accessible paragraph.
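Just to make the time estimate explicit, using the per-prompt figures above:

$$
7 \text{ prompts} \times 60\text{–}90 \text{ minutes each} = 420\text{–}630 \text{ minutes} \approx 7\text{–}10.5 \text{ hours}
$$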

My dilemma is this: Stick with an inadequate approach to expertise, or put in an extra 7-10 hours of work to rewrite the quizzes?

References

Almassi, Ben. 2022. “Relationally Responsive Expert Trustworthiness.” Social Epistemology 36 (5): 576–85. https://doi.org/10.1080/02691728.2022.2103475.