Problems with Trolley Problems

Author: Dan Hicks
Published: December 27, 2017
Categories: reasoning

Oversimplified

  • There is an enumerable set of available courses of action
  • This set of available courses of action is fixed
  • The consequences of any given course of action arise immediately
  • The consequences of each course of action can (apparently) be predicted with certainty
  • The value (goodness or badness) of any given consequence can (apparently) be determined with certainty
  • The value of any given consequence is (apparently) objective
  • For any pair of consequences, their values are (apparently) commensurable
  • There is a single agent involved
  • The agent is responsible for making exactly one choice among the available courses of action
  • This choice cannot be modified as consequences develop or more information is received
  • (In the standard scenarios, the agent has little or no time to deliberate; at the same time, the scenario is presented as an object for extended deliberation)
  • (In the standard scenarios, there is no specific relationship between the agent and potential crash victims)
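
Taken together, these assumptions presuppose a very spare, one-shot decision model. The following is a minimal sketch of that model under the assumptions above; the action names and values are hypothetical, not a formalization anyone actually uses.

    # Minimal sketch (hypothetical names and values) of the decision model
    # the assumptions above jointly presuppose: one agent, one irreversible
    # choice, a fixed and enumerable action set, certain and immediate
    # consequences, and certain, objective, commensurable values.

    ACTIONS = ["stay_on_track", "pull_lever"]  # enumerable and fixed

    def consequence(action: str) -> str:
        # Consequences arise immediately and can be predicted with certainty.
        return {"stay_on_track": "five_die", "pull_lever": "one_dies"}[action]

    def value(outcome: str) -> float:
        # Values are certain, objective, and commensurable (one scalar scale).
        return {"five_die": -5.0, "one_dies": -1.0}[outcome]

    # Exactly one choice, never revisited as events unfold:
    choice = max(ACTIONS, key=lambda a: value(consequence(a)))
    print(choice)  # pull_lever

Everything the list flags as unrealistic (uncertainty, changing option sets, multiple agents, the ability to revise a choice) has to be assumed away before a function like this is even well defined.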

Misleading

  • (In the standard scenarios, the two options appear to map on to utilitarianism and Kantianism, reinforcing the idea that these two theories are mutually exclusive and jointly exhaustive)
  • Trolley problem scenarios (as distinct from accidental collisions) are likely to be rare (even exceedingly rare), while a focus on them might give people the impression that they are frequent
    • Compare: we aren’t really worrying in the same way about what happens if the car is struck by lightning and the AI starts to behave erratically, or how it will communicate with drivers of non-autonomous cars, cyclists, and pedestrians

Technocratic

  • Focus on individual actions, not character or systems and institutions
  • Fits within, and so reinforces, an instrumentalist model of engineering practice. This model might itself fit within, and so reinforce, a broader isolationist model of engineering practice.

Infeasible

  • Trolley problem scenarios are non-monotonic: adding details can change the valence of the consequences (see the first sketch at the end of this list)

  • The space of trolley problem scenarios is non-surveyable. Designers cannot feasibly survey all (or even a large, “realistic” subset) of these problems in order to develop ex ante guidelines for autonomous vehicles’ behavior. (See the second sketch at the end of this list.)

  • Can autonomous cars reliably determine the number of pedestrians they might hit, or which ones are children? Put positively, what information does an autonomous car actually have about its environment? (See the third sketch at the end of this list.)

  • Do autonomous cars deliberate like human beings? In line with either utilitarian or deontological models of practical reasoning?
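
First sketch: non-monotonicity. As a toy illustration (the rule and feature names here are hypothetical, not any vendor’s actual logic), a verdict that looks settled can be reversed by one added detail:

    # Toy decision rule (hypothetical): learning one more fact about the
    # scenario reverses the verdict, i.e., the reasoning is non-monotonic.

    def should_swerve(scenario: dict) -> bool:
        # Naive rule: swerve if the swerve path endangers fewer people.
        if scenario.get("people_on_swerve_path", 0) < scenario.get("people_ahead", 0):
            # Added detail flips the valence: a cliff on the swerve path
            # endangers the vehicle's own passengers.
            if scenario.get("cliff_on_swerve_path", False):
                return False
            return True
        return False

    base = {"people_ahead": 5, "people_on_swerve_path": 1}
    print(should_swerve(base))                                    # True
    print(should_swerve({**base, "cliff_on_swerve_path": True}))  # False

Any ex ante guideline has to anticipate which further details matter, and there is no obvious end to that list.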
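
Second sketch: non-surveyability. A back-of-the-envelope calculation (the feature count is an assumption for illustration, not an empirical estimate) shows how quickly the scenario space outruns any review process:

    # Even a coarse scenario description explodes combinatorially.
    n_binary_features = 40  # child present? cyclist? wet road? ... (assumed)
    scenarios = 2 ** n_binary_features
    print(f"{scenarios:.2e}")  # ~1.10e+12 distinct scenarios

    # Reviewing one scenario per second, around the clock:
    years = scenarios / (60 * 60 * 24 * 365)
    print(f"{years:,.0f} years")  # ~34,865 years

And binary features are generous: real scenarios also vary continuously in speeds, distances, and trajectories.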
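
Third sketch: what the car actually knows. A perception system reports probabilistic detections rather than certain facts; the data structure below is hypothetical, but the probabilistic character is typical of such systems:

    # Hypothetical perception output: confidence-weighted detections,
    # not a certain head count, and no reliable "is a child" attribute.
    detections = [
        {"label": "pedestrian", "confidence": 0.91},
        {"label": "pedestrian", "confidence": 0.54},
        {"label": "unknown",    "confidence": 0.33},
    ]

    expected_pedestrians = sum(
        d["confidence"] for d in detections if d["label"] == "pedestrian"
    )
    print(f"{expected_pedestrians:.2f}")  # 1.45 -- not the integer head count the trolley problem assumes

A scenario that stipulates “five people on the track, one on the siding” stipulates away exactly the epistemic situation the car is in.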