Virtue Ethics for Robots

Categories: automated, creativity, data, rights, robot
Author: Dan Hicks

Published: June 18, 2014

Autonomous automated systems — machines capable of acting on their own for extended periods of time in complex environments — have been a major trope of science fiction for as long as the genre has existed. Over the last ten years or so, autonomous systems research has advanced dramatically, and it is widely expected that autonomous automated systems will be common in warfare and everyday life (at least in wealthy countries) within the next decade.

In light of these developments, “robot ethics” has developed as a serious scholarly (and popular) topic. Dipping into this literature, I get the impression that much of robot ethics takes what I’m going to call a principle-based approach to ethics. On such an approach, ethical judgment is a matter of, first, identifying the correct set of ethical principles, and second, correctly applying those principles in a given situation. (I have some thoughts on why robot ethics has taken the principle-based approach, but I’ll leave them out here. I’d be happy to elaborate in the comments.)

In this post, I’m going to argue that a principle-based approach tends to overlook two important features of ethical judgments in complex situations. In addition, I think that these two features correspond to certain worries that many members of the public have about automated ethical decisionmaking — that is, about turning over responsibility for ethical decisions to autonomous automated systems. So, insofar as robot ethicists want to (try to) assuage public concerns about autonomous automated systems, they should at least broaden their range of ethical approaches.

Before going further, I should make it clear that I’m not actually very familiar with the current state of robot ethics. I’ve glanced at a few things, and read some things shared (and in a couple of cases, written) by my advisor, Don Howard, over social media, and I’ve had a few discussions with my friend and co-author Charles Pence. But I am, professionally speaking, a philosopher with expertise in ethics, and I generally work within an approach to ethics that is sharply critical of the limitations of a principle-based approach. So hopefully my arguments here will be valuable to robot ethicists even if they are built on a hasty generalization.

My argument is going to contrast principle-based approaches to ethics with an alternative approach called virtue ethics. Within philosophical work on ethics, principle-based approaches — especially Kantian deontology, utilitarianism, and appeals to human rights — have been dominant for the last several centuries. Virtue ethics has been recognized as a distinct approach for about 40 or 50 years, but virtue ethicists trace their approach back to such ancient philosophers as Aristotle and Confucius, as well as the traditional ethe (the plural of ethos) of agrarian and nomadic hunting societies in Africa and the Americas.

The principle-based approaches generally focus on decisions about particular actions, and aim to find the one right or best course of action to take in a given situation. By contrast, virtue ethics generally focuses on the character of agents over time. Since they’re less concerned about what to do in particular situations, virtue ethicists tend to be more attentive to complexity and uncertainty in decisionmaking than proponents of principle-based approaches. In addition, virtue ethicists tend to be concerned with more aspects of our ethical lives than just making decisions. Consequently, virtue ethicists have developed the conceptual tools that I’ll use below, and it’s hard to find room for these tools in the principle-based approaches. Thus, again, robot ethicists should expand their approaches, so that they can make use of these virtue ethics tools.

To make my discussion concrete, let’s consider two cases, both of which seem to be common in robot ethics.

An autonomous, automated soldier/weapons system is sent on a mission to engage with an enemy squad. The squad has barricaded themselves in a small building with a dozen children. Should the system engage the enemy — risking the lives of the children — or maintain its distance — risking the possibility that the enemy squad will escape?

An autonomous, automated car is transporting one passenger along a narrow street with parked cars on either side. A child suddenly runs out into the street from between two parked cars. The car does not have time to brake safely. Should it veer into the cars on the side of the street — risking the life of its passenger — or hit the child — risking the child’s life?

On the principle-based approach, the task for robot ethics is to identify the correct ethical principles to be applied in these cases, and so to determine the ethically correct course of action for the automated system.

However, I argue that there is no ethically correct course of action in either of these cases. Both cases are examples of moral dilemmas: situations in which every available course of action is, in some respect or another, seriously bad, wrong, or vicious. In the automated car case, both of the available options risk serious harm to moral innocents (people who do not deserve to be harmed). In the automated weapons system case, the choice is between serious harm to moral innocents now and serious harm later (after the enemy squad escapes).

Principle-based approaches have trouble recognizing moral dilemmas to the extent that they are based on a single fundamental ethical principle, such as Kant’s categorical imperative or utilitarianism’s principle of greatest happiness. On pain of contradiction, the fundamental ethical principle cannot imply both that the system should do X and that the system should not do X. Pluralist principle-based approaches — like Beauchamp and Childress’ principlism for biomedical ethics — do better here. But the impression I have is that writers in robot ethics generally do not recognize the moral dilemmas in these cases; instead, they keep searching for (and arguing about) “the” right answer.

Bernard Williams, Rosalind Hursthouse, and Lisa Tessman — three major virtue ethicists — all emphasize regret as an appropriate response to finding oneself in a moral dilemma. Consider an ordinary human being, rather than an autonomous system, in either of our example cases. We expect that this person, whatever they do, will feel ethically responsible for bringing about a serious harm; they will feel regret. The fact that they have done the best that they could do in terrible, regrettable circumstances might mean that we do not punish them for, say, the deaths of the children. But they will still feel responsible. Furthermore, this feeling of regret is an ethically appropriate response to a moral dilemma. Someone who does not feel bad in any way at all about the deaths of the children is callous, morally reprehensible, and a candidate for antisocial personality disorder under the DSM-IV criteria.

This brings us to the first concern that, I think, many people have about autonomous ethical decisionmaking. In the popular imagination, robots are cold, calculating, and callous; they simply “follow their programming” wherever it leads them. In other words, autonomous systems do not experience regret when confronted with moral dilemmas, and so respond to these situations in disturbing and ethically inappropriate ways. In this respect, concerns about automated ethical decisionmaking parallel Cold War-era concerns about scientific-military technocrats — Dr Strangelove and HAL are both morally reprehensible and disturbing characters because of their callous, calculating decisionmaking processes.

I presented our two example cases dichotomously: the autonomous automated system must choose exactly one of two options, A or B. But of course real-world cases aren’t that simple; there are many different courses of action that could be taken, and many different options within each possible course. In the case of the automated soldier/weapons system, the system could rush into the building, relying on its superhuman reaction times to take out the enemy before any of the children are hurt; it could use smoke or some other means to drive everyone out of the building; or it could call in automated or human support.

Principle-based approaches generally assume that the range of possible courses of action is fixed and known in advance. The principles are used to evaluate each possible course, and the correct course of action is the one that comes out highest on this evaluation. Even Beauchamp and Childress’ approach seems to assume that this is how ethical decisionmaking works. But the observation in the last paragraph indicates that, in general, we don’t know the range of possible courses of action in advance.
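To make that assumption concrete, here is a minimal sketch, in Python, of the decision procedure that (on my reading) principle-based approaches presuppose. Every name in it is hypothetical and the scoring principle is a toy; the point is only that the menu of options is fixed before deliberation starts, and the system’s job is just to score and select.

```python
# A minimal sketch (hypothetical names throughout) of the decision model that,
# on my reading, principle-based approaches presuppose: the candidate actions
# are enumerated in advance, each is scored against a fixed set of principles,
# and the system returns the highest-scoring option.

from typing import Callable, Sequence

Action = str
Principle = Callable[[Action], float]  # maps an action to a degree of compliance

def principle_based_choice(actions: Sequence[Action],
                           principles: Sequence[Principle]) -> Action:
    """Pick the action that scores best, summed across all principles."""
    return max(actions, key=lambda a: sum(p(a) for p in principles))

# Toy version of the automated-car case, with the option space frozen at two choices.
options = ["veer into the parked cars", "continue and hit the child"]
# A crude harm-minimizing principle; real principles would be far richer.
principles = [lambda a: -1.0 if "child" in a else -0.5]
print(principle_based_choice(options, principles))
```

The worry developed in the next few paragraphs is, in effect, a worry about the first step of this picture: in real situations, the menu is not fixed.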

Indeed, in ethically complex and challenging situations, there is a crucial role for creativity and innovation. Sometimes we can appear to be in a moral dilemma because of serious problems with all of the simple, obvious options. But, by taking advantage of particular features of the situation in creative ways, we can find a way to avoid the dilemma. Because principle-based approaches generally assume that all of the possible courses of action are known in advance, they generally miss this role for creativity.

I want to stress that this role for creativity is closely tied to the particular features of the situation. We might call this tactical creativity, to distinguish it from the strategic creativity that can develop novel approaches to general situations. Both strategic and tactical creativity are invaluable resources for resolving ethical dilemmas and novel situations; but tactical creativity, unlike strategic creativity, can only be exercised “on the ground,” in the particular situation at hand. Consequently, correctly resolving some apparent moral dilemmas requires tactical creativity.

This brings us to the second concern about automated ethical decisionmaking. Robots, in the popular imagination, are uncreative and unadaptable; as with the first concern, they simply “follow their programming,” rigidly and without deviation. In other words, while their designers might exercise strategic creativity, autonomous automated systems are incapable of tactical creativity. But, as I argued above, tactical creativity is required to resolve some apparent moral dilemmas. Because these systems are rigid and uncreative, they are incapable of seeing the creative way out of such a dilemma, and so they are more likely than creative humans to do unnecessary ethical damage.

In this post, I’ve raised two concerns for robot ethicists, drawing on work in virtue ethics. First, because autonomous systems do not experience regret, they do not respond appropriately to finding themselves in moral dilemmas. And second, because these systems lack creativity, they are incapable of correctly resolving some moral dilemmas.

My arguments for these concerns draw on the popular image of robots and other autonomous automated systems as cold, callous, and calculating. Some readers might think that these concerns can be addressed by changing the popular image of robots — by encouraging people to think of Data from Star Trek or the robot in Robot and Frank, rather than HAL. But this would be superficial. What’s necessary, I think, is to understand ethical deliberation by autonomous automated systems not as the application of a set of principles, but as the having and exercising of a set of responsive capacities, including regret and creativity. This might require a radical change in the way roboticists approach system design, but (if successful) it would produce robots with genuinely virtuous character.
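To gesture at what that alternative might look like, here is a purely illustrative sketch. None of this is real robotics, and every name and hook in it is hypothetical; the contrast with the earlier sketch is simply structural. The agent can try to expand the option space in light of the particular situation (a stand-in for tactical creativity), and it registers regret whenever every available option still inflicts serious harm, rather than treating the top-scoring option as simply “correct.”

```python
# A purely illustrative contrast (all names hypothetical): instead of scoring a
# fixed menu, the agent exercises responsive capacities. It tries to expand the
# option space using features of the particular situation, and it records
# regret whenever every available option still does serious harm.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Situation:
    options: List[str]
    # Hypothetical hook for tactical creativity: propose new options based on
    # the particular features of the situation at hand.
    generate_options: Callable[[List[str]], List[str]]

@dataclass
class ResponsiveAgent:
    regrets: List[str] = field(default_factory=list)

    def deliberate(self, situation: Situation,
                   harmful: Callable[[str], bool]) -> str:
        # Tactical creativity: the option set is not fixed in advance.
        candidates = situation.options + situation.generate_options(situation.options)
        choice = min(candidates, key=harmful)  # prefer any non-harmful option
        if harmful(choice):
            # A genuine dilemma: every candidate does serious harm. Record
            # regret rather than treating the choice as simply "correct."
            self.regrets.append(f"serious harm done by: {choice}")
        return choice
```

Obviously this toy doesn’t capture what regret or creativity actually are; it only marks where, structurally, they would have to enter the deliberation.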
