Are computers able to use information to act morally and to reflect ethically? This is the question Stahl seeks to address in this paper. The paper is cleverly structured and well conceived, but falls somewhat short of its potential in the delivery. Specifically, the author begins by making plausibility arguments in favor of computers as autonomous moral agents; these arguments, however, are weak. At this point, readers who support the notion of anthropomorphic machines are likely nodding eagerly, while skeptics are shaking their heads. For example, the author proposes a moral Turing test for assessing the moral status of a computer, a suggestion that delights supporters and dismays skeptics. The author, who to this point has appeared to support the plausibility of computers as moral agents, then turns on the idea with arguments of genuine merit. For example, moral behavior requires a contextual understanding of the meaning of a situation; since computers are capable of only simulated contextual understanding, they are not capable of moral decision making. The author has, in effect, set up a series of weak arguments and then blown them away with much stronger ones. It is a clever approach.
Unfortunately, the paper's weakness lies in its execution. A paper of this kind requires stronger writing to pull it off. At times, what the author says just sounds silly. For example, he states, “In order to act morally according to utilitarian theory, one should do what maximizes the total utility. A computer can do this by simply functioning.” How can one respond to a line like that? In many other places, the paper is simply unclear.
Nonetheless, the paper is a good piece for stimulating one’s thinking about computers as autonomous moral agents, and its very ambiguities would make it a good vehicle for class discussion.