Computing Reviews
Information, ethics, and computers: the problem of autonomous moral agents
Carsten Stahl B. Minds and Machines 14(1): 67-83, 2004. Type: Article
Date Reviewed: Jul 25 2005

Are computers able to use information to act morally and reflect ethically? This is the question Stahl seeks to address in this paper. The paper is cleverly structured and well conceived, but falls somewhat short of its potential in the delivery. Specifically, the author proceeds by making plausibility arguments in favor of computers as autonomous moral agents; these arguments, however, are weak. At this point, readers who support the notion of anthropomorphic machines are likely to be nodding eagerly, while skeptics are probably shaking their heads. For example, the author suggests a moral Turing test for assessing the moral status of a computer. Supporters eagerly nod; skeptics are dismayed. Then the author, who up to this point appears to be objectively supporting the plausibility of computers as moral agents, turns on the idea with some arguments of real merit. For example, moral behavior requires a contextual understanding of the meaning of a situation; since computers are not capable of contextual understanding (other than simulated), they are not capable of moral decision making. So the author has, in effect, set up a series of weak arguments and then blown them away with much stronger ones. It is a clever approach.

Unfortunately, the weakness of the paper is in the execution. A paper of this kind requires stronger writing to really pull it off. There are times when what the author is saying just sounds silly. For example, the author states, “In order to act morally according to utilitarian theory, one should do what maximizes the total utility. A computer can do this by simply functioning.” How can one respond to a line like that? In many other places, the paper is just unclear.

Nonetheless, the paper is a good piece for stimulating one’s thinking about the issue of computers as autonomous moral agents, and the ambiguities in the paper would make it a good vehicle for class discussion.

Reviewer: J. M. Artz | Review #: CR131558
Categories: Bounded-Action Devices (F.1.1); Cognitive Simulation (I.2.0); Ethics (K.4.1); General (I.2.0); Public Policy Issues (K.4.1)
