Computing Reviews

AI assisted ethics
Etzioni A., Etzioni O. Ethics and Information Technology 18(2): 149-156, 2016. Type: Article
Date Reviewed: 08/31/16

Artificial intelligence (AI) implies that robots, machines, and instruments will make autonomous decisions, if only because they will make decisions far more quickly than a human can intervene. Consider driverless cars. At least some control decisions will have a moral aspect: Should a car swerve to avoid a cat if such an action puts its passengers at risk?

The paper addresses the question of how to provide moral guidance to the AI instrument. One approach is to use local community values as ensconced in local law. Law, however, commonly deals with blame and intent, which can be difficult to assign in the AI context. Community values, another approach, often go beyond the law, such as a parent's responsibility to her children, but agreeing on which community values to incorporate would be difficult. Allowing each person to select his or her own values, a libertarian approach, seems infeasible in practice.

The authors propose an ethics bot. The ethics bot would gather information about a person's actions: Does she or he recycle? Search for tax breaks? Slow down for yellow lights? The bot would then use AI to configure AI instruments that match the person's behavior. The bot would use much more information in decision making than a person could reasonably be expected to gather when actually deciding. It would be patient in decision making. And it would be based on behavior rather than on a person's expressed attitudes.
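To make the idea concrete, here is a minimal toy sketch, not the authors' design, of how such a bot might turn observed behavior into guidance. All names here (Observation, build_profile, advise, the situation labels) are invented for illustration; the sketch simply assumes the bot records routine choices and later recommends the action the person most often takes.

```python
# Toy illustration of an "ethics bot": observe a person's routine choices,
# summarize them into a value profile, and consult that profile later.
# Everything here is a hypothetical sketch, not the paper's implementation.

from collections import Counter
from dataclasses import dataclass


@dataclass(frozen=True)
class Observation:
    situation: str   # e.g., "yellow_light", "recycling_bin"
    action: str      # what the person actually did


def build_profile(observations: list[Observation]) -> dict[str, str]:
    """Infer, for each situation, the action the person most often takes."""
    by_situation: dict[str, Counter] = {}
    for obs in observations:
        by_situation.setdefault(obs.situation, Counter())[obs.action] += 1
    return {situation: counts.most_common(1)[0][0]
            for situation, counts in by_situation.items()}


def advise(profile: dict[str, str], situation: str) -> str:
    """Recommend the behavior matching the person's revealed preferences."""
    return profile.get(situation, "defer_to_human")


if __name__ == "__main__":
    history = [
        Observation("yellow_light", "slow_down"),
        Observation("yellow_light", "slow_down"),
        Observation("yellow_light", "speed_up"),
        Observation("recycling_bin", "recycle"),
    ]
    profile = build_profile(history)
    print(advise(profile, "yellow_light"))  # -> slow_down
    print(advise(profile, "tax_break"))     # -> defer_to_human (no data yet)
```

Even this toy version reflects the paper's key point: the guidance is derived from what the person does, aggregated patiently over many observations, rather than from what the person says she values.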

Developing the ethics bot seems feasible, if complex. The paper is clear and thought-provoking. I highly recommend it.

Reviewer: B. Hazeltine | Review #: CR144720 (1612-0940)
