Computing Reviews

Cover Quote: March 2019

For this [human-machine] cooperation to work safely and beneficially for both humans and machines, artificial agents should follow moral values and ethical principles (appropriate to where they will act), as well as safety constraints. When directed to achieve a set of goals, agents should ensure that their actions do not violate these principles and values overtly, or through negligence by performing risky actions. It would be easier for humans to accept and trust machines who behave as ethically as we do, and these principles would make it easier for artificial agents to determine their actions and explain their behavior in terms understandable by humans. Moreover, if machines and humans needed to make decisions together, shared moral values and ethical principles would facilitate consensus and compromise. Imagine a room full of physicians trying to decide on the best treatment for a patient with a difficult case. Now add an artificial agent that has read everything that has been written about the patient’s disease and similar cases, and thus can help the physicians compare the options and make a much more informed choice. To be trustworthy, the agent should care about the same values as the physicians: curing the disease should not [come at the] detriment of the patient’s well-being.

- Francesca Rossi
"How do you teach a machine to be moral?", 2015