
To quote T. S. Eliot’s “The Waste Land” [1]: “HURRY UP PLEASE ITS TIME.”
Raising robots to be good: a practical foray into the art and science of machine ethics is a timely and critical rethink of life and work in the age of artificial intelligence (AI) [2,3].
The book consists of nine chapters: (1) “Introduction”; (2) “Background”; (3) “A Survey of Machine Ethics”; (4) “Why We Need Moral Machines”; (5) “A Framework and Approach”; (6) “A Recipe for Morality”; (7) “Modelling Morality”; (8) “The Ethics of Machine Ethics”; and (9) “Summary and Next Steps.” It concludes with a glossary and index.
This book is both scholarly and practical. Scholars will appreciate the concise research summaries and extensive references. Of particular significance for entrepreneurs, engineers, and regulators is a redefinition of machine ethics as enabling moral agency rather than constraining moral behavior. Table 6.1 defines the requirements and a test specification for moral agency in a machine. According to Raper, these requirements are:
- (1) Must be able to make [autonomous] decisions;
- (2) Must understand its own [self] within the given situation;
- (3) Must be motivated to act in the situation at hand;
- (4) Must be able to envisage consequent states of affairs given a particular action;
- (5) Must understand its self in relation to the given state of affairs;
- (6) Must have its own personality;
- (7) Must be free;
- (8) Must need positive social interaction; and
- (9) Must prioritize positive social interaction above everything else.
Additionally, Raper outlines what she calls “robot psychology,” that is, the study of how machines function from a psychological perspective, so that humans can understand them. She proposes four questions to be researched:
- (1) What are the machine’s motivations?
- (2) Does the machine have inherent desires, and if so, what are they?
- (3) How does the machine process information from its environment to make decisions?
- (4) What are the machine’s cognitive abilities?
Chapter 9 reviews the past, present, and future of machine ethics, and provides a roadmap of where we need to go: “understand how children/humans develop morally,” along with four steps for a program of morality studies:
- (1) Better understand the human moral cognitive faculty;
- (2) Develop a more sophisticated information transfer model for moral knowledge acquisition;
- (3) Explore what assurance means in the context of AI safety; and
- (4) Investigate and develop a new regulatory framework [for] the introduction of moral machines.
Some of this work is underway [4,5].
The book concludes with a compelling closing remark: “Solving the problem of morality is not about finding a solution to what good behavior constitutes, but it is about a never-ending exploration into how we ought to live.”
However, are humans prepared for such an exploration? Watch the movie “I’m Not a Robot” [6]. Play the game Moral Machine, “a platform for gathering a human perspective on moral decisions made by machine intelligence” [7]. You be the judge.