Stanley Kubrick’s 2001: A Space Odyssey opened in 1968, at a time when a general artificial intelligence (AI), embodied in the film’s HAL 9000 computer, was thought to be possible, perhaps even achievable, by the year in the movie’s title. Not everyone thought so. The most prominent challenger was Mortimer Taube, who died in 1965. Challenges to AI have continued over the decades. Some have been of the “emperor has no clothes” type, prompted by the lack of success in achieving the hyped goals that followed the birth of AI as a discipline. As a result, AI entered a “winter” period, stretching from the 1970s into the 1990s, during which projects were cancelled, companies failed, and funding was cut. Nonetheless, work continued under other guises, and AI has since resurged in more dispersed, individualized, pervasive, and subtle forms.
This book is a general critique of AI as it exists today from the viewpoint of the social sciences and humanities, instead of from within the science and engineering community. It continues the critical task started by Taube. All of the contributors except one come from the social sciences and humanities.
The volume starts with a long essay by the editor on the history of AI criticism and an overview of the book’s content. This is required reading, especially if the reader is not a member of the social sciences or humanities communities. The remainder of the book contains 11 chapters collected into five parts: “Posthumanism,” “Human Values,” “Media and Language,” “Governance,” and “Resistance.” Each part has two chapters, except “Governance,” which has three.
Posthumanism is a response to the Enlightenment idea of what it means to be human. René Descartes, in his Discourse on Method of 1637, declared, “I think, therefore I am.” The humanism of the Enlightenment thus locates the human experience in the process of cogitation. Hence, AI is criticized as an endorsement of the idea of a computationally based entity evolving from and transcending human mental processes. One way of looking at this idea is that the human race is creating its own evolutionary successor, on which responsibility for life in general can be laid. Viewed another way, the human race is creating its own god, an idol that will assume mastery over it. What are the tradeoffs among power, security, and responsibility? In practical terms, this means the transfer of control and responsibility to an electronic process. Can’t evaluate job applicants? Run a software package to collect, evaluate, and select applicants. People drive irresponsibly? Driverless automobiles will fix this. Can’t prepare a shopping list? Your refrigerator will generate it for you, order the items, pay for them out of your bank account, and have them delivered to your door. This thread of criticism stitches together the five parts of the book.
Another thread that connects all of the parts is the question of inclusiveness and diversity, especially as it affects those who are not white males. Although some readers may consider this criticism a polemic, ample evidence and argument are presented that the net results of applying AI can be deleterious to women and people of color. It is interesting to note that the use of AI in commercial devices seems to default to a feminization of the tasks the devices perform. My Amazon Alexa has a female voice, my Global Positioning System (GPS) device has a female voice, and robotic devices like Roomba perform maid services (a role traditionally dominated by female workers). There are even robots (appearing female) that can act as desk clerks in hotels. These may seem like trivial examples, but they are models of replacing female workers with robots. A more serious aspect of this critique is the use of AI in screening job applicants. The applicant has no idea of the preferences and prejudices applied by the software analyzing application forms and resumes.
Natural language processing (NLP) and translation form an area in which accurate use of gender can be a pitfall. Gender in English is largely natural: male creatures and female creatures are clearly differentiated, with everything else neuter. Other languages are different. In German, der Hund (a dog) is grammatically masculine, while die Katze (a cat) is grammatically feminine. It gets worse when there are two words for the same profession. In English, a physician is a “doctor.” In German, der Arzt is a male physician and die Ärztin is a female physician. Translation between English and German should be able to distinguish between the two forms gracefully when specific individuals are discussed, for example, Dr. John Smith, MD, versus Dr. Jane Smith, MD.
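The pitfall above can be sketched in a few lines of code. This is a toy illustration only, not any real translation library’s API: a hypothetical lookup in which an English profession noun plus the referent’s gender selects the correct German form.

```python
# Toy sketch of gender-aware English-to-German noun selection.
# The table and function names are hypothetical illustrations;
# real machine translation systems infer gender from context,
# which is exactly where they can go wrong.

GERMAN_NOUNS = {
    ("doctor", "male"): "der Arzt",
    ("doctor", "female"): "die Ärztin",
    ("cat", None): "die Katze",   # grammatical gender, fixed
    ("dog", None): "der Hund",    # grammatical gender, fixed
}

def translate_noun(english_word, referent_gender=None):
    """Return the German noun (with article) for an English noun,
    using the referent's gender when the language requires it."""
    return GERMAN_NOUNS.get((english_word, referent_gender))

print(translate_noun("doctor", "female"))  # die Ärztin
print(translate_noun("doctor", "male"))    # der Arzt
```

The point of the sketch is that the English source word alone is insufficient input: without the referent’s gender, no correct German output exists for “doctor.”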
No critique would be complete without several chapters on specific applications. The part on governance discusses three areas in different stages of implementation. Nanotechnology and AI for biomedical applications have not arrived yet. However, cellphone location data has been used for contact tracing in India, which has opened up a serious discussion of privacy issues. The Chinese government has created an AI court with an AI judge that evaluates evidence and renders decisions. This is truly scary.
All of the chapters are well written, and the editorial process was clearly very good. The contributors were apprised of what the others were writing, and they refer to each other in their essays. This is unusual. I found four essays particularly noteworthy: Engström’s on the infusion of AI into apps, Schwartz’s on the dehumanization of responsibility, Huxor’s on AI technologies as smart media, and Garvey’s manifesto at the end of the book. If you are short on time, these core essays are must-reads. I found this book interesting because it showed me what others outside the computational science community see.