Computing Reviews
Artificial intelligence and its discontents: critiques from the social sciences and humanities
Hanemaayer A., Palgrave Macmillan, New York, NY, 2022. 290 pp. Type: Book (978-3-030-88614-1)
Date Reviewed: Oct 19 2022

Artificial intelligence (AI) is an ambiguous term that spans a broad range of topics, from narrowly focused neural nets that recognize patterns within a limited scope to the idea of a human-level, or higher, thinking capability. The field began in the 1950s, but was constrained by the computing power available at the time. Today’s computing platforms make the high-end vision of AI seem increasingly achievable.

The rise in power of AI systems has raised several concerns. Current issues with deep learning pattern-matching systems include: an AI’s output may reflect biases in the data used to train the system; an automated decision-making system may lack the human judgment needed for optimal outcomes; and people using a system might not realize that AI is subtly directing their path to an objective.

A further concern is that humans could cede their decision-making autonomy to AI, or become so dependent on it that they exercise their own judgment less. This raises the question of who bears responsibility for adverse events: a person acting on the output of an AI system, for example, or an autonomously driven vehicle involved in a crash or striking a pedestrian. The use of AI in robots also raises concerns about machines displacing human workers. And there are ethical puzzles: what if an AI begins to act as a conscious being, that is, responds to inputs much as a human would? How should we treat such a system ethically and legally?

All of these concerns are valid and should lead to a critique of the developing AI industry. AI and its discontents explores the growing potential for human interactions with AI from a number of perspectives, including ethical, legal, and medical. In pursuing such a critique, one must consider that we currently have limited knowledge of how AI systems produce the results we see. In the medical realm, practitioners must consider how AI systems such as future nanorobotics will interface with human hosts to operate safely. More generally, we have no way to objectively determine if an AI system is “conscious,” because we have yet to understand consciousness in human terms.

Machine translation (MT) between languages is an important application of AI. It is a difficult area given the many differences of expression between languages, and even within a single language. One concern is the introduction of biases of various types, since MT systems are typically trained on data taken from the Internet. One could argue that the book’s treatment here is overly broad: declaring a phrase biased is often a subjective judgment, and a designer’s bias-detection algorithm walks a fine line between protecting someone’s sensibilities and censoring free speech or branding a statement “hate speech.” Should an algorithm decide whether a statement is offensive? Should any third party make that decision for an individual?

The book isn’t flawless. A chapter from India about a COVID-19 tracking application mentions its use of AI, but primarily discusses the dangers of governmental data collection as an invasion of personal privacy, making no mention of what specific dangers AI might introduce.

A few of the chapters use Marxist philosophy as a basis for critique. As repeatedly demonstrated since the early 20th century, the application of Marx’s nihilistic and questionable ideas has led to human death and misery. Marxist rhetoric fails to convince.

A paper that examines AI from a Black feminist perspective mostly discusses historical racism in the US. It presents evidence that some AI systems may be biased, such as facial recognition being less accurate for certain races, but then infers that this problem may stem from the software developers themselves or some type of “algorithmic bias.” One can speculate about the honesty of the developers, but it is more likely that such bias would arise from the training data rather than the recognition algorithm; either, however, is a fixable problem.

A more convincing paper describes how AI guides users of social media platforms through suggestions of what to peruse next. This could be a search for entertainment or, more importantly, information about political candidates, where certain information is provided or withheld. This matters because studies have demonstrated how such biases can influence an individual’s voting and other choices.

All in all, the book offers a number of useful critiques of AI as it interacts with increasing numbers of Internet users. Readers should take the Marxist perspectives with a grain of salt. Part of the problem is that viewing the world through a Marxist lens omits the moral imperative of capitalism described by Adam Smith: the true essence of capitalism requires both sides of a transaction to approach it with honesty and goodwill. This essence applies to AI as well as any other human-to-human interaction.

Reviewer: G. R. Mayforth
Review #: CR147505 (2302-0021)
Ethics (K.4.1 ...)
Applications And Expert Systems (I.2.1)
General (I.2.0)
Arts And Humanities (J.5)
Social And Behavioral Sciences (J.4)
Other reviews under “Ethics”:

Making ethical decisions
Kreie J., Cronan T. Communications of the ACM 43(12): 66-71, 2000. Type: Article (reviewed Apr 1 2001)

Ethical and social issues in the information age
Kizza J., Springer-Verlag New York, Inc., Secaucus, NJ, 2002. 232 pp. Type: Book (9780387954219) (reviewed May 6 2003)

The error of futurism: prediction and computer ethics
Horner D. ACM SIGCAS Computers and Society 32(7): 42004. Type: Article (reviewed Apr 30 2004)

Reproduction in whole or in part without permission is prohibited.   Copyright 1999-2024 ThinkLoud®