Computing Reviews
Humble AI
Knowles B., D'Cruz J., Richards J., Varshney K. Communications of the ACM 66(9): 73-79, 2023. Type: Article
Date Reviewed: Nov 8 2023

While readers of Computing Reviews are better equipped than the general population to judge whether artificial intelligence (AI) is a magical panacea, or whether a general intelligence that thinks and makes decisions on its own is likely, we also recognize AI's greatest strength: finding patterns, often hidden to the naked eye, and drawing inferences from them. That ability has made AI-based systems a tool of choice for making statistical predictions and estimations about human behavior, learned from enormous datasets.

AI tools are often applied to risk assessment for financial operations. This article argues, however, that a negative AI-based evaluation, particularly a false negative--that is, predicting that a credit will be defaulted on rather than repaid, or that an individual will not honor their payment commitments--can have devastating consequences for an individual, or even a family, and is not to be taken lightly. The authors thus make a case for "humble AI." They show many situations where AI-based decisions fall into gray areas and a more humane touch might be needed, and they suggest several ways of setting thresholds so that AI-based rankings are ultimately reviewed by humans instead of being used directly for decision making.
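The threshold idea the authors suggest can be illustrated with a minimal sketch. The function name, the band limits, and the probability input are illustrative assumptions, not details from the article; the point is only that confident scores are decided automatically while gray-area scores are deferred to a person.

```python
# Sketch of threshold-based deferral ("humble" decision making).
# The cutoff values below are hypothetical, chosen only for illustration.
def route_application(p_default: float,
                      deny_above: float = 0.8,
                      approve_below: float = 0.2) -> str:
    """Route a credit application given a model's estimated probability
    of default. Confident cases are decided automatically; gray-area
    cases are referred to a human reviewer."""
    if p_default >= deny_above:
        return "deny"
    if p_default <= approve_below:
        return "approve"
    return "refer to human reviewer"
```

The width of the gray band is a policy choice: a wider band sends more borderline cases to humans, trading automation savings for fewer harmful false negatives.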

The article also digs into how the public perceives AI-based decision tools. The authors note that society has been exposed to a large corpus of science fiction in which AI systems become self-aware and dominate humanity. Although such fears are not the main argument, the authors devote considerable space to how false negatives fuel public distrust of AI.

Quoting the article's conclusions: "Humble trust does not imply trusting indiscriminately. Rather, it calls for developers and deployers of AI systems to be responsive to the effects of misplaced distrust and to manifest epistemic humility about the causes of human behavior." The thesis seems sound--although it somewhat undercuts the human resources (HR) savings touted by AI proponents as the main driver behind adoption, so in a sense the argument "bites its own tail." Hopefully, it will get some people to think and act more humanely.

Reviewer:  Gunnar Wolf Review #: CR147662
Ethics (K.4.1 ... )
General (I.2.0 )
Artificial Intelligence (I.2 )
Other reviews under "Ethics":
Making ethical decisions
Kreie J., Cronan T. Communications of the ACM 43(12): 66-71, 2000. Type: Article
Apr 1 2001
Ethical and social issues in the information age
Kizza J., Springer-Verlag New York, Inc., Secaucus, NJ, 2002. 232 pp., Type: Book (9780387954219)
May 6 2003
The error of futurism: prediction and computer ethics
Horner D. ACM SIGCAS Computers and Society 32(7): 4, 2004. Type: Article
Apr 30 2004

Reproduction in whole or in part without permission is prohibited.   Copyright 1999-2024 ThinkLoud®