Computing Reviews
DARPA’s explainable artificial intelligence (XAI) program
Gunning D. IUI 2019 (Proceedings of the 24th International Conference on Intelligent User Interfaces, Marina del Rey, CA, Mar 17-20, 2019), ii-ii, 2019. Type: Proceedings
Date Reviewed: Jul 21, 2021

This very interesting survey talk provides a high-level view of the midterm progress of this ambitious research program.

David Gunning first presents the motivation for establishing the four-year program and for coining the acronym XAI, which stands for “explainable artificial intelligence.” XAI is by now a well-recognized term, denoting an entire field of scientific research that attempts to foster better symbiosis between humans and machines by understanding how AI reaches its conclusions; the central concern is whether we should trust a machine’s decisions.

The program includes the research of 11 teams, led by prominent US universities, that propose different techniques for achieving explainability in two AI use cases: 1) explaining a system’s recommendations to human analysts, and 2) explaining the decisions of autonomous systems.

The mathematical and engineering methods include heat map analysis to understand object recognition in image algorithms (UC Berkeley); studying the convolutional layers of a neural network to identify human-recognizable objects within them (MIT); combining visual and textual explanations with generative adversarial networks (UT Austin); training a system to play a game in order to observe autonomous decision making, and deriving finite state machines from conventional deep learning systems (Oregon); inserting models trained on particular subject matter into larger deep learning networks (Carnegie Mellon); detecting instances that are novel to the user (Brown University); adding text generation to accompany an explanation with causal expressions (Berkeley); and model induction (Rutgers). Alongside these, cognitive scientists and philosophers explore gender trust issues, mine the psychological literature for insights into cognition, and so on.
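To make the heat map idea concrete, here is a minimal occlusion-analysis sketch of my own (not code from the program; the model object, its predict interface, and the patch and stride parameters are all assumptions): slide a masking patch across the image and record how much the classifier's confidence drops.

    # Illustrative sketch only -- assumes `model.predict` returns class
    # probabilities for a batch of images (this interface is hypothetical).
    import numpy as np

    def occlusion_heatmap(model, image, target_class, patch=16, stride=8):
        """Return a 2-D map of confidence drops; hot regions drove the prediction."""
        h, w = image.shape[:2]
        base = model.predict(image[np.newaxis])[0][target_class]
        heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
        for i, y in enumerate(range(0, h - patch + 1, stride)):
            for j, x in enumerate(range(0, w - patch + 1, stride)):
                occluded = image.copy()
                occluded[y:y + patch, x:x + patch] = 0.0  # mask out one patch
                prob = model.predict(occluded[np.newaxis])[0][target_class]
                heat[i, j] = base - prob  # large drop => region was important
        return heat

Upsampling the returned grid and overlaying it on the input yields the kind of heat map explanation the talk attributes to the image-recognition work.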

System evaluation is carried out via novel strategies, for example, measuring user satisfaction within a cognitive psychology framework, asking users to predict a system’s accuracy, and building ontologies to determine when a system is right or wrong. One conclusion from the evaluation concerns the asymmetric impact of good and bad explanations: a bad explanation degrades the system’s outcomes far more than a good explanation improves its performance.
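As a toy illustration of the user-prediction measure (entirely my sketch, with an assumed data format, not the program's actual instrument), one could score how often a user's forecast of the system's correctness matches the real outcome:

    # Illustrative sketch only -- `user_predicts_correct` and
    # `system_was_correct` are assumed parallel lists of booleans per trial.
    def prediction_agreement(user_predicts_correct, system_was_correct):
        """Fraction of trials where the user's forecast matched the outcome."""
        pairs = list(zip(user_predicts_correct, system_was_correct))
        return sum(u == s for u, s in pairs) / len(pairs)

    # Example: a user who anticipates the system's errors 4 times out of 5
    print(prediction_agreement([True, True, False, True, False],
                               [True, False, False, True, False]))  # 0.8

Higher agreement suggests the explanations give users a workable mental model of when to trust the system.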

Although the program is only at its halfway point, the research already offers exciting evidence and promise. A lively discussion of the selection of approaches and the expected results follows. With an intriguing topic and a detailed, well-structured presentation, this talk is well suited to scholars, students, and professionals interested in the future of AI.

Reviewer: Mariana Damova
Review #: CR147314 (2111-0274)
Editor Recommended
Featured Reviewer
Categories: General (I.0); Human Factors (H.1.2); General (I.2.0)
Other reviews under "General":
A multi-modal approach for determining speaker location and focus
Siracusa M., Morency L., Wilson K., Fisher J., Darrell T. Multimodal interfaces (Proceedings of the 5th International Conference, Vancouver, British Columbia, Canada, Nov 5-7, 2003), 77-80, 2003. Type: Proceedings. Reviewed: Mar 1, 2004
Nanotechnology: science and computation (Natural Computing Series)
Chen J., Jonoska N., Rozenberg G., Springer-Verlag New York, Inc., Secaucus, NJ, 2006, 393 pp. Type: Book (9783540302957). Reviewed: Aug 2, 2007
High performance computing for big data: methodologies and applications
Wang C., CRC Press, Inc., Boca Raton, FL, 2018, 286 pp. Type: Book (978-1-498783-99-6), Reviews: (1 of 2). Reviewed: Apr 4, 2019
more...
