Computing Reviews
A survey of methods for explaining black box models
Guidotti R., Monreale A., Ruggieri S., Turini F., Giannotti F., Pedreschi D.  ACM Computing Surveys 51 (5): 1-42, 2019. Type: Article
Date Reviewed: Jan 25 2019

Computerized decision support systems have significant social consequences, and yet they are capable of mistakes or bias. Can an autonomous driving system be trusted, for example, when its visual scene recognition is implemented as a neural network trained on a dataset of traffic images? What about a college application system that automatically culls thousands of applications so that the admissions staff only needs to peruse a few hundred? One approach to gaining confidence in such systems, or at least to understanding their limitations, is to find some method to explain their inner workings. The problem is that, when constructed by an automated, data-driven process, such systems are opaque black boxes.

The rise of automated decision systems has been met by a profusion of research papers proposing approaches to explain them. This survey has 143 citations to such papers. While it contains brief summaries of the approaches, its main purpose is to set up a taxonomy based on the applicability of the explanation methods to particular types of systems. It is meant to help researchers find a suitable method by narrowing the body of papers they need to examine. One is struck by how closely the general problem of evaluating decision support systems resembles the particular problem of selecting an explanation method for one. In fact, the authors have seemingly made some use of their own recommended taxonomy by choosing visual representations of decision trees to explain the taxonomy itself. The figures and tables are quite helpful in understanding the paper.

The results of the taxonomy are reported in five tables that classify 76 methods according to the type of explanation method (for example, decision rules), the implementation type (for example, deep neural network), and the type of input data used (tabular, images, or text). The different tables correspond to the different purposes of the explanation, ranging from an overview of the system logic to a rationale for particular outcomes. It is left to the individual research papers to argue for the success of their methods.
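One family of methods the survey classifies approximates an opaque model with an interpretable one trained on the black box's own predictions (a "global surrogate"). The sketch below is purely illustrative and not taken from the paper: `black_box`, `fit_stump`, and the grid of probe points are invented for this example, and a one-split decision stump stands in for the decision trees and rule lists the survey catalogs.

```python
# Illustrative sketch of a global-surrogate explanation: query an opaque
# model, then fit a simple interpretable model to mimic its predictions.

def black_box(x):
    # Stand-in for an opaque model (e.g., a trained neural network):
    # we can query it but not inspect it.
    return 1 if 0.3 * x[0] + 0.7 * x[1] > 0.5 else 0

def fit_stump(samples, labels):
    """Fit a one-split decision stump, the simplest interpretable surrogate.

    Returns (feature_index, threshold) minimizing disagreement with the
    black box's labels over the probe samples.
    """
    best = None  # (error_count, feature_index, threshold)
    for f in range(len(samples[0])):
        for x in samples:
            t = x[f]
            preds = [1 if s[f] > t else 0 for s in samples]
            err = sum(p != y for p, y in zip(preds, labels))
            if best is None or err < best[0]:
                best = (err, f, t)
    return best[1], best[2]

# Probe the black box on a grid of inputs, then fit the surrogate to its answers.
grid = [(i / 10, j / 10) for i in range(11) for j in range(11)]
labels = [black_box(x) for x in grid]
feature, threshold = fit_stump(grid, labels)

# Fidelity: how often the surrogate agrees with the black box on the probes.
preds = [1 if x[feature] > threshold else 0 for x in grid]
fidelity = sum(p == y for p, y in zip(preds, labels)) / len(labels)
print(f"surrogate rule: predict 1 if x[{feature}] > {threshold:.2f} "
      f"(fidelity {fidelity:.2f})")
```

The surrogate recovers only the dominant feature, which illustrates the reviewer's closing point: a human-understandable model is necessarily an approximation of the system it explains.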

Overall, the paper is daunting due to the breadth and diversity of the area that it covers. It includes a thoughtful discussion of the risks of automated decision systems and the need to understand them. The explanation task is inherently limited by the fact that no human-understandable model can be more than an approximation of the systems in question.

Reviewer:  Jon Millen Review #: CR146398
Editor Recommended
Decision Support (H.4.2 ... )
Introductory And Survey (A.1 )
Other reviews under "Decision Support": Date
Compact flow diagrams for state sequences
Buchin K., Buchin M., Gudmundsson J., Horton M., Sijben S.  Journal of Experimental Algorithmics 221-23, 2017. Type: Article
May 31 2018
Decision support in tourism based on human-computer cloud
Smirnov A., Ponomarev A., Levashova T., Teslya N.  iiWAS 2016 (Proceedings of the 18th International Conference on Information Integration and Web-based Applications and Services, Singapore,  Nov 28-30, 2016) 125-132, 2016. Type: Proceedings
May 31 2017
Recommender systems: the textbook
Aggarwal C.,  Springer International Publishing, New York, NY, 2016. 498 pp. Type: Book (978-3-319296-57-9)
Feb 10 2017

Reproduction in whole or in part without permission is prohibited.   Copyright © 2000-2019 ThinkLoud, Inc.