Computing Reviews
"Why should I trust you?": Explaining the predictions of any classifier
Ribeiro M., Singh S., Guestrin C.  KDD 2016 (Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, Aug 13-17, 2016), 1135-1144, 2016. Type: Proceedings
Date Reviewed: May 29 2020

When Bohr introduced his theory of quantum jumps as a model of what happens inside an atom, he asserted that quantum jumps exist but that no one can visualize them. The scientific community of the time was outraged, because science is all about explaining and visualizing physical phenomena; indeed, “not being able to visualize things seemed against the whole purpose of science” [1]. This paper deals with a phenomenon very similar to Bohr’s story; however, instead of quantum jumps and what happens inside an atom, it concerns interpretable machine learning (IML), that is, explaining what happens inside the machine as it learns from data and makes decisions (predictions).

In fact, IML is a very active research topic right now [2]. The authors present local interpretable model-agnostic explanations (LIME), an IML method. First, “model-agnostic” means that the method can explain the behavior of the machine without referring to (that is, accessing) its internal parameters. Second, “local” and “interpretable” mean that the explanation is itself an interpretable model that only needs to be faithful in the neighborhood of the input being explained. As a result, LIME can be considered a “white box” that locally approximates the behavior of the machine around an input: it fits a weighted linear model in which each input feature’s value is scaled by a weight, and those weights constitute the explanation.
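To make the idea concrete, the sketch below (my own illustration, not the authors’ implementation) builds a LIME-style local linear surrogate around one instance: it perturbs the instance, queries the black-box classifier on the perturbed samples, weights each sample by its proximity to the instance, and reads the explanation off the coefficients of a weighted linear fit. The function name lime_explain, the Gaussian perturbation scheme, and the exponential proximity kernel are illustrative assumptions rather than the paper’s exact procedure.

import numpy as np
from sklearn.linear_model import Ridge

def lime_explain(predict_fn, x, num_samples=1000, kernel_width=0.75):
    """Toy LIME-style local surrogate (illustrative sketch, not the authors' code).

    predict_fn: black-box function mapping a batch of inputs to class probabilities.
    x: 1-D numpy array, the instance to explain.
    Returns one weight per feature; larger magnitude means more local influence.
    """
    rng = np.random.default_rng(0)
    # Perturb the instance with Gaussian noise to sample its neighborhood.
    samples = x + rng.normal(scale=0.5, size=(num_samples, x.size))
    # Query the black box on the perturbed samples.
    preds = predict_fn(samples)
    # Weight each sample by its proximity to x (exponential kernel).
    dist = np.linalg.norm(samples - x, axis=1)
    weights = np.exp(-(dist ** 2) / (kernel_width ** 2))
    # Fit a weighted linear model; its coefficients are the explanation.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(samples, preds, sample_weight=weights)
    return surrogate.coef_

# Example: explain a hypothetical black-box classifier at one point.
if __name__ == "__main__":
    black_box = lambda X: 1 / (1 + np.exp(-(3 * X[:, 0] - 2 * X[:, 1])))
    print(lime_explain(black_box, np.array([0.2, -0.1])))

In this toy example the recovered weights have the same signs and roughly the same ratio as the coefficients of the underlying logistic function, which is exactly the kind of local, human-readable summary the paper argues builds trust in a classifier.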

I enjoyed this paper; it is very well written and covers a fundamental building block of IML. I recommend it to any researcher interested in the theoretical foundations of IML.

Reviewer:  Mario Antoine Aoun Review #: CR146981 (2010-0247)
1) Al-Khalili, J. Atom, “Episode 1.” BBC, 2007.
2) Du, M.; Liu, N.; Hu, X. Techniques for interpretable machine learning. Communications of the ACM 63, 1(2020), 68–77.
Classifier Design And Evaluation (I.5.2)
Human Factors (H.1.2)
General (H.0)
General (I.0)
Other reviews under "Classifier Design And Evaluation":
Linear discrimination with symmetrical models
Bobrowski L. Pattern Recognition 19(1): 101-109, 1986. Type: Article
Feb 1 1988
An application of a graph distance measure to the classification of muscle tissue patterns
Sanfeliu A. (ed), Fu K., Prewitt J. International Journal of Pattern Recognition and Artificial Intelligence 1(1): 17-42, 1987. Type: Article
Dec 1 1989
Selective networks and recognition automata
George N. J., Edelman G.  Computer culture: the scientific, intellectual, and social impact of the computer, New York, 1984. Type: Proceedings
May 1 1987
