Computing Reviews

"Why should I trust you?":Explaining the predictions of any classifier
Ribeiro M., Singh S., Guestrin C.  KDD 2016 (Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, Aug 13-17, 2016)1135-1144,2016.Type:Proceedings
Date Reviewed: 05/29/20

When Bohr introduced his theory of quantum jumps as a model of the inside of an atom, he said that quantum jumps exist but that no one can visualize them. At the time, the scientific community was outraged, because science is all about explaining and visualizing physical phenomena; indeed, "not being able to visualize things seemed against the whole purpose of science" [1]. This paper deals with a phenomenon very similar to Bohr's story; however, instead of quantum jumps and what happens inside an atom, it concerns interpretable machine learning (IML) and what happens inside the machine when it learns facts and makes decisions (that is, predictions).

In fact, IML is a very hot topic right now [2]. The authors present local interpretable model-agnostic explanations (LIME), an IML method. First, "model-agnostic" means that the method can explain the behavior of any classifier without referring to (that is, accessing) its internal parameters. Second, "local interpretable" means that the explanation is only valid in the neighborhood of the input being explained. As a result, LIME can be considered a "white box" that locally approximates the behavior of the black-box model around an input: it perturbs the input, queries the model for predictions on the perturbed samples, weights those samples by their proximity to the original input, and fits a linear model whose weighted sum over the input features indicates each feature's contribution to the prediction.
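To make this procedure concrete, here is a minimal sketch of the local-surrogate idea in Python. It assumes a scikit-learn-style black box exposing predict_proba; the Gaussian perturbations, exponential proximity kernel, and ridge-regression surrogate are illustrative choices standing in for the paper's exact sampling and sparse fitting steps, not the authors' implementation.

```python
# Sketch of a LIME-style local explanation (illustrative, not the
# paper's exact algorithm): perturb the instance, query the black box,
# weight samples by proximity, and fit a local linear surrogate.
import numpy as np
from sklearn.linear_model import Ridge

def lime_explain(predict_proba, x, n_samples=1000, kernel_width=0.75, seed=0):
    """Return per-feature weights of a linear model that locally
    approximates the black-box classifier around instance x."""
    rng = np.random.default_rng(seed)
    # 1. Sample perturbed points in the neighborhood of x.
    X_pert = x + rng.normal(scale=0.5, size=(n_samples, x.shape[0]))
    # 2. Query the black box for its predictions on those points.
    y_pert = predict_proba(X_pert)[:, 1]  # probability of the positive class
    # 3. Weight each sample by its proximity to x (exponential kernel).
    dists = np.linalg.norm(X_pert - x, axis=1)
    weights = np.exp(-(dists ** 2) / (kernel_width ** 2))
    # 4. Fit an interpretable (linear) surrogate on the weighted samples.
    local_model = Ridge(alpha=1.0)
    local_model.fit(X_pert, y_pert, sample_weight=weights)
    return local_model.coef_  # each coefficient is a feature's local influence

# Example usage with a random forest as the opaque model.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_features=5, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X, y)
print(lime_explain(clf.predict_proba, X[0]))
```

Note how the surrogate only ever calls predict_proba, which is precisely what makes the approach model-agnostic: the random forest could be swapped for any classifier without changing the explanation code.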

I enjoyed this paper; it is very well written and covers a fundamental building block of IML. I recommend it to any researcher interested in the theoretical foundations of IML.


1) Al-Khalili, J. Atom, "Episode 1." BBC, 2007.

2) Du, M.; Liu, N.; Hu, X. Techniques for interpretable machine learning. Communications of the ACM 63, 1 (2020), 68-77.

Reviewer: Mario Antoine Aoun   Review #: CR146981 (2010-0247)
