Computing Reviews

The seven tools of causal inference, with reflections on machine learning
Pearl J. Communications of the ACM 62(3): 54-60, 2019. Type: Article
Date Reviewed: 03/23/20

According to this article, there are three obstacles to meeting the increasing expectations for artificial intelligence (AI): the lack of adaptability or robustness, the lack of explainability, and “the lack of understanding of cause-effect connections.” The article is mainly concerned with the last of these. The author asserts that an intelligent system should be able to answer questions such as “What would have happened if I had acted differently?” and claims that all three obstacles can be overcome, and such questions answered, by means of causal reasoning.

The article gives an overview of causal reasoning and structural causal models (SCM), emphasizing a three-level hierarchy in which each level can answer different types of questions. The first level manages association, with questions such as “What does a symptom tell me about a disease?” The second level, “intervention,” manages questions like “What if we ban cigarettes?” The third level manages counterfactuals, dealing with questions like “Would Kennedy be alive had Oswald not shot him?”
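In Pearl’s standard notation (a sketch following his usual presentation rather than a quotation from the article), the three levels correspond to three kinds of expressions:

    Association (seeing):        P(y | x)          what does observing X = x tell us about Y?
    Intervention (doing):        P(y | do(x))      what happens to Y if we set X = x?
    Counterfactual (imagining):  P(y_x | x', y')   would Y have been y had X been x, given that X = x' and Y = y' were actually observed?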

The tools used for causal reasoning include graphical models, which show the causal relationships between variables, and the “do-calculus,” which simulates physical interventions by predicting the distribution that would result from a specific action. The article’s extended example illustrates the process and presents “a bird’s-eye view of seven tasks accomplished through the SCM framework.”
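A minimal simulation sketch (not from the article; the toy model, variable names, and coefficients are illustrative assumptions) makes the distinction concrete: conditioning on an observed X mixes in the influence of a confounder, while the do-operator severs the arrow into X and isolates its causal effect.

    # Toy structural causal model with a hidden confounder:
    #   U -> X, U -> Y, X -> Y
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000

    def sample(do_x=None):
        """Draw samples from the SCM; if do_x is given, sever U -> X (the do-operator)."""
        u = rng.binomial(1, 0.5, n)                   # hidden confounder
        x = u if do_x is None else np.full(n, do_x)   # X := U, unless intervened on
        y = rng.binomial(1, 0.2 + 0.3 * x + 0.4 * u)  # Y depends on both X and U
        return x, y

    # Observational: P(Y=1 | X=1) is inflated by the confounder U.
    x_obs, y_obs = sample()
    print("P(Y=1 | X=1)     ~", y_obs[x_obs == 1].mean())

    # Interventional: P(Y=1 | do(X=1)) reflects only the causal effect of X.
    x_do, y_do = sample(do_x=1)
    print("P(Y=1 | do(X=1)) ~", y_do.mean())

On this toy model the observational estimate is about 0.9 while the interventional one is about 0.7; that gap between seeing and doing is exactly what the do-calculus is designed to reason about.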

A critical property of Pearl’s system is that it is effectively computable. The article provides a lucid introduction to the ideas and is recommended to all AI workers.

Reviewer:  J. P. E. Hodgson Review #: CR146938 (2008-0196)
