Computing Reviews
Reasoning Web: causality, explanations and declarative knowledge : 18th International Summer School 2022, Berlin, Germany, September 27–30, 2022, tutorial lectures
Xiao G., Bertossi L., Springer International Publishing, Cham, Switzerland, 2023. 211 pp. Type: Book (9783031314131)
Date Reviewed: Oct 15 2024

Artificial intelligence (AI) is experiencing explosive growth in academia, industry, and society at large. This impressive growth is driven primarily by machine learning, and in particular deep neural networks, which have shown an unexpected ability to match and even surpass human decision-making in many domains. However, this impressive progress comes at a significant cost: these machine learning models are black boxes, that is, they are inscrutable in their operation. While this price is affordable in certain areas (for example, speech recognition, gaming, and so on), it is intolerable in high-stakes applications where a bad decision can cause financial loss, discrimination, or even death. In these domains, control over decisions is critical, and novel systems that have comparable performance to current machine learning models but are also able to explain the reasoning behind the decisions are of paramount importance.

The quest for explainability of machine learning models has led to the development of many and varied techniques for providing some sort of justification for a decision. However, providing an explanation is a delicate matter: an explanation that lacks logical rigor is only seemingly explanatory, lulling users into a false sense of trust that may prove more harmful than the direct use of black boxes. Explanations demand logical rigor, which calls for a revival of the logic-based methods that are pillars of AI (methods that the rapid expansion of black box models has somewhat overshadowed).

This book is about the use of well-founded, logic-based methods for explaining decisions made by machine learning models, as well as methods for representing and processing knowledge transparently. The three keywords in the subtitle are indicative of the book’s mission: “causality” is a cornerstone of the new field of explainable AI, whose goal is to endow intelligent systems with “explainability” by producing rigorous “explanations”; “declarative knowledge” is the foundation of transparent models that are intrinsically interpretable. The book collects six tutorial chapters covering the program of the 2022 edition of the Reasoning Web Summer School, whose broad theme was “Reasoning in Probabilistic Models and Machine Learning.”

The initial chapters of the book have a strong tutorial approach, with plenty of bibliographic references for in-depth study. The first chapter in particular delineates a framework where the main concepts related to explainability are defined. Key concepts like abduction, causality, attribution, and counterfactuals are gently introduced and then defined in a rigorous and coherent way. Simple examples guide the reader in uncovering the meaning of the definitions, their differences, and relations.

The second chapter, on the other hand, focuses on formal explainability in AI. This is a large chapter (a mini-book in itself) that, starting from preliminary notions of logic-based notation and calculus, takes the reader on a long journey through intermediate stages: the concept of formal explainability, the computation of explanations, the issue of computational complexity, interaction with the user (through explainability queries), probabilistic extensions that account for the cognitive limitations of users, explanations with surrogate models (that is, simpler models that replace, locally or globally, black box models), and finally open questions and future developments. This chapter, while adopting a tutorial approach, requires the reader to adopt a formal-logic mindset, and its content is best appreciated with some prior knowledge of computational complexity theory.
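To give a concrete flavor of the kind of formal explanation studied in this chapter, here is a minimal sketch, by brute-force enumeration, of a subset-minimal sufficient reason (an abductive explanation) for a decision. The toy loan classifier and its feature names are my own illustration, not taken from the book:

```python
from itertools import product

# Hypothetical toy classifier: approve a loan iff (income AND employed) OR collateral.
FEATURES = ["income", "employed", "collateral"]

def predict(inst):
    return (inst["income"] and inst["employed"]) or inst["collateral"]

def is_sufficient(fixed, inst):
    """True if fixing the features in `fixed` to their values in `inst`
    forces the prediction, no matter what the free features take."""
    free = [f for f in FEATURES if f not in fixed]
    target = predict(inst)
    for values in product([False, True], repeat=len(free)):
        trial = dict(inst)
        trial.update(zip(free, values))
        if predict(trial) != target:
            return False
    return True

def minimal_explanation(inst):
    """Greedy deletion yields a subset-minimal sufficient reason."""
    expl = list(FEATURES)
    for f in list(expl):
        rest = [g for g in expl if g != f]
        if is_sufficient(rest, inst):
            expl = rest
    return expl

inst = {"income": True, "employed": True, "collateral": False}
print(minimal_explanation(inst))  # income and employed alone force approval
```

The exhaustive check over free features is what makes the explanation formally sound, and it is also why the chapter's discussion of computational complexity matters: real classifiers need SAT- or MILP-based reasoners rather than enumeration.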

The third chapter emphasizes the importance of causal inference in explainable AI, as it enables counterfactual reasoning and contrastive explanations. Causality is highly relevant in domains such as healthcare, medicine, and the social sciences, among others, because it can confidently determine the effectiveness of a particular course of action. In machine learning, the classical methods for establishing and processing causal relations (mainly based on randomized controlled trials) clash with the fact that often only observational data are available. Yet working with causal relations on observational data is at least partially feasible, for example, by using Pearl’s graphical models when causal relations are known a priori (they can also be partially estimated from data, a topic not covered in the chapter). The chapter also covers Rubin’s potential outcome framework for drawing causal inferences from observational data, provided that certain statistical assumptions hold. The chapter ends with a discussion of how causal inference can aid in determining fairness and promoting explainability in machine learning models.
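As a hedged illustration of why observational data alone can mislead, and how stratifying on a known confounder (in the spirit of Pearl's backdoor adjustment) recovers the causal effect, consider this simulated sketch. The data-generating process and all numbers are my own assumptions, not from the book:

```python
import random

random.seed(0)

# Simulated world: confounder Z raises both treatment probability and outcome.
# The true average treatment effect of T on Y is +2.0 by construction.
def sample():
    z = random.random() < 0.5                    # confounder
    t = random.random() < (0.8 if z else 0.2)    # Z makes treatment more likely
    y = 2.0 * t + 3.0 * z + random.gauss(0, 0.1)
    return z, t, y

data = [sample() for _ in range(20000)]

def mean(xs):
    return sum(xs) / len(xs)

# Naive contrast E[Y|T=1] - E[Y|T=0] mixes the treatment effect with Z's effect.
naive = mean([y for z, t, y in data if t]) - mean([y for z, t, y in data if not t])

def adjusted():
    """Backdoor adjustment: contrast within each Z stratum, weight by P(Z)."""
    est = 0.0
    for zv in (False, True):
        stratum = [(t, y) for z, t, y in data if z == zv]
        pz = len(stratum) / len(data)
        y1 = mean([y for t, y in stratum if t])
        y0 = mean([y for t, y in stratum if not t])
        est += pz * (y1 - y0)
    return est

print(round(naive, 2), round(adjusted(), 2))  # naive is inflated; adjusted is near 2.0
```

The naive contrast overshoots because treated units disproportionately carry the confounder; the stratified estimate is the simplest instance of the adjustment formulas the chapter discusses.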

The remaining chapters focus on more specialized topics. The fourth chapter introduces an extension of the answer set programming (ASP) language, a form of declarative programming, with probability distributions over solutions (answer sets), allowing an intuitive and error-tolerant representation of problems that require both logical and probabilistic reasoning. Within the framework of declarative languages, the fifth chapter introduces Vadalog, an extension of the rule-based language Datalog that provides a language for complex logical reasoning over knowledge graphs. The chapter gently introduces the main concepts of Datalog and Vadalog in the context of economic and financial environments. It then delves into several extensions that make Vadalog work in complex workflows that may include machine learning, with a special focus on business applications. Finally, some business case studies are reported, covering corporate governance, media intelligence, supply chains, and finance. The final chapter, a bit shorter than expected, points to multimodality in knowledge graph inference and discovery.
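To hint at the flavor of Datalog-style reasoning that Vadalog builds on, here is a minimal sketch, my own illustration rather than Vadalog itself, of naive bottom-up fixpoint evaluation for the classic transitive-closure program `path(X,Y) :- edge(X,Y).` and `path(X,Y) :- edge(X,Z), path(Z,Y).`:

```python
# Facts: edge(a,b), edge(b,c), edge(c,d).
edges = {("a", "b"), ("b", "c"), ("c", "d")}

def transitive_closure(edges):
    """Naive fixpoint evaluation: reapply the rules until no new fact appears."""
    path = set(edges)  # path(X,Y) :- edge(X,Y).
    while True:
        # path(X,Y) :- edge(X,Z), path(Z,Y).
        new = {(x, y2) for (x, z) in edges for (z1, y2) in path if z == z1}
        if new <= path:
            return path  # fixpoint reached
        path |= new

print(sorted(transitive_closure(edges)))
```

Engines like Vadalog replace this exhaustive recomputation with incremental (semi-naive) evaluation and add the extensions the chapter describes, such as existential rules for knowledge graphs.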

Overall, the book is highly recommended for a variety of readers. Undergraduate and graduate students should find it useful, especially the initial chapters, while the later chapters are best suited for a specialized audience. As an additional benefit, a YouTube playlist is available, which includes video lectures on the topics covered by the book and more.

Reviewer: Corrado Mencar (Review #: CR147827)
Categories: Web (H.5.3); Artificial, Augmented, and Virtual Realities (H.5.1); Knowledge Acquisition (I.2.6); Web (I.7.1); Artificial Intelligence (I.2)
Reproduction in whole or in part without permission is prohibited. Copyright 1999-2024 ThinkLoud®