Computing Reviews

Never-ending learning
Mitchell T., Cohen W., Hruschka E., Talukdar P., Yang B., Betteridge J., Carlson A., Dalvi B., Gardner M., Kisiel B., Krishnamurthy J., Lao N., Mazaitis K., Mohamed T., Nakashole N., Platanios E., Ritter A., Samadi M., Settles B., Wang R., Wijaya D., Gupta A., Chen X., Saparov A., Greaves M., Welling J. Communications of the ACM 61(5): 103-115, 2018. Type: Article
Date Reviewed: 08/17/18

Human beings learn continuously through exposure to a wide variety of experiences and situations. Can today's computers, with their limited knowledge bases (KBs) and learning models, ever mimic real human intelligence? Mitchell et al. compellingly critique current machine learning efforts and offer significant insight into future artificial intelligence (AI) research.

The authors argue that a true understanding of human and machine learning requires computer programs that behave like humans, that is, learn from diverse experiences and reflect on those experiences to develop new learning strategies. They present the never-ending language learner (NELL) as a case study of this never-ending learning paradigm: a system that learns to read and can infer beliefs beyond those explicitly stated in its KB.

A never-ending learning agent is a system capable of learning many different types of knowledge. It uses “self-reflection to avoid plateaus in performance” and applies “previously learned knowledge to improve subsequent learning.” Formally, a never-ending learning agent must solve a set of learning tasks subject to coupling constraints that relate their solutions. In NELL, each learning task is specified by a performance task, a performance metric, and the type of experience it learns from. Thousands of distinct learning tasks are grouped into a small number of categories to support accurate inference, and NELL's learning problem also includes a collection of over one million coupling constraints connecting those tasks.
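As a rough illustration of this formalization (a sketch only, with hypothetical names, not the authors' implementation), a never-ending learning problem can be represented as a set of learning tasks, each pairing a performance task and metric with a type of experience, plus coupling constraints that tie the tasks' solutions together:

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class LearningTask:
        """One learning task: what is learned, how it is scored, and from what experience."""
        performance_task: str                  # e.g., classify noun phrases as instances of "city"
        experience_type: str                   # e.g., web text, existing KB beliefs
        metric: Callable[[object], float]      # scores the currently learned function

    @dataclass
    class CouplingConstraint:
        """A constraint relating the solutions of two or more learning tasks."""
        tasks: List[LearningTask]
        holds: Callable[..., bool]             # e.g., anything labeled "athlete" must also be "person"

    @dataclass
    class NeverEndingLearningProblem:
        """The overall problem: many tasks linked by many coupling constraints."""
        tasks: List[LearningTask]
        constraints: List[CouplingConstraint]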

NELL’s software architecture is organized around a KB that serves as a blackboard through which its many learning and inference modules communicate. NELL also includes algorithms for learning, for integrating proposed updates into the KB, and for self-reflection and self-evaluation.
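A minimal sketch of such a blackboard-style control loop follows, assuming hypothetical module and KB interfaces (the actual NELL modules and knowledge integrator are far richer): on each iteration, every module reads the shared KB and proposes candidate beliefs, and a simple integrator decides which proposals to commit.

    from typing import Dict, List, Protocol, Tuple

    # Hypothetical schema: a belief is (relation, arguments, confidence).
    Belief = Tuple[str, Tuple[str, ...], float]
    # The shared KB (the "blackboard") maps (relation, arguments) to a confidence score.
    KB = Dict[Tuple[str, Tuple[str, ...]], float]

    class Module(Protocol):
        """Any learning or inference module: reads the KB and proposes candidate beliefs."""
        def propose(self, kb: KB) -> List[Belief]: ...

    def never_ending_loop(kb: KB, modules: List[Module], iterations: int,
                          threshold: float = 0.8) -> None:
        """Blackboard control loop: modules propose beliefs; a toy knowledge
        integrator decides which proposals to commit back to the shared KB."""
        for _ in range(iterations):
            proposals: List[Belief] = []
            for module in modules:
                proposals.extend(module.propose(kb))      # modules never write to the KB directly
            for relation, args, confidence in proposals:  # toy integration: accept confident proposals
                if confidence >= threshold:
                    kb[(relation, args)] = confidence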

The authors investigate the extent to which NELL's KB improves over time. Experimental results show that “NELL is successfully learning to improve its reading competence over time, and is using this increasing competence to build an ever larger KB of beliefs about the world.” AI experts should read this paper. The authors offer a compelling approach to the difficult problem of creating machines that think like humans. So, will machines ever be able to think like humans? I call on the AI community to tackle this question.

Reviewer: Amos Olagunju | Review #: CR146210 (1811-0594)
