Computing Reviews
Never-ending learning
Mitchell T., Cohen W., Hruschka E., Talukdar P., Yang B., Betteridge J., Carlson A., Dalvi B., Gardner M., Kisiel B., Krishnamurthy J., Lao N., Mazaitis K., Mohamed T., Nakashole N., Platanios E., Ritter A., Samadi M., Settles B., Wang R., Wijaya D., Gupta A., Chen X., Saparov A., Greaves M., Welling J. Communications of the ACM 61(5): 103-115, 2018. Type: Article
Date Reviewed: Aug 17 2018

Human beings learn continuously through exposure to a wide variety of experiences and situations. Can current computers, with their limited knowledge bases (KBs) and learning models, ever mimic real human intelligence? Mitchell et al. compellingly critique current machine learning efforts and offer significant insight into future artificial intelligence (AI) research.

The authors contend that a true understanding of human and machine learning requires computer programs that behave like humans, that is, programs that learn from diverse experiences and reflect on those experiences to develop new learning strategies. They present the never-ending language learner (NELL), a case study of this never-ending learning paradigm. NELL enables a machine to reason beyond the facts explicitly stored in its KB.

A never-ending learning agent is a system that is capable of learning many different types of knowledge. It uses “self-reflection to avoid plateaus in performance” and applies “previously learned knowledge to improve subsequent learning.” Such an agent must solve a set of learning tasks whose solutions are coupled by constraints. In NELL, each learning task is modeled by a type of experience, a performance task, and a performance metric. Thousands of distinct learning tasks are classified into several categories to support accurate learning inferences. NELL also contains a collection of over one million coupling constraints that connect the learning tasks.
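To make this concrete, the (experience, performance task, metric) triple and a coupling constraint can be sketched roughly as follows. This is a purely illustrative reading of the review's description, not NELL's actual code; all names and the mutual-exclusion example are assumptions.

```python
# Hypothetical sketch of NELL-style learning tasks and one coupling
# constraint; names are illustrative, not taken from the NELL system.
from dataclasses import dataclass

@dataclass(frozen=True)
class LearningTask:
    experience: str        # type of experience, e.g. "web text"
    performance_task: str  # e.g. "classify noun phrases as City"
    metric: str            # e.g. "precision over sampled beliefs"

def mutual_exclusion(pred_a: bool, pred_b: bool) -> bool:
    """One kind of coupling constraint: two disjoint categories
    (e.g. City and Person) may not both label the same noun phrase."""
    return not (pred_a and pred_b)

city_task = LearningTask("web text", "classify noun phrase as City", "precision")
person_task = LearningTask("web text", "classify noun phrase as Person", "precision")

# "Pittsburgh" predicted City but not Person: constraint satisfied.
print(mutual_exclusion(True, False))  # True
# Both disjoint categories firing on one phrase violates the constraint.
print(mutual_exclusion(True, True))   # False
```

Coupling constraints of this kind let the predictions of one task supply training signal, or error signal, for another.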

NELL’s software architecture centers on a KB that serves as a blackboard, facilitating communication among its many learning and inference modules. NELL contains algorithms for learning, for integrating proposed updates into the KB, and for self-reflection and self-evaluation.
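The blackboard pattern described above can be sketched minimally: modules post candidate beliefs with confidences, and an integration step promotes well-supported candidates. This is a generic illustration under assumed names and a made-up threshold, not NELL's actual interfaces or integration logic.

```python
# Minimal blackboard-style KB sketch: learning/inference modules
# propose candidate beliefs with confidences; integrate() promotes
# candidates whose best evidence clears a threshold. Illustrative only.
from collections import defaultdict

class Blackboard:
    def __init__(self, threshold: float = 0.9):
        self.threshold = threshold
        self.candidates = defaultdict(list)  # belief -> list of confidences
        self.beliefs = set()                 # promoted (accepted) beliefs

    def propose(self, belief, confidence: float) -> None:
        """A module posts a candidate belief to the blackboard."""
        self.candidates[belief].append(confidence)

    def integrate(self) -> None:
        """Promote candidates whose strongest evidence is convincing."""
        for belief, confs in self.candidates.items():
            if max(confs) >= self.threshold:
                self.beliefs.add(belief)

kb = Blackboard()
kb.propose(("Pittsburgh", "isa", "City"), 0.95)   # e.g. a text-pattern module
kb.propose(("Pittsburgh", "isa", "Fruit"), 0.40)  # weak, conflicting proposal
kb.integrate()
print(("Pittsburgh", "isa", "City") in kb.beliefs)   # True
print(("Pittsburgh", "isa", "Fruit") in kb.beliefs)  # False
```

The design point is that modules never call each other directly; they communicate only through the shared KB, which is what lets new learning and inference modules be added independently.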

The authors investigate the extent to which the NELL KB improves over time. Experimental results show that “NELL is successfully learning to improve its reading competence over time, and is using this increasing competence to build an ever larger KB of beliefs about the world.” AI experts should read this paper. The authors offer a significant step toward solving the difficult problem of creating machines that think like humans. So, will machines ever be able to think like humans? I call on the AI community to tackle this question.

Reviewer: Amos Olagunju. Review #: CR146210 (1811-0594)
Machine Translation (I.2.7 ... )