Computing Reviews
Efficient inference and learning in a large knowledge base
Wang W., Mazaitis K., Lao N., Cohen W. Machine Learning 100(1): 101-126, 2015. Type: Article
Date Reviewed: Nov 11 2015

In the taxonomy of first-order probabilistic languages (FOPLs) described by Milch and Russell [1], “there are two well-known FOPLs whose possible outcomes are not relational structures”: (1) stochastic logic programs (SLPs) and (2) the integrated Bayesian agent language (IBAL). FOPLs combine the “treatment of uncertainty with the ability to describe large models formally and concisely” [1].

In this paper, the authors present a first-order probabilistic language called ProPPR (programming with personalized PageRank), an extension of SLPs, the “well-known FOPLs whose possible outcomes are not relational structures” [1]. According to the experiments they conducted, learning for ProPPR is faster than learning for Markov logic networks (MLNs), regardless of the size of the database in use.

The contribution of this paper is an approximate inference and grounding scheme that reduces the time needed both to prove a query against a large knowledge base and to build the proof graph used for weight learning. In this way, the authors address the scalability problem shared by many probabilistic logics. Unlike an MLN, ProPPR can perform approximate inference and learning in time independent of the size of the underlying database. While the cost of grounding a query in an MLN grows as O(n^k), where n is the number of database constants and k is the maximal arity of a predicate, the grounding cost of ProPPR, an extension of SLPs, is bounded by O(1/αε), where α is a reset parameter and ε is the worst-case approximation error. O(n^k) increases rapidly for large n and k, and blows up as k grows, whereas O(1/αε) does not depend on the database at all and grows only as α and ε are made smaller. Consequently, ProPPR offers a dramatic performance improvement over MLNs.
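To make this contrast concrete, the short Python sketch below evaluates the two bounds for a few illustrative values; the specific choices of n, k, α, and ε are hypothetical examples of my own, not figures taken from the paper.

# Illustrative comparison of the grounding-cost bounds discussed above:
# MLN grounding grows as O(n^k), while ProPPR's approximate grounding is
# bounded by O(1/(alpha * epsilon)), independent of the database size n.

def mln_grounding_cost(n: int, k: int) -> int:
    """Worst-case MLN grounding size: n constants, maximal predicate arity k."""
    return n ** k

def proppr_grounding_cost(alpha: float, epsilon: float) -> float:
    """Worst-case ProPPR proof-graph size under the O(1/(alpha*epsilon)) bound."""
    return 1.0 / (alpha * epsilon)

# Hypothetical parameter values, chosen only to show the scaling behavior.
for n in (10**3, 10**5, 10**7):                 # number of database constants
    print(f"MLN     n={n:>12,}  k=3 -> {mln_grounding_cost(n, 3):,} ground atoms")

alpha, epsilon = 0.1, 1e-4                      # reset parameter, approximation error
print(f"ProPPR  alpha={alpha}  eps={epsilon} -> "
      f"{proppr_grounding_cost(alpha, epsilon):,.0f} graph nodes, for any n")

Even with a modest arity (k = 3 here), the MLN bound grows with every additional constant in the database, while the ProPPR bound stays fixed once α and ε are chosen.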

This is one of the best-written papers I have read; I cannot find any weaknesses. The authors provide a judicious tutorial as background, which helps readers understand the proposed solution. I recommend this paper to students and researchers working on machine learning, natural language processing, artificial intelligence, and algorithms, and more generally on algorithmic or computational complexity.

Reviewer:  Thierry Edoh Review #: CR143935 (1602-0144)
1) Milch, B.; Russell, S. First-order probabilistic languages: into the unknown. In 16th International Conference on Inductive Logic Programming (ILP 2006), Santiago de Compostela, Spain, August 24-27, 2006, revised selected papers, Springer, 2006, http://link.springer.com/chapter/10.1007%2F978-3-540-73847-3_3.
Categories: Learning (I.2.6); Algorithms (I.1.2); Natural Language Processing (I.2.7); Artificial Intelligence (I.2)
