Computing Reviews
Conditioning in probabilistic programming
Olmedo F., Gretz F., Jansen N., Kaminski B., Katoen J., McIver A. ACM Transactions on Programming Languages and Systems 40(1): 1-50, 2018. Type: Article
Date Reviewed: Jun 13 2018

Machine learning, possibly contrary to popular belief, is not just about endless variations of neural networks. There is also a thriving subculture of probabilistic programming based on Bayesian principles. A major advantage of the latter approach is that most models provide some level of explanation of what was learned, whereas extracting such information from a neural network remains (very) active research.

In probabilistic programming, conditioning--appropriately called “a main feature” here--plays a central role. It is therefore important to understand it well, and that is the domain of inquiry of the paper at hand. The authors choose to work over an imperative language with discrete probabilistic choice and nondeterminism. The imperative setting brings in issues of nontermination. One interesting discussion point is the semantic difference between nontermination and nondeterminism: some semantics do not distinguish between them, which the authors argue is problematic. Instead, they give a relative semantics that computes the “probability of success + termination” over the “probability of termination.” Once this design choice and the choice of Dijkstra-style weakest precondition semantics are made, much of the rest follows naturally. Of course, many details need to line up just right for everything to work, and the paper spells them all out.
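The ratio view of conditioning can be seen concretely on a tiny discrete program. The following sketch is illustrative only (the program, the `run` helper, and the enumeration are my own, not the paper's formalism): it conditions two fair coin flips on observing at least one head, and computes the conditional expected number of heads as a quotient of two unconditional quantities, mirroring the "probability of success + termination" over "probability of termination" construction.

```python
from fractions import Fraction

# Tiny discrete probabilistic program, enumerated exhaustively:
#   x := bernoulli(1/2); y := bernoulli(1/2); observe(x + y >= 1)
# Conditioning rescales by the probability that the observation
# succeeds, which is the "ratio" reading described above.

def run():
    half = Fraction(1, 2)
    outcomes = [(half * half, x, y) for x in (0, 1) for y in (0, 1)]
    # Numerator: expected value of (x + y), restricted to runs
    # where the observation holds.
    num = sum(p * (x + y) for p, x, y in outcomes if x + y >= 1)
    # Denominator: probability that the observation holds.
    den = sum(p for p, x, y in outcomes if x + y >= 1)
    return num / den

print(run())  # 4/3
```

Three of the four equally likely runs survive the observation, carrying 1, 1, and 2 heads respectively, so the conditional expectation is (1/4 + 1/4 + 2/4) / (3/4) = 4/3. Nontermination is absent from this toy example, which is exactly why the paper's careful treatment of the denominator matters in general.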

While parts of the paper are quite technical, the authors take great pains to also motivate and illustrate everything they do. They provide many well-chosen examples to illustrate their points. When appropriate, figures are also used to give a more visual explanation. The examples given are quite concrete, accompanied by fully explicit computations. This really helps explain all the steps of what could otherwise seem like arbitrary rule sets.

Perhaps my main disappointment is that Narayanan and Shan’s work on disintegration in Hakaru [1] is not mentioned at all. This may be because that work is more concerned with the continuous case (from their perspective, the discrete case is too easy).

Anyone interested in the semantics of probabilistic programming languages should read this paper. It is a model of clarity. While the various choices taken here may not necessarily stand the test of time, they all make sense and are well justified.

Reviewer: Jacques Carette. Review #: CR146078 (1808-0434)
1) Narayanan, P.; Shan, C.-C. Symbolic conditioning of arrays in probabilistic programs. Proceedings of the ACM on Programming Languages 1 (2017), Article No. 11, http://doi.acm.org/10.1145/3110255.
Categories: Probabilistic Computation (F.1.2); Semantics of Programming Languages (F.3.2)