The volume gathers papers in an exciting, novel area of research whose roots go back to the beginnings of artificial intelligence (AI), with Ray Solomonoff's work in the 1960s. However, AI research soon set aside some of the central ideas that algorithmic probability and learning theory bring, because of the associated uncomputability results. This has not stopped progress in the area, in both theory and applications (see, for example, [1]); one instance is Peter Bloem et al.'s contribution (the last paper in the volume), which provides means other than lossless compression algorithms to estimate Kolmogorov complexity.
The field is reaching a certain maturity, which allows for further applications. So while the book contains a good deal of theory, purely theoretical papers represent only a fraction of the contributions; most are related in one way or another to actual applications and to learning, particularly in areas such as clustering, classification, and data analysis. The papers are organized in topical sections: “Inductive Inference,” “Exact Learning from Queries,” “Reinforcement Learning,” “Online Learning and Learning with Bandit Information,” “Statistical Learning Theory,” and “Privacy, Clustering, MDL, and Kolmogorov Complexity.”
My own work is deeply related to the power of algorithmic probability, so expect this review to be biased in favor of the field. Nevertheless, I do think this is, and will become, an increasingly important area of research in the years to come, as we learn that this sometimes obscure learning theory answers many long-standing questions about inference, as well as questions in philosophy, probability theory, and human and artificial cognition.