The book consists of eight chapters and one appendix; each chapter includes exercises.
Chapter 1 contains fundamental definitions: concept class, representation class, hypothesis class, and learnability in the probably approximately correct (PAC) model.
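For orientation, the PAC criterion these definitions build toward can be sketched as follows (our notation, not necessarily the book's): a class $C$ is PAC-learnable if some algorithm, for every target concept $c \in C$, every distribution $D$, and every accuracy and confidence parameters $\epsilon, \delta \in (0,1)$, outputs with probability at least $1 - \delta$ a hypothesis $h$ satisfying

```latex
\Pr_{x \sim D}\bigl[\, h(x) \neq c(x) \,\bigr] \;\le\; \epsilon ,
```

using a number of labeled examples and an amount of running time polynomial in $1/\epsilon$ and $1/\delta$.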
Chapter 2 presents fundamental results, such as the Occam's Razor principle: a concept class is PAC-learnable if one can find, in polynomial time, a short hypothesis consistent with the observed data.
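A standard quantitative form of this principle, stated here for a finite hypothesis class $H$ (our formulation, not a quotation from the book), is that

```latex
m \;\ge\; \frac{1}{\epsilon}\left(\ln |H| + \ln \frac{1}{\delta}\right)
```

examples suffice: with probability at least $1 - \delta$, every $h \in H$ consistent with all $m$ examples has error at most $\epsilon$. Consistency within a small hypothesis class thus implies generalization, which is the formal content of "short hypotheses are good."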
Chapter 3 addresses an important question: can infinite concept classes be learned from finitely many examples? The Vapnik-Chervonenkis (VC) dimension plays a crucial role here, as it does in the neural-network literature.
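The VC dimension is the size of the largest point set a concept class can "shatter," i.e., label in every possible way. As a small illustration (our sketch, not code from the book), a brute-force check shows that threshold concepts on the real line shatter any single point but no pair, so their VC dimension is 1:

```python
# Brute-force shattering check for threshold concepts h_t(x) = 1 iff x >= t.
# This is an illustrative sketch; the names and the grid of thresholds are ours.

def shatters(hypotheses, points):
    """Return True if the hypotheses realize every possible labeling of the points."""
    realized = {tuple(h(x) for x in points) for h in hypotheses}
    return len(realized) == 2 ** len(points)

# A finite grid of thresholds is enough to label the small point sets below.
thresholds = [t / 2 for t in range(-4, 10)]
hyps = [lambda x, t=t: int(x >= t) for t in thresholds]

print(shatters(hyps, [1.0]))        # True: both labelings of one point occur
print(shatters(hyps, [1.0, 2.0]))   # False: the labeling (1, 0) is never realized
```

Because no threshold can label 1.0 positive while labeling 2.0 negative, no two-point set is shattered, confirming the VC dimension of 1.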
Chapter 4 treats the relationship between weak and strong learning.
Chapter 5 deals with learning in the presence of noise.
Chapter 6, on inherent unpredictability, studies whether there exist concept classes with polynomial VC dimension for which there is no efficient PAC learning algorithm.
Chapter 7 discusses reducibility in PAC learning.
Finally, Chapter 8 deals with learning finite automata by experimentation.
The appendix introduces basic concepts used in the text, such as the union bound, Markov’s inequality, and Chernoff bounds. With this background, the book is almost self-contained.
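For concreteness, these three tools, each in one common form (our statements; various texts phrase them slightly differently), are:

```latex
\Pr\Bigl[\bigcup_i A_i\Bigr] \le \sum_i \Pr[A_i], \qquad
\Pr[X \ge a] \le \frac{\mathbb{E}[X]}{a} \quad (X \ge 0,\ a > 0),
```

and, for the average of $m$ independent Bernoulli($p$) variables,

```latex
\Pr\Bigl[\,\Bigl|\tfrac{1}{m}\textstyle\sum_{i=1}^{m} X_i - p\Bigr| \ge \gamma\,\Bigr] \le 2e^{-2m\gamma^2}.
```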
The book’s writing style is clear and pleasant, reflecting the current trend toward intuitive, philosophical presentations of complex technical matters. Although readers should not expect to find plug-and-play algorithms, the book is recommended to everyone as a solid introduction to the theoretical aspects of computational learning.