The relationships and equivalences between formal language categories and the different models of automata are among the deepest and most beautiful results in theoretical computer science. Given that the construction of programming language compilers and interpreters is one of the foremost applications of formal languages and their grammars (context-free grammars, to be more specific), their combination in a single volume comes naturally. This is exactly what Reghizzi and his colleagues from the Politecnico di Milano have done in their textbook for graduate and senior undergraduate computer science students.

Their book consists of four substantial chapters that follow an unusually short introduction, free of the typical fluff that plagues too many introductory chapters, something their readers will surely appreciate.

Chapter 1 focuses on the syntax of languages. Its 90 pages discuss the aspects of formal language theory that are most relevant for the construction of compilers, namely regular expressions and context-free grammars. Apart from the common content you can find in any of the many good textbooks on automata theory, students will also find an interesting section that includes a short catalog of common language ambiguities and proposes ways to resolve them in order to obtain unambiguous grammars suitable for deterministic parsing. Despite their clear and understandable focus on deterministic languages, the authors conclude the chapter with a short section on the Chomsky hierarchy, the standard classification of formal grammars, language families, and computation models.
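To make the ambiguity discussion concrete, here is a toy illustration of my own, not taken from the book: a brute-force Python count of the parse trees that the classic ambiguous expression grammar E → E + E | E * E | n assigns to a token string. Any string with more than one tree witnesses the ambiguity that precedence and associativity rules are meant to remove.

```python
from functools import lru_cache

# Ambiguous expression grammar: E -> E '+' E | E '*' E | 'n'.
# Counting the distinct parse trees for a token string exposes the
# ambiguity: any count above 1 means more than one derivation exists.
def count_parse_trees(tokens):
    toks = tuple(tokens)

    @lru_cache(maxsize=None)
    def count(i, j):  # number of E-trees deriving toks[i:j]
        total = 1 if toks[i:j] == ('n',) else 0
        for k in range(i + 1, j - 1):  # try each binary-operator split
            if toks[k] in ('+', '*'):
                total += count(i, k) * count(k + 1, j)
        return total

    return count(0, len(toks))

print(count_parse_trees(['n']))                      # 1
print(count_parse_trees(['n', '+', 'n', '*', 'n']))  # 2: (n+n)*n and n+(n*n)
```

A disambiguated grammar with precedence levels (term, factor) would return 1 for every string the grammar accepts.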

In chapter 2, the authors delve into the details of regular language recognizers. Shorter than the previous chapter, it introduces deterministic and nondeterministic finite automata. In just 50 pages written with great care and rich in detail, readers will discover the correspondence between linear grammars and finite automata, learn how to convert an automaton into a regular expression using the Brzozowski and McCluskey algorithm, and learn how to perform the inverse conversion using Thompson’s structural method or the Berry and Sethi algorithm.
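As a rough sketch of what Thompson's structural method does (my toy rendition, not the book's presentation), the following Python code builds an epsilon-NFA compositionally, one fragment per regular-expression operator, and then simulates it with epsilon-closures:

```python
import itertools

# States are fresh integers; an NFA fragment is (start, accept, delta),
# where delta maps a state to a list of (label, state) moves and a label
# of None denotes an epsilon move.
fresh = itertools.count()

def lit(c):                       # single-character fragment
    s, t = next(fresh), next(fresh)
    return s, t, {s: [(c, t)], t: []}

def concat(a, b):                 # a . b
    s1, t1, d1 = a; s2, t2, d2 = b
    d = {**d1, **d2}
    d[t1] = d[t1] + [(None, s2)]  # epsilon-link the two fragments
    return s1, t2, d

def union(a, b):                  # a | b
    s1, t1, d1 = a; s2, t2, d2 = b
    s, t = next(fresh), next(fresh)
    d = {**d1, **d2, s: [(None, s1), (None, s2)], t: []}
    d[t1] = d[t1] + [(None, t)]
    d[t2] = d[t2] + [(None, t)]
    return s, t, d

def star(a):                      # a*
    s1, t1, d1 = a
    s, t = next(fresh), next(fresh)
    d = {**d1, s: [(None, s1), (None, t)], t: []}
    d[t1] = d[t1] + [(None, s1), (None, t)]
    return s, t, d

def eps_closure(states, d):
    stack, seen = list(states), set(states)
    while stack:
        q = stack.pop()
        for label, r in d[q]:
            if label is None and r not in seen:
                seen.add(r); stack.append(r)
    return seen

def accepts(nfa, word):
    s, t, d = nfa
    cur = eps_closure({s}, d)
    for c in word:
        cur = eps_closure({r for q in cur for label, r in d[q] if label == c}, d)
    return t in cur

# (a|b)*a : words over {a, b} ending in 'a'
nfa = concat(star(union(lit('a'), lit('b'))), lit('a'))
print(accepts(nfa, 'bba'))  # True
print(accepts(nfa, 'ab'))   # False
```

Determinizing the result with the subset construction and minimizing it would complete the textbook pipeline from regular expression to minimal DFA.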

The next chapter is the core of the book: 150 pages on parsing context-free grammars, mainly for deterministic languages so that parsing can be performed in linear time. The chapter studies pushdown automata from a formal point of view before delving into the details of deterministic parsing algorithms. These algorithms are presented for context-free grammars in extended Backus-Naur form (EBNF), a distinctive feature of this textbook with respect to other compiler textbooks. Unfortunately, this choice might not be the most desirable from a pedagogical point of view, since it makes the content harder to understand for those without a suitable background (and those with such a background might find too much of the previous material superfluous or even redundant).

Apart from the top-down and bottom-up algorithms for parsing deterministic context-free grammars in EBNF, that is, ELL and ELR parsers, the authors also include a description of Earley’s parsing algorithm for arbitrary context-free grammars, which runs in cubic time in the general case, in quadratic time for unambiguous grammars, and in linear time for most LR(k) grammars. They also include a section describing a parallel parsing algorithm for Floyd’s operator-precedence grammars, based on their own research results. Finally, they conclude their discussion of parsing with some notes on managing syntactic errors and incremental parsing, two aspects that should be considered from the start by those building compilers and parser generators.
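For readers curious about Earley's algorithm, here is a compact recognizer sketch (my simplification, not the book's formulation) over the deliberately ambiguous toy grammar E → E + E | n; the chart of predict/scan/complete items indexed by input position is what yields the cubic worst case:

```python
# Grammar: nonterminal -> list of alternatives, each a tuple of symbols.
GRAMMAR = {
    'E': [('E', '+', 'E'), ('n',)],
}

def earley(tokens, start='E'):
    n = len(tokens)
    # chart[i] holds items (head, body, dot, origin): head -> body with
    # the dot after `dot` symbols, started at input position `origin`.
    chart = [set() for _ in range(n + 1)]
    for body in GRAMMAR[start]:
        chart[0].add((start, body, 0, 0))
    for i in range(n + 1):
        work = list(chart[i])
        while work:
            head, body, dot, origin = work.pop()
            if dot < len(body):
                sym = body[dot]
                if sym in GRAMMAR:                    # predict
                    for b in GRAMMAR[sym]:
                        new = (sym, b, 0, i)
                        if new not in chart[i]:
                            chart[i].add(new); work.append(new)
                elif i < n and tokens[i] == sym:      # scan
                    chart[i + 1].add((head, body, dot + 1, origin))
            else:                                     # complete
                # (no epsilon rules here, so snapshotting chart[origin]
                # avoids the classic completer pitfall)
                for h, b, d, o in list(chart[origin]):
                    if d < len(b) and b[d] == head:
                        new = (h, b, d + 1, o)
                        if new not in chart[i]:
                            chart[i].add(new); work.append(new)
    return any(h == start and d == len(b) and o == 0
               for h, b, d, o in chart[n])

print(earley(['n', '+', 'n']))  # True
print(earley(['n', '+']))       # False
```

A full parser would additionally thread back-pointers through the chart to recover the (possibly many) parse trees.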

Chapter 4, “Translation Semantics and Static Analysis,” contains the material needed to complete the front end of a working compiler that was not discussed in the previous three chapters. Its first three quarters, focused on translation, discuss syntactic translation by finite transducers for regular languages and pushdown transducers for context-free grammars, as well as attribute grammars, the method of choice for compiler designers.

Synthesized and inherited attributes are referred to as left and right attributes in attribute grammars. Performing semantic checks, code generation, and semantics-directed parsing are briefly described as the archetypal applications of attribute grammars. The final section of this chapter turns the reader’s attention to static program analysis. Without intending to be exhaustive, a mere score of pages introduce data flow equations and illustrate how they are used to analyze variable liveness and reaching definitions. The former eases register allocation by detecting “interferences” and allows the identification of useless definitions (useful for optimization and for warning of potential programming mistakes), while the latter is also used in program optimization (for example, constant propagation) and in the identification of potential problems (that is, the use of potentially uninitialized variables).
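The data flow equations for liveness can be sketched in a few lines of Python (a hypothetical four-instruction program of my own, not an example from the book): live_in = use ∪ (live_out − def), with live_out gathered from the successors and the whole system iterated to a fixed point.

```python
# Tiny straight-line program; each instruction records the variables it
# uses, the variable it defines, and its successor instructions.
PROG = {
    0: {'use': set(),      'def': {'a'}, 'succ': [1]},   # a = 1
    1: {'use': set(),      'def': {'b'}, 'succ': [2]},   # b = 2
    2: {'use': {'a', 'b'}, 'def': {'c'}, 'succ': [3]},   # c = a + b
    3: {'use': {'c'},      'def': set(), 'succ': []},    # return c
}

def liveness(prog):
    live_in = {p: set() for p in prog}
    live_out = {p: set() for p in prog}
    changed = True
    while changed:                        # iterate to a fixed point
        changed = False
        for p, node in prog.items():
            out = set()
            for s in node['succ']:        # live_out = union of successors' live_in
                out |= live_in[s]
            inn = node['use'] | (out - node['def'])
            if inn != live_in[p] or out != live_out[p]:
                live_in[p], live_out[p] = inn, out
                changed = True
    return live_in, live_out

live_in, live_out = liveness(PROG)
print(live_out[1])  # {'a', 'b'}: both live after b's definition, so they interfere
```

Two variables that are live at the same point interfere and cannot share a register, which is precisely the information a register allocator needs.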

There is an abundance of excellent textbooks on both automata theory, such as [1] and [2] (although some might still prefer the original edition [3]), and compilers, including the well-known dragon [4], ark [5], and whale [6] textbooks. Hence, it is difficult for newcomers to gain a significant share of the textbook market in two of the most established subfields of computer science. However, this self-contained textbook might still be a good alternative when tight schedule constraints in squeezed computer science curricula force students to become acquainted with both automata theory and compilers in a single-semester course.