What should be in a first course on the theory of computation? For 50 years, there has been only one answer: formal grammars and their corresponding automata. Hopcroft and Ullman’s book has been the standard [1], but there are dozens if not hundreds of other texts that try to cover the same material in more student-friendly terms, for example, by using nicer graphics and simpler language, giving more examples and exercises, and suppressing the more complicated proofs. This material was considered essential because it led to the great victory of theory: reducing the construction of compilers to an algorithmic process. (In fact, Knuth’s multivolume work was originally projected to be a book on how to construct a compiler [2].)
There is an alternate (and even older) view that computation theory is about what can and cannot be computed. Martin Davis’s book is a classic example [3]. In a more modern view, theory should also include complexity, that is, not just whether something is computable but what resources are necessary and sufficient to compute it.
The book under review takes the complexity view and demonstrates that a very reasonable undergraduate text can be developed using this perspective.
The book is divided into three parts: computability, complexity, and history. The author takes a practical approach to computability by starting with a programming language (Python) and using this language as his first model of computation. Several things are shown to be computable by giving and explaining Python programs (a sketch in this spirit appears below). These and other programs are available for download from the author’s website.
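To give a flavor of this approach, here is a minimal sketch of my own, not a program taken from the book: a decision problem is computable simply because an ordinary Python program decides it.

```python
# Illustrative sketch (my own, not from the book): a decision problem
# is computable because a short Python program decides it.
def contains_gaga(in_string):
    """Return 'yes' if the input contains the substring 'GAGA', else 'no'."""
    return 'yes' if 'GAGA' in in_string else 'no'

print(contains_gaga('CTGAGAC'))  # yes
print(contains_gaga('TTTT'))     # no
```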
MacCormick then says that something is “computable” exactly when it can be computed by a program with unbounded memory. This leads to a fairly straightforward proof (sketched below) that the halting problem for programs cannot be computed by a program, and hence that some things are not computable. Turing machines are introduced, as well as the idea of universal machines. (Current students are not surprised by universality because they already believe that computers can be programmed to compute anything, and many have at least heard of virtual machines.) Reductions are then used both to show that certain things are computable and to show that others are not. The author introduces nondeterminism and argues that it is a reasonable model of current computational practice. Of course, he shows that nondeterminism does not change what is computable. The final chapter of this part takes up finite automata. While nondeterminism does not change what is finite-state computable, it can make the equivalent deterministic machine exponentially larger than the nondeterministic one. The equivalence of regular expressions and finite automata is sketched. Finally, finite automata are shown to be incapable of computing certain things that programs or Turing machines can compute. (Although not part of the book, a chapter on context-free languages is available on the author’s website.)
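The proof alluded to above can be rendered in a few lines of Python; what follows is my own sketch of the standard diagonal argument, with a hypothetical function halts standing in for the impossible halting decider.

```python
# Sketch of the diagonalization behind the halting problem (illustration only).
# Suppose, for contradiction, that halts(prog, inp) always terminates and
# returns True exactly when program `prog` halts on input `inp`.

def halts(prog, inp):
    """Hypothetical halting decider; no correct implementation can exist."""
    raise NotImplementedError

def paradox(prog):
    if halts(prog, prog):  # if prog halts when run on its own source...
        while True:        # ...then loop forever
            pass
    else:
        return             # ...otherwise halt immediately

# Run paradox on its own source code: if paradox(paradox) halts, then
# halts() reported that it halts, so it loops forever; if it loops, then
# halts() reported that it loops, so it halts. Either outcome contradicts
# the assumed correctness of halts(), so no such program exists.
```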
Part 2 focuses on the running time of programs. Major ideas include big O notation and complexity classes. The possibility of quantum computation is mentioned. The classes Poly and Expo are defined, and Poly is shown to be a proper subset of Expo. It is argued that these and some other classes are the same for all (reasonable) models of computation, although running times for individual programs are model-dependent. Eventually, the author defines NP and NP-completeness and gives examples of using mapping reductions to show that certain problems are NP-complete (the flavor of such a reduction is sketched below). This part finishes with a discussion of the practical implications of the P versus NP question.
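As an illustration of the kind of mapping reduction involved, here is a standard textbook example of my own choosing, not one taken from the book: a polynomial-time reduction from HAMILTONIAN CYCLE to the decision version of the traveling salesperson problem.

```python
# A standard polynomial-time mapping reduction (illustration, not from the
# book): HAMILTONIAN CYCLE reduces to the decision version of TSP.
def ham_cycle_to_tsp(n, edges):
    """Map a graph on vertices 0..n-1 to a TSP instance (weights, budget).

    Edges of the graph get weight 1 and non-edges weight 2, so the graph
    has a Hamiltonian cycle iff the complete weighted graph has a tour of
    total weight at most n.
    """
    edge_set = {frozenset(e) for e in edges}
    weights = {frozenset((u, v)): (1 if frozenset((u, v)) in edge_set else 2)
               for u in range(n) for v in range(u + 1, n)}
    return weights, n  # the TSP question: is there a tour of weight <= n?

# The 4-cycle 0-1-2-3-0 has a Hamiltonian cycle, so the resulting TSP
# instance has a tour of weight exactly 4 (the budget).
weights, budget = ham_cycle_to_tsp(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
print(budget, weights[frozenset((0, 1))], weights[frozenset((0, 2))])  # 4 1 2
```

Because the map runs in polynomial time and preserves yes/no answers, the NP-hardness of HAMILTONIAN CYCLE carries over to TSP; this is the general pattern such reductions follow.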
The third part covers some history, including excerpts from the classic papers of Turing and Karp, and explanations of the meaning of these passages. There is also a discussion of Gödel’s results on the undecidability and incompleteness of arithmetic.
My quibbles: (1) the use of nonstandard definitions leads to complications in maintaining consistency; (2) the book avoids recursion and induction, which I believe are central to computation; and (3) the author assumes no course in discrete mathematics, although some background in symbol manipulation and some simple proof strategies seem to me to be a necessary prerequisite for a theory of computation course.
Overall, if you are planning to teach a theory course with few prerequisites, you should consider What Can Be Computed? as a possible textbook.