Teaching the theory of computation and complexity is tough. It is especially challenging for those ambitious teachers who wish to provide more than just the basic facts that programmers ought to know. In fact, this basic knowledge amounts to quite a small set of statements, such as: “there are undecidable problems,” “some algorithms are efficient--they use polynomial time,” “some algorithms are inefficient--they use exponential time,” and “there is a group of important problems for which efficient algorithms may not exist.” To some extent, these facts can be explained easily at a general level in a single lecture, and in practice they will be understood and remembered only at that level.
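To give a concrete sense of the gap behind the efficiency statements above, here is a minimal Python sketch (my own illustration, not drawn from any particular textbook) comparing a polynomial step count with an exponential one:

```python
# Rough illustration of why exponential-time algorithms are called
# inefficient: compare a polynomial step count (n**3) with an
# exponential one (2**n) as the input size n grows.

def steps_polynomial(n):
    return n ** 3   # e.g. a cubic-time algorithm

def steps_exponential(n):
    return 2 ** n   # e.g. brute-force search over all subsets

for n in (10, 30, 60):
    print(n, steps_polynomial(n), steps_exponential(n))
# At n = 10 the two counts are comparable (1000 vs 1024);
# at n = 60 the exponential count exceeds 10**18 while the
# polynomial count is only 216000.
```

Even at modest input sizes the exponential count becomes astronomically larger, which is the whole substance of the "efficient versus inefficient" distinction.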
So, how can ambitious teachers fill a semester-long course? I think there are two avenues for dealing with this problem. On the one hand, they can present deeper justification for the basic facts, showing the related mathematics in action (that is, the relevant proofs and formalized reasoning). On the other hand, they can show how close these facts are to the everyday life of a programmer or computer scientist. I am not sure whether it is possible to combine the two approaches perfectly. Certainly, there are notable examples of classical books on the topic that follow the first path [1,2]. It is difficult, however, to follow the second avenue without neglecting the first.
MacCormick manages to balance the two approaches while emphasizing the second, and as a result What Can Be Computed? is easier to read and understand. This approach does come at a price: the book seems to be aimed at undergraduates and introduces only the basic concepts of computability and complexity. Readers will not find more advanced (even classical) results on space complexity or on the structure of the various problem classes (such as the arithmetical or polynomial hierarchies). This is not a flaw, but a limitation that makes the book useful as an introductory first look at the field. Additionally, as a communications engineer focusing on network design, I see another advantage: for nonexperts, the book presents just the most important issues related to the difficulty of solving various problems, without dwelling on a formalism that is not natural to them. Authors of books on computational complexity often seem to forget that decision problems on formal languages are not necessarily easy to understand for a large group of specialists--individuals who deal with computer algorithms but have not had a full education in formal languages, or who are interested in function and optimization problems (as is my case). My impression is that MacCormick's book will be relatively easy for such readers, because formalism is kept to a minimum.
The book is divided into three parts: computability (undecidability, formal languages, Turing machines, Turing reductions, finite automata), complexity theory (elements of Bachmann-Landau notation, time complexity, complexity classes, polynomial-time reductions, P, NP, NP-completeness, P versus NP), and additional topics (such as Turing's original formulation of his computing machine, the mathematical context for Turing's interest in it, and a summary of Karp's first enumeration of a group of NP-complete problems). However, as emphasized above, what matters is not the set of topics covered, but how MacCormick presents them. A major advantage is how the author shows the results in the context of practical programming. For instance, he treats uncomputability using some very simple Python examples. In this way, everything is easy to understand even for programming laypeople, while the underlying ideas still make an impression. Through this approach, MacCormick shows that computability and complexity are daily issues in the computing world. Since this is everyday life for almost all contemporary engineers, this successful presentation leads me to recommend the book.
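To show the flavor of such a Python-based treatment, here is a sketch of the classic diagonal argument behind the halting problem; the function names are my own illustrative choices, not necessarily the programs MacCormick uses.

```python
# Sketch of the classic proof that no general halting decider exists.
# 'halts' is a hypothetical oracle; the contradiction below shows it
# cannot be implemented, so this stub only fixes the interface.

def halts(func, arg):
    # Hypothetical: would return True iff func(arg) terminates.
    raise NotImplementedError("no general halting decider exists")

def troublemaker(func):
    # If halts() were implementable, troublemaker(troublemaker)
    # would halt exactly when it does not halt -- a contradiction.
    if halts(func, func):
        while True:
            pass  # loop forever
    return "halted"
```

The whole argument fits in a dozen lines of familiar syntax, which is precisely the accessibility the review praises.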