Mathematical logic has its roots in mathematics and philosophy, and is one of the foundations of computer science (CS). All three disciplines continue to contribute to the development of the field, and each provides a different perspective. The authors of this book (Donald Loveland, a computer scientist; Richard Hodel, a mathematician; and Susan Sterrett, a philosopher) have taught a multidisciplinary course on logic for more than a decade. They wrote this book based on that course.
The book is divided into three parts. Part 1, “Proof Theory,” covers propositional and predicate calculus, leading to most of a proof of the completeness of resolution theorem proving for predicate calculus (the remaining parts of the proof are in the exercises). A chapter and two appendices on Prolog, and appendices on induction and Tarskian semantics for predicate calculus, are also included. Part 2, “Computability Theory,” covers register machines and recursion theory, and Part 3, “Philosophical Logic,” covers Anderson and Belnap’s relevance logic: its philosophical motivation, proof theory, and semantics.
The distinction between CS and math in the first two parts is not very marked; as it happens, the computer scientist wrote Part 1 on proof theory and the mathematician wrote Part 2 on computability theory, but it could just as easily have been the other way around. Overall, the material covered in these two parts is a little off the beaten track relative to most mathematical logic courses: in particular, completeness is proved for resolution rather than for natural deduction, and both set theory and model theory are omitted.
As a computer scientist, I would have liked to see more CS in the first two parts. The Davis–Putnam–Logemann–Loveland (DPLL) algorithm for satisfiability in propositional calculus, which is very widely used in practice, is mentioned but not described (the student is advised to Google it if interested). I found this rather strange, considering that one of the authors (Donald Loveland) is the second L in the algorithm’s name. There is no discussion of the state of the art in SAT solving, automated theorem proving, or automated verification, all subjects that would surely interest CS students taking the course. The bad old tradition among theorists of writing algorithms with gotos rather than loops persists 50 years after Dijkstra’s classic 1968 manifesto, “Go To Statement Considered Harmful.”
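Since the book leaves DPLL undescribed, a minimal sketch may help orient the curious reader. This is my own illustration, not code from the book, and it omits the pure-literal rule and everything that makes modern solvers fast:

```python
# A minimal DPLL sketch. Clauses are sets of nonzero ints in DIMACS
# style: a positive int is a variable, a negative int its negation.

def simplify(clauses, lit):
    """Drop clauses satisfied by lit; delete the falsified literal -lit."""
    return [c - {-lit} for c in clauses if lit not in c]

def dpll(clauses, assignment=frozenset()):
    """Return a satisfying set of literals, or None if unsatisfiable."""
    clauses = [set(c) for c in clauses]
    assignment = set(assignment)
    # Unit propagation: a one-literal clause forces that literal.
    unit = next((c for c in clauses if len(c) == 1), None)
    while unit:
        lit = next(iter(unit))
        assignment.add(lit)
        clauses = simplify(clauses, lit)
        unit = next((c for c in clauses if len(c) == 1), None)
    if any(not c for c in clauses):
        return None            # an empty clause signals a conflict
    if not clauses:
        return assignment      # every clause satisfied
    # Splitting rule: branch on a literal from some unresolved clause.
    lit = next(iter(clauses[0]))
    for choice in (lit, -lit):
        result = dpll(simplify(clauses, choice), assignment | {choice})
        if result is not None:
            return result
    return None

# (p ∨ q) ∧ (¬p ∨ q) ∧ (¬q ∨ r) is satisfiable;
# adding the unit clauses ¬r and q makes it unsatisfiable.
print(dpll([{1, 2}, {-1, 2}, {-2, 3}]) is not None)         # True
print(dpll([{1, 2}, {-1, 2}, {-2, 3}, {-3}, {2}]) is None)  # True
```

Real solvers add pure-literal elimination, branching heuristics, clause learning, and careful data structures; the recursion above captures only the skeleton of unit propagation plus splitting.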
Part 3 is substantially off the beaten track of logic courses. I found it interesting from a technical standpoint. There is a very careful and readable account of how the natural deduction rules for propositional relevance logic can be arrived at by restricting the power of the corresponding rules for classical propositional logic, and of how the resulting system can be given a semantics in a four-valued truth system.
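The four-valued semantics is Belnap’s: each proposition is assigned the set of classical values it is “told,” giving True = {t}, False = {f}, Both = {t, f}, and Neither = {}. As a sketch (the value names are mine; the book’s notation may differ), the connectives can be computed directly on these sets:

```python
# Belnap's four truth values, encoded as sets of "told" classical values.
T, F = frozenset({'t'}), frozenset({'f'})
BOTH, NEITHER = frozenset({'t', 'f'}), frozenset()

def neg(a):
    # Negation swaps evidence for truth with evidence for falsity.
    swap = {'t': 'f', 'f': 't'}
    return frozenset(swap[x] for x in a)

def conj(a, b):
    # A conjunction is told true iff both conjuncts are told true,
    # and told false iff at least one conjunct is told false.
    out = set()
    if 't' in a and 't' in b:
        out.add('t')
    if 'f' in a or 'f' in b:
        out.add('f')
    return frozenset(out)

def disj(a, b):
    # Disjunction is the De Morgan dual of conjunction.
    return neg(conj(neg(a), neg(b)))

print(conj(T, BOTH) == BOTH)   # True: told true by both, told false by BOTH
print(disj(T, NEITHER) == T)   # True
print(neg(BOTH) == BOTH)       # True: contradictory evidence stays contradictory
```

Note that implication in relevance logic is not truth-functional in this simple way; the set encoding above covers only negation, conjunction, and disjunction.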
I found it less satisfying from a philosophical and historical standpoint. The attitude toward Anderson and Belnap is rather worshipful; earlier work on relevance logic by Orlov, Ackermann, and Church, and earlier nonclassical logics, such as intuitionistic logic, are not mentioned. Moreover, the primary justification given here for preferring relevance logic to classical logic is that it is closer to usage in natural language. However, the book does not point out that, though the implication of relevance logic may be closer to the English “if/then” than material implication is, it is not very close. “A → (A ∨ B)” is valid in relevance logic (and in every other logic I know of), but “If Paris is in France then either Paris is in France or Bambi is a gorilla” would be a very anomalous utterance. Whether an English sentence involving “if/then” sounds reasonable depends on many issues of pragmatics (in the linguistic sense) and will not be adequately reflected in any symbolic logic.
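The classical half of that validity claim is easy to verify; a brute-force truth-table check (a throwaway sketch of my own, nothing from the book) confirms it:

```python
# Truth-table check that A → (A ∨ B) is a classical tautology.
from itertools import product

def implies(p, q):
    # Material implication: false only when p is true and q is false.
    return (not p) or q

assert all(implies(a, a or b) for a, b in product([True, False], repeat=2))
print('A → (A ∨ B) is classically valid')
```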
All in all, for undergraduate logic students, there is certainly value in learning about resolution theorem proving and nonclassical logics. If that is the material an instructor wants to cover, this book is very well suited for the purpose.