The term “linear systems” in the title refers to equations of the form Ax = b, where x and b are elements of a Euclidean space and A is a nonsingular matrix. Iterative methods for solving these systems have been used since well before the advent of computers. They are especially efficient for large systems where restrictions on time or space make direct solution methods impractical. Each iteration of these procedures requires little more computation than one matrix-vector product, and storage is required only for the matrix and a few vectors. Although convergence is only linear, an acceptable approximation to the solution can often be obtained faster than with a direct method, provided the iterative method is well matched to the problem.
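To make the cost claim concrete, here is a minimal sketch of one such basic method, the Jacobi iteration (which the book covers); the function name and tolerances are my own, not the book's:

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-10, max_iter=1000):
    """Jacobi iteration for A x = b (assumes A has a nonzero diagonal).

    Each sweep costs roughly one matrix-vector product, and only the
    matrix and a few vectors need to be stored. Convergence is linear
    and holds when the iteration matrix has spectral radius < 1, e.g.
    for strictly diagonally dominant A.
    """
    d = np.diag(A)
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        # x_{k+1} = D^{-1} (b - (A - D) x_k), with D the diagonal of A
        x_new = (b - A @ x + d * x) / d
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```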
According to the preface, this text grew out of material previously used in numerical linear algebra courses. The topics include basic iterative methods such as Jacobi and Gauss-Seidel, as well as the more complex multigrid and domain decomposition methods. The authors introduce preconditioners and iteration matrices early on and use them in describing and analyzing the methods. An entire chapter is devoted to Toeplitz systems and their solution with circulant matrices as preconditioners. The final chapter describes particular boundary-value problems and their approximation by linear algebraic systems.
In its focus on Krylov subspace sequences, this work differs from other textbooks on iterative methods [1]. A Krylov sequence (K0, K1, …) of nested subspaces is determined by a matrix B and a single vector f: K0 = span(f) and Ki+1 = span(Ki, B·Ki). The matrix B is often A, the matrix of the system, or some combination of A with the preconditioner; the vector f is usually an element of the range of A, such as b. The authors use these sequences to describe and analyze iterative methods. For the minimum residual method (MINRES), for example, the matrix generating the Krylov spaces is A and the vector is the initial residual b − Ax0 for some initial estimate x0. The ith iterate of MINRES is then found by minimizing the norm of Ax − b over x in x0 + Ki.
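The construction just described can be sketched directly: build an orthonormal basis for Ki and minimize the residual norm by least squares. This is only an illustration of the definition, not the efficient Lanczos-based recurrence that actual MINRES implementations use; the function name and tolerance are my own:

```python
import numpy as np

def krylov_minres_sketch(A, b, x0, m):
    """Naive residual minimization over a Krylov space (illustration only).

    Builds an orthonormal basis V of K_m = span(r0, A r0, ..., A^{m-1} r0)
    with r0 = b - A x0, then minimizes ||b - A(x0 + V y)|| over y by
    least squares. Real MINRES exploits a short Lanczos recurrence
    instead of storing and re-orthogonalizing the full basis.
    """
    r0 = b - A @ x0
    V = np.empty((len(b), 0))
    v = r0 / np.linalg.norm(r0)
    for _ in range(m):
        V = np.column_stack([V, v])
        w = A @ v
        w -= V @ (V.T @ w)          # orthogonalize against current basis
        nw = np.linalg.norm(w)
        if nw < 1e-12:              # Krylov space has become invariant
            break
        v = w / nw
    # min over y of ||r0 - (A V) y||, i.e. ||b - A x|| over x in x0 + K_m
    y, *_ = np.linalg.lstsq(A @ V, r0, rcond=None)
    return x0 + V @ y
```

When m reaches the dimension of the invariant Krylov space, the exact solution lies in x0 + Km and the residual vanishes.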
They also give a result from Chou [2], which shows that Krylov sequences are optimal in a certain sense. That is, suppose you choose another nested sequence of subspaces (L0, L1, …) and propose an algorithm whose ith iterate zi minimizes the norm of the residual over Li. If this algorithm takes k iterations to achieve ‖Azk − b‖ < ε, then MINRES using the Krylov sequence will achieve the same accuracy in at most 2k + 1 iterations. The new algorithm may be faster, but not by orders of magnitude.
Although the authors have tried to achieve a textbook style of exposition rather than that of a research paper, they present these iterative methods and convergence results as a sequence of topics without much motivation or connecting exposition. In the chapter on Toeplitz matrices, there should be a paragraph or two explaining why Toeplitz matrices form a very important class of matrices. Also, there is no need for a proof of the optimality result for Krylov sequences. It is more important to explain the significance of the result.
I did like the historical note in the preface, where I discovered that Krylov’s first use of the subspace sequence was in a 1931 paper on a method to determine the characteristic polynomial of a matrix.