The numerical treatment of large linear systems of equations is a major field in computational mathematics. There are two classes of methods: direct methods, which deliver the solution vector after a finite number of arithmetic operations (for example, Gaussian elimination and, in exact arithmetic, the conjugate gradient method for symmetric positive definite systems), and iterative schemes, which generate a sequence of vectors converging toward the solution (for example, the Jacobi and Gauss-Seidel iterations). For an introduction, see Trefethen and Bau's book [1].
In this paper, the authors propose a new iterative scheme for solving the system Ax = b, where the coefficient matrix A is symmetric positive definite. They point out that this task is equivalent to the problem of minimizing the functional

f(x) = (1/2) xᵀA x - bᵀx.
Given an approximate solution x_k, the next approximation is defined as x_k - α_k g_k, where g_k is the gradient of f at x_k, and hence equals the residual A x_k - b of the system; α_k is a step length to be determined. The proposed method is thus related to the steepest descent method known from numerical optimization (see, for example, Fletcher's book [2]).
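The classical steepest descent iteration that the paper builds on can be sketched as follows. This is a minimal illustration of the standard method with exact line search, not the step-length rule proposed in the paper under review; the function name and parameters are the reviewer's own.

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-10, max_iter=1000):
    """Classical steepest descent for a symmetric positive definite system Ax = b.

    The gradient of f(x) = (1/2) x^T A x - b^T x at x_k is the residual
    g_k = A x_k - b, and the exact line-search step length is
    alpha_k = (g_k^T g_k) / (g_k^T A g_k).
    """
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        g = A @ x - b                      # gradient = residual
        if np.linalg.norm(g) < tol:        # converged
            break
        alpha = (g @ g) / (g @ (A @ g))    # exact line search along -g
        x = x - alpha * g
    return x
```

For instance, with A = [[3, 1], [1, 2]] and b = [1, 0], the iteration converges linearly to the solution of Ax = b.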
As pointed out in the paper, the optimal choice of α_k requires knowledge of the largest and smallest eigenvalues of A, and this information is not available in general. The main contribution of the paper is a new rule for determining α_k. The paper is well organized and well written, and should be accessible to a wide audience, in particular to those working in numerical linear algebra.
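For context, the classical results behind that remark (standard facts about gradient descent on quadratics, not taken from the paper under review) are: with a fixed step length, the optimal choice and the resulting convergence factor are

\[
\alpha_{\mathrm{opt}} \;=\; \frac{2}{\lambda_{\min}(A) + \lambda_{\max}(A)},
\qquad
\|x_{k+1} - x^{*}\| \;\le\; \frac{\kappa - 1}{\kappa + 1}\,\|x_k - x^{*}\|,
\qquad
\kappa = \frac{\lambda_{\max}(A)}{\lambda_{\min}(A)},
\]

which is why step-length rules that avoid explicit eigenvalue information are of practical interest.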