Computing Reviews
A new gradient method with an optimal stepsize property
Dai Y., Yang X. Computational Optimization and Applications 33(1): 73-88, 2006. Type: Article
Date Reviewed: Oct 31 2006

The numerical treatment of large linear systems of equations is a major field in computational mathematics. There are two classes of methods: direct methods, which deliver the solution vector after a finite number of arithmetic operations (for example, Gaussian elimination and, for positive definite systems, the conjugate gradient method), and iterative schemes, which generate a sequence of vectors converging toward the solution (for example, the Jacobi and Gauss-Seidel iterations). For an introduction to these, see Trefethen and Bau’s book [1].

In this paper, the authors propose a new iterative scheme for solving the system Ax = b, where the coefficient matrix A is symmetric positive definite. They point out that this task is equivalent to the problem of minimizing the functional

f(x) = (1/2) x^T A x - b^T x.

Given an approximate solution xk, the next approximation is defined as xk - αk gk, where gk is the gradient of f at xk, and hence equal to the residual Axk - b of the system; αk is a stepsize to be determined. The proposed method is thus related to the steepest descent method known from numerical optimization (see, for example, Fletcher’s book [2]).
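The iteration described above can be sketched as follows. This is the classical steepest descent method with the textbook exact line search stepsize αk = (gk^T gk)/(gk^T A gk), not the new stepsize rule proposed by Dai and Yang; the matrix and right-hand side are illustrative.

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-10, max_iter=1000):
    """Classical steepest descent for A x = b with A symmetric positive
    definite, using the exact line search stepsize
    alpha_k = (g_k^T g_k) / (g_k^T A g_k)."""
    x = x0.astype(float)
    for _ in range(max_iter):
        g = A @ x - b                    # gradient of f(x) = 0.5 x^T A x - b^T x,
                                         # i.e., the residual of the system
        if np.linalg.norm(g) < tol:
            break
        alpha = (g @ g) / (g @ (A @ g))  # exact minimizer of f along -g
        x = x - alpha * g
    return x

# Small symmetric positive definite example (illustrative data)
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = steepest_descent(A, b, np.zeros(2))
```

Each step minimizes f exactly along the negative gradient direction, which is the benchmark against which modified stepsize rules such as the authors' are usually compared.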

As pointed out in the paper, the optimal choice of αk requires knowledge of the maximum and minimum eigenvalues of A, and this information is not available in general. The main contribution of this paper is that the authors propose a new rule for determining αk. This paper is well organized and well written. It should be accessible to a wide audience, in particular to those working in numerical linear algebra.
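The eigenvalue dependence mentioned here is the classical result that, for the fixed-stepsize iteration x_{k+1} = x_k - α(Ax_k - b), the spectral radius of the error matrix I - αA is minimized by α = 2/(λmin + λmax). The sketch below computes this optimal constant stepsize directly from the eigenvalues (the very information the authors' new rule avoids needing); the data is illustrative.

```python
import numpy as np

# Fixed-stepsize gradient iteration x_{k+1} = x_k - alpha * (A x_k - b).
# Its error matrix is I - alpha*A, whose spectral radius is minimized by
# alpha = 2 / (lambda_min + lambda_max).  This textbook optimal stepsize
# requires the extreme eigenvalues of A (not the paper's new rule).
A = np.array([[4.0, 1.0], [1.0, 3.0]])  # symmetric positive definite
b = np.array([1.0, 2.0])

lam = np.linalg.eigvalsh(A)            # eigenvalues in ascending order
alpha = 2.0 / (lam[0] + lam[-1])       # optimal constant stepsize

x = np.zeros(2)
for _ in range(200):
    x = x - alpha * (A @ x - b)        # gradient step with fixed stepsize
```

In practice λmin and λmax are unknown for large systems, which is precisely the gap the paper's eigenvalue-free stepsize rule addresses.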

Reviewer:  S.-Å. Gustafson Review #: CR133500 (0709-0911)
1) Trefethen, L.N.; Bau, D., III. Numerical linear algebra. SIAM, Philadelphia, PA, 1997.
2) Fletcher, R. Practical methods of optimization, vol. 1: Unconstrained optimization. John Wiley & Sons, Chichester, UK, 1980.
Categories: Gradient Methods (G.1.6); Eigenvalues And Eigenvectors (Direct And Iterative Methods) (G.1.3); Minimax Approximation And Algorithms (G.1.2); Approximation (G.1.2); Optimization (G.1.6)
Other reviews under "Gradient Methods":
A convergence theorem of Rosen’s gradient projection method
Du D. (ed), Zhang X. Mathematical Programming: Series A 36(2): 135-144, 1986. Type: Article
Jul 1 1988
A class of filled functions for finding global minimizers of several variables
Ge R., Qin Y. Journal of Optimization Theory and Applications 54(2): 241-252, 1987. Type: Article
Feb 1 1988
Nonlinear parameter estimation: an integrated system in BASIC
Nash J., Walker-Smith M. Marcel Dekker, Inc., New York, NY, 1987. Type: Book (9780824778194)
Jan 1 1989
more...

Reproduction in whole or in part without permission is prohibited.   Copyright 1999-2024 ThinkLoud®