Computing Reviews
The matrix eigenvalue problem (1st ed.): GR and Krylov subspace methods
Watkins D., Society for Industrial and Applied Mathematics, Philadelphia, PA, 2007. 452 pp. Type: Book (9780898716412)
Date Reviewed: Oct 27 2008

Many numerical problems can be restated in matrix form, often simplifying not only the mathematical documentation but also the solution implementation. Yet this approach appears to be underutilized in practice; I don’t know why, although I could speculate to no end. As the title suggests, this book is about eigenvalue problems for real and complex matrices, that is, ordinary two-dimensional arrays, so do not expect algorithms for general n-dimensional tensors.

The first two chapters summarize basic matrix types and the operations that can be performed on them: preliminary material that Watkins hopes the reader already knows. They provide a quick and convenient reference to be consulted while reading the book; to be honest, I used these chapters quite regularly. Chapter 3 is about introducing zeros into your matrix. It describes the techniques used to reduce matrices to either triangular or Hessenberg form, using reflectors, rotators, and Gauss transformations. It ends by describing the GR decomposition of a matrix and how elimination matrices can be used to reduce any matrix to Hessenberg form.
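To make the reduction concrete, here is a minimal sketch of the reflector-based reduction to upper Hessenberg form, assuming NumPy. This is my illustration of the standard Householder technique, not code from the book, which also develops the rotator and Gauss-transformation variants.

```python
import numpy as np

def hessenberg(A):
    """Reduce a square matrix to upper Hessenberg form with Householder
    reflectors. Returns H, which is similar to A (same eigenvalues)."""
    H = np.array(A, dtype=float, copy=True)
    n = H.shape[0]
    for k in range(n - 2):
        x = H[k+1:, k].copy()
        if np.linalg.norm(x[1:]) == 0:
            continue                     # column already has its zeros
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])
        v /= np.linalg.norm(v)
        # Apply the reflector P = I - 2 v v^T from both sides: H <- P H P
        H[k+1:, k:] -= 2.0 * np.outer(v, v @ H[k+1:, k:])
        H[:, k+1:] -= 2.0 * np.outer(H[:, k+1:] @ v, v)
    return H

A = np.random.default_rng(0).standard_normal((6, 6))
H = hessenberg(A)
print(np.allclose(np.tril(H, -2), 0))    # True: Hessenberg shape
print(np.allclose(np.sort(np.linalg.eigvals(A)),
                  np.sort(np.linalg.eigvals(H))))  # True: spectrum preserved
```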

Chapter 4 covers the iterations that, used in conjunction with the reduction techniques, successively approximate triangular form. It describes the use of shifts to accelerate convergence, and details the bulge chase: each iteration of the GR algorithm destroys the Hessenberg structure of the matrix being operated on, and the bulge chase restores that structure by progressively reintroducing zeros in the desired locations.
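As a hedged sketch of the iteration's skeleton, here is explicit single-shift QR, the best-known member of the GR family, with Rayleigh-quotient shifts and deflation, assuming NumPy. This is not Watkins's implicit bulge-chase formulation; real matrices with complex eigenvalues need a double-shift variant, so the demo uses a symmetric matrix.

```python
import numpy as np

def eigenvalues_qr(A, tol=1e-12, max_iter=500):
    """Explicit single-shift QR iteration with deflation. Each step
    H <- R Q + sigma I, where Q R = H - sigma I, is a similarity
    transform; once the last subdiagonal entry is negligible, H[-1,-1]
    is accepted as an eigenvalue and the problem is deflated."""
    H = np.array(A, dtype=float, copy=True)  # in practice, one first
    eigs = []                                # reduces to Hessenberg form
    while H.shape[0] > 1:
        n = H.shape[0]
        for _ in range(max_iter):
            sigma = H[-1, -1]                          # Rayleigh shift
            Q, R = np.linalg.qr(H - sigma * np.eye(n))
            H = R @ Q + sigma * np.eye(n)
            if abs(H[-1, -2]) < tol * (abs(H[-1, -1]) + abs(H[-2, -2])):
                break
        eigs.append(H[-1, -1])
        H = H[:-1, :-1]                                # deflate
    eigs.append(H[0, 0])
    return np.sort(np.array(eigs))

rng = np.random.default_rng(1)
S = rng.standard_normal((6, 6)); S = (S + S.T) / 2     # real spectrum
print(eigenvalues_qr(S))
print(np.linalg.eigvalsh(S))                           # reference values
```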

The next chapter takes a slightly more theoretical approach, proving convergence of the GR algorithm and identifying the cases in which convergence is quadratic and the special instances in which it is cubic. Chapter 6 steps back and reformulates the theory and algorithms of chapter 4 for the generalized eigenvalue problem (Av = λBv), where the two matrices are transformed into an upper Hessenberg and triangular matrix pair. The pair is then used in a generalization of the GR algorithm, with a two-stage bulge chase to return the matrices to their original forms.
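In a library setting, this Hessenberg-triangular reduction and the subsequent iteration sit behind a single call; a small sketch assuming SciPy (LAPACK's QZ-based generalized Schur machinery underneath), again my illustration rather than the book's code:

```python
import numpy as np
from scipy import linalg

# Generalized eigenvalue problem A v = lambda B v
rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 5))

# Passing a second matrix to scipy.linalg.eig solves the generalized
# problem via a QZ-type (generalized Schur) decomposition.
w, V = linalg.eig(A, B)

# Residuals ||A v - lambda B v|| should be near machine precision
for lam, v in zip(w, V.T):
    print(abs(lam), np.linalg.norm(A @ v - lam * (B @ v)))
```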

Chapter 7 explores the bulge chase in depth, starting with an explanation of how any shifts that have been applied are propagated along the matrix within the bulge. Numerical examples show the relative error as a function of the number of iterations. Shift blurring is also illustrated, through simulations that show the relative error increasing as the number of concurrent shifts grows. To make sure the reader understands the process, Watkins describes bulge propagation in reverse, and also shows how a forward and a reverse bulge chase can be applied concurrently.
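The mechanism is easy to state: the shifts enter the chase only through the first column of the shift polynomial applied to e1, which determines the transformation that creates the initial bulge. A minimal sketch of that vector for a Hessenberg matrix, assuming NumPy (this is the standard implicit-shift setup, not a particular routine from the book):

```python
import numpy as np

def bulge_vector(H, shifts):
    """First column of p(H) e1 with p(z) = prod(z - sigma_i). For a
    Hessenberg H, only the leading len(shifts)+1 entries are nonzero;
    this vector seeds the transformation that starts the bulge chase."""
    x = np.zeros(H.shape[0])
    x[0] = 1.0                           # e1
    for sigma in shifts:
        x = H @ x - sigma * x            # apply (H - sigma I)
    return x

H = np.triu(np.random.default_rng(3).standard_normal((6, 6)), -1)
print(bulge_vector(H, shifts=[1.5, -0.5]))  # nonzero only in entries 0..2
```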

One often needs the eigenvalues of a product of matrices. Chapter 8 starts by showing how explicitly multiplying the matrices before computing the eigenvalues can result in a sizable loss of precision if any of the factors is ill conditioned. Having set the stage for why one would want to avoid forming the product, Watkins describes the GR algorithm for a product of matrices. The singular value decomposition of a matrix is shown to be equivalent to computing the eigensystem of the product of the matrix with its conjugate transpose. Being a product of two matrices, it fits nicely within the general GR algorithm for products, and the discussion leads to the differential quotient-difference algorithm. A significant amount of effort goes into explaining the algorithm for Hamiltonian matrices, with details for both the real and complex eigenvalue cases. Unitary matrices that can be expressed as products of rotators are also covered in some detail.
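The precision argument is easy to reproduce: forming the explicit product A^T A squares the condition number, so small singular values lose roughly half their digits. A quick check assuming NumPy, illustrating the motivation rather than the book's product GR algorithm:

```python
import numpy as np

# Build A with known singular values spanning ten orders of magnitude
rng = np.random.default_rng(4)
n = 6
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s_true = np.logspace(0, -10, n)
A = U @ np.diag(s_true) @ V.T

s_direct = np.linalg.svd(A, compute_uv=False)   # works on A itself
s_product = np.sqrt(np.abs(np.linalg.eigvalsh(A.T @ A)))[::-1]  # via A^T A

# The direct SVD recovers the smallest singular value to machine
# precision; the explicit product gets almost no correct digits.
print(s_true[-1], s_direct[-1], s_product[-1])
```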

What if we need to deal with very large sparse matrices? This is the subject of the final chapter. Many of the techniques described up to this point do not work well on matrices that can barely fit into system memory, so algorithms that cater specifically to such problems are needed. Krylov subspace methods approximate eigenvalues by building subspaces through repeated multiplication of some initial vector by the original matrix. Restarts are needed when convergence stalls or when available memory runs out; two restart methods are described, followed by a demonstration that they preserve convergence. There are many formulations of how the vectors spanning the Krylov subspace are constructed: Watkins discusses the Arnoldi process and both the symmetric and unsymmetric Lanczos processes, showing how restarts are effected for each. After moving on to large Hamiltonian and symplectic matrices, the chapter ends with the extraction of eigenvalues from large matrix products.
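For a flavor of the Krylov machinery, here is a minimal Arnoldi sketch, assuming NumPy, with no restarting and a dense basis; production Krylov codes add the implicit restarts and deflation the chapter discusses.

```python
import numpy as np

def arnoldi(A, v0, m):
    """Build an orthonormal basis Q of the Krylov subspace
    span{v0, A v0, ..., A^(m-1) v0} and the small Hessenberg matrix
    H = Q* A Q. Eigenvalues of H (Ritz values) approximate extremal
    eigenvalues of A using only matrix-vector products."""
    n = v0.shape[0]
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):           # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:          # invariant subspace found
            return Q[:, :j + 1], H[:j + 1, :j + 1]
        Q[:, j + 1] = w / H[j + 1, j]
    return Q[:, :m], H[:m, :m]

rng = np.random.default_rng(5)
A = rng.standard_normal((500, 500))
Q, H = arnoldi(A, rng.standard_normal(500), 50)
print(np.sort_complex(np.linalg.eigvals(H))[-3:])  # leading Ritz values
```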

My overall impressions are positive. The book contains an excellent mix of theory and practice: not only are the key theorems and proofs provided, but implementation details are also discussed, right down to pseudocode for some algorithms and even a few convergence and error-propagation simulation studies. Another distinctive aspect of this book is that the majority of the proofs are developed through a multitude of exercises. Working through a proof yourself, with step-by-step guidance, helps you understand it better, and moving the proofs into exercises also significantly improves the readability of the main text. Would I recommend this book? I have already done so to a number of my colleagues.

Reviewer: Bernard Kuc | Review #: CR136186 (0909-0820)
Related topics: Eigenvalues And Eigenvectors (Direct And Iterative Methods) (G.1.3); Numerical Algorithms And Problems (F.2.1); Numerical Linear Algebra (G.1.3)
Other reviews under "Eigenvalues And Eigenvectors (Direct And Iterative Methods)":

On two more Eigenvalue methods for an alternating sequential parallel system. Wallach Y. Computing 32(1): 33-41, 1984. Type: Article. Date reviewed: Feb 1 1985
Bounds for the Positive Eigenvectors of Nonnegative Matrices and for their Approximations by Decomposition. Courtois P., Semal P. Journal of the ACM 31(4): 804-825, 1984. Type: Article. Date reviewed: Jun 1 1985
Solution of large, dense symmetric generalized eigenvalue problems using secondary storage. Grimes R., Simon H. (ed) ACM Transactions on Mathematical Software 14(3): 241-256, 1988. Type: Article. Date reviewed: Mar 1 1989
more...
