Computing Reviews
Vector extrapolation methods with applications
Sidi A., SIAM-Society for Industrial and Applied Mathematics, Philadelphia, PA, 2017. 447 pp. Type: Book (978-1-611974-95-9)
Date Reviewed: Apr 30 2020

Like his papers in the field, Sidi’s Vector extrapolation methods with applications offers a wide overview of extrapolation techniques that improve the convergence of a given sequence of vectors, often produced by an iterative method for a large sparse linear system, which in turn often arises from the discretization of a partial differential equation (PDE).

Besides introducing the methods and their analysis, the book contains valuable computer code (addressed below). The methods are quite general: there is no need to know the PDE, the matrix, or its spectrum. Still, the success of the method does depend on the spectrum of the iteration matrix. If all its eigenvalues have nearly the same magnitude (as in successive over-relaxation (SOR) for the Poisson equation), there is no hope of “killing” them all. If, on the other hand, there are only a few big eigenvalues, vector extrapolation can “kill” them, leading to rapid convergence. Indeed, it automatically designs the best linear combination: the one that avoids these bad eigenvectors.
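To make this concrete, here is a minimal Python sketch (my own illustration, not the book’s code) of minimal polynomial extrapolation (MPE), one of the methods the book covers; it computes exactly such a linear combination from a window of stored iterates:

```python
import numpy as np

def mpe(X):
    """Minimal polynomial extrapolation (MPE), in sketch form.

    X has shape (n, k+2) and holds the iterates x_0, ..., x_{k+1}
    as columns.  Returns the extrapolated vector sum_j gamma_j x_j.
    """
    U = np.diff(X, axis=1)                  # differences u_j = x_{j+1} - x_j
    # Least-squares fit: sum_{j<k} c_j u_j ~ -u_k, with c_k fixed to 1
    c, *_ = np.linalg.lstsq(U[:, :-1], -U[:, -1], rcond=None)
    c = np.append(c, 1.0)
    gamma = c / c.sum()                     # weights summing to one
    return X[:, :-1] @ gamma                # extrapolated approximation

# Toy check: a linear fixed-point iteration x <- M x + b whose error has
# a minimal polynomial of degree 5, so MPE with k = 5 is (essentially) exact.
rng = np.random.default_rng(0)
n = 5
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
M = Q @ np.diag([0.1, 0.2, 0.3, 0.4, 0.5]) @ Q.T   # spectral radius 0.5
b = rng.standard_normal(n)
x_true = np.linalg.solve(np.eye(n) - M, b)
X = np.zeros((n, 7))                        # x_0 = 0 and six more iterates
for j in range(6):
    X[:, j + 1] = M @ X[:, j] + b
x_mpe = mpe(X)
```

Here `x_mpe` agrees with the exact solution to roundoff, while the last plain iterate is still far from it.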

This may happen in difficult and sensitive problems with a highly oscillatory solution, like the Helmholtz equation. There, it makes sense to use a good iterative method like multigrid. On a uniform grid, you could then use an a priori formula to evaluate the spectrum and see in advance how suitable the iteration really is for vector extrapolation. Even on an unstructured mesh, where no such formula is available, vector extrapolation still works well.
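For instance (my example; the simplest a priori formula is for Jacobi rather than SOR or multigrid), the Jacobi iteration matrix for the 1-D Poisson equation on a uniform grid has the classical closed-form spectrum cos(kπh), which can be checked numerically:

```python
import numpy as np

n = 31                                      # interior grid points
h = 1.0 / (n + 1)                           # uniform mesh width
formula = np.cos(np.arange(1, n + 1) * np.pi * h)   # a priori Jacobi spectrum

# Verify against the Jacobi iteration matrix M = I - D^{-1} A for the
# standard tridiagonal Poisson matrix A = tridiag(-1, 2, -1), where D = 2I.
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n) - A / 2.0
computed = np.sort(np.linalg.eigvalsh(M))
# Many eigenvalues lie close to +-1 in magnitude, so plain Jacobi leaves
# many "bad" modes -- exactly what the a priori check is meant to reveal.
```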

Nonlinear PDEs like the Maxwell and Schrödinger systems can be linearized beforehand and reduced to the previous situation: use an outer Newton iteration (with real arithmetic, as in Ascher and Fibich). Although this is rather expensive, it still parallelizes well.
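The structure of such an outer iteration can be sketched as follows (a generic Newton sketch on a toy algebraic system, not taken from the book; the inner linear solve is where an iterative method plus vector extrapolation would go):

```python
import numpy as np

def newton(F, J, x, tol=1e-12, maxit=50):
    """Outer Newton iteration: each step solves a *linear* system
    J(x) dx = -F(x).  In a PDE setting, that inner solve is where an
    iterative method with vector extrapolation would be applied."""
    for _ in range(maxit):
        dx = np.linalg.solve(J(x), -F(x))   # stand-in for the inner solver
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy nonlinear system: x0^2 + x1^2 = 1 and x0 = x1.
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])
root = newton(F, J, np.array([1.0, 0.5]))   # converges to (1, 1)/sqrt(2)
```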

Unfortunately, vector extrapolation may require a lot of memory to store many vectors. To avoid this, use Saad’s oblique projection: store only a few of the latest vectors and project onto them. For each new vector introduced, one old vector is dropped. Alternatively, use cycles: start with a few initial iterations, run a few more iterations with vector extrapolation on top, then restart, and so on.
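Cycling can be sketched as follows (my own illustration, reusing the MPE least-squares construction on a diagonal model iteration; all names here are mine):

```python
import numpy as np

def mpe_cycle(step, x, k):
    """One extrapolation cycle: run k+1 iterations of the fixed-point
    map `step`, then apply one MPE extrapolation.  Only k+2 vectors are
    ever stored; the caller restarts from the returned vector."""
    X = [x]
    for _ in range(k + 1):
        X.append(step(X[-1]))
    X = np.column_stack(X)
    U = np.diff(X, axis=1)
    c, *_ = np.linalg.lstsq(U[:, :-1], -U[:, -1], rcond=None)
    c = np.append(c, 1.0)
    return X[:, :-1] @ (c / c.sum())

# Diagonal model iteration x <- M x + b with spectrum in [0, 0.9]:
eigs = np.linspace(0.0, 0.9, 40)
b = np.ones(40)
step = lambda x: eigs * x + b
x_true = b / (1.0 - eigs)

x = np.zeros(40)
for _ in range(6):                          # six short, restarted cycles
    x = mpe_cycle(step, x, k=5)
```

Each cycle stores only seven vectors, yet the restarted iteration converges far faster than the plain fixed-point map would over the same number of steps.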

This works well in 2D, where there are not too many bad eigenvalues. Fortunately, most laser applications are in 2D. Still, Bose-Einstein condensates (BECs) use 3D (Olmert). Likewise, the Maxwell system is often solved in 3D and the Hartree-Fock system in quantum chemistry uses 3D as well.

In 3D, the previous tricks may no longer work: there are just too many bad eigenvalues. To “kill” them all, you must stick to the original classical version and store a lot of vectors. Fortunately, Sidi’s code is highly efficient. It uses a standard array (direct indexing) and an elegant QR decomposition, is easily rewritten in C++, and is easily converted to the (mathematically equivalent) generalized minimal residual method (GMRES) by introducing small Givens matrices to help triangularize the Lanczos matrix.
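The Givens step can be illustrated as follows (a generic sketch of triangularizing an upper Hessenberg matrix with 2x2 rotations, as in GMRES; this is not Sidi’s actual code):

```python
import numpy as np

def givens_triangularize(H):
    """Reduce an (m+1) x m upper Hessenberg matrix to upper triangular
    form with a sequence of 2x2 Givens rotations, as done in GMRES.
    Returns the accumulated orthogonal factor G (= Q^T) and R = G @ H."""
    R = H.astype(float).copy()
    m = R.shape[1]
    G = np.eye(R.shape[0])
    for j in range(m):
        a, b = R[j, j], R[j + 1, j]
        r = np.hypot(a, b)
        c, s = a / r, b / r
        rot = np.array([[c, s], [-s, c]])
        R[j:j+2, :] = rot @ R[j:j+2, :]     # zero the subdiagonal entry
        G[j:j+2, :] = rot @ G[j:j+2, :]     # accumulate the rotations
    return G, R

# Usage: a random 6 x 5 upper Hessenberg matrix.
rng = np.random.default_rng(1)
H0 = np.triu(rng.standard_normal((6, 5)), k=-1)
G, R = givens_triangularize(H0)             # G @ H0 == R, R upper triangular
```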

Still, a general method can never be optimal. To improve performance yet more, you had better do some more problem-specific work. Go back to your original PDE (if you can) and redesign a more accurate numerical scheme that uses fewer degrees of freedom. With adaptive finite elements, for example, you could use far fewer unknowns: only a few thousand, not millions. Better yet, with high-order finite elements, you could use only a few hundred. Although the assembly is more expensive, it still parallelizes well. In each refinement step, you had better split long edges before short ones. This may make the mesh more regular and accurate. Because it uses fewer edges, it approximates the true solution better than naive global refinement.

Here you may ask: for such a small system, with just a few thousand unknowns, why use an iterative method at all? Why not simply use a direct method? Well, we want our code to be as general as possible and to solve the bigger systems that may arise in future applications. Besides, a direct method is hard to parallelize.

In summary, for a good numerical scheme, use a good preconditioner with vector extrapolation (or GMRES) on top, even in its original classical version. Finally, Sidi’s code is good not only for solving a large sparse linear system, but also for helping diagonalize the Lanczos matrix: finding a few minimal eigenvalues and their eigenvectors. In the Hartree-Fock system, this could help uncover the electronic structure at the few lowest energy levels.
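As an illustration of this last point (a generic Lanczos sketch on a symmetric model matrix, not Sidi’s code), the extreme Ritz values of the projected tridiagonal matrix approximate the extreme eigenvalues long before the subspace dimension reaches the problem size:

```python
import numpy as np

def lanczos_tridiag(A, v, m):
    """Lanczos: project the symmetric matrix A onto an m-dimensional
    Krylov subspace, producing an m x m tridiagonal matrix T whose
    extreme eigenvalues (Ritz values) approximate those of A.  Full
    reorthogonalization is used for numerical safety."""
    n = len(v)
    V = np.zeros((n, m + 1))
    alpha, beta = np.zeros(m), np.zeros(m)
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)   # reorthogonalize
        beta[j] = np.linalg.norm(w)
        V[:, j + 1] = w / beta[j]
    return np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)

# Model problem: 200 eigenvalues spread over [1, 100]; already with m = 60
# the smallest and largest Ritz values match the true extremes closely.
A = np.diag(np.linspace(1.0, 100.0, 200))
T = lanczos_tridiag(A, np.ones(200), m=60)
ritz = np.linalg.eigvalsh(T)
```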

Reviewer: Yair Shapira. Review #: CR146959 (2011-0264)
Categories: Engineering (J.2 ...); Mathematics And Statistics (J.2 ...); Physical Sciences And Engineering (J.2)