Computing Reviews
Impact of reordering on the memory of a multifrontal solver
Guermouche A., L’Excellent J., Utard G. Parallel Computing 29(9): 1191-1218, 2003. Type: Article
Date Reviewed: Jan 23 2004

Guermouche, L’Excellent, and Utard discuss the solution of sparse linear algebraic systems using a multifrontal (direct) solver. Such methods are known for their large memory requirements relative to iterative methods, but they are robust and efficient. This paper examines how the choice of reordering technique affects those memory requirements. The following orderings are compared: approximate minimum degree (AMD); approximate minimum fill (AMF), as implemented in the multifrontal massively parallel sparse direct solver MUMPS; PORD, a hybrid bottom-up and top-down sparse reordering; METIS, a hybrid of multilevel nested dissection and multiple minimum degree; and SCOTCH, a modified version of this hybrid.
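
None of these ordering packages ships with SciPy, but as a rough illustration of the point being compared, namely that the fill-reducing ordering changes the size of the factors, one can run SciPy's SuperLU interface with its built-in orderings (natural, minimum degree, COLAMD) on a small 2D Laplacian. This is only a hedged sketch; it does not use MUMPS or the orderings studied in the paper.

    import scipy.sparse as sp
    from scipy.sparse.linalg import splu

    # A small 2D Laplacian: a typical sparse, symmetric positive definite test matrix.
    n = 40
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
    A = sp.csc_matrix(sp.kronsum(T, T))

    # Compare the factor size (a proxy for memory) under SuperLU's built-in orderings;
    # these stand in for AMD/AMF/PORD/METIS/SCOTCH, which are not available here.
    for ordering in ["NATURAL", "MMD_AT_PLUS_A", "COLAMD"]:
        lu = splu(A, permc_spec=ordering)
        print(f"{ordering:15s} nonzeros in L+U: {lu.L.nnz + lu.U.nnz}")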

Pure nested dissection was not competitive with these methods and was dropped from the comparison. The authors show that the reordering affects both the width and the depth of the assembly tree: METIS and SCOTCH generate wide, well-balanced trees, while the others generate deep trees, with AMD producing the best balanced of these. PORD and AMF require the smallest stack size.
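
The two tree metrics in question are easy to make concrete. The following minimal sketch, using a made-up parent array rather than data from the paper, computes the depth and the maximum width (nodes per level) of a tree given in parent-pointer form:

    from collections import Counter

    def tree_shape(parent):
        """parent[i] is the parent of node i, or -1 for the root."""
        def depth(i):
            d = 0
            while parent[i] != -1:
                i = parent[i]
                d += 1
            return d
        levels = Counter(depth(i) for i in range(len(parent)))
        return max(levels) + 1, max(levels.values())  # (depth, maximum width)

    # Toy tree: node 6 is the root, nodes 2 and 5 are its children.
    parent = [2, 2, 6, 5, 5, 6, -1]
    print(tree_shape(parent))  # (3, 4): three levels, at most four nodes on a level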

The authors develop an algorithm for optimal tree reordering that minimizes the stack memory requirement. This algorithm reduces the requirements of METIS and SCOTCH; for AMD, of course, one cannot expect any gain, since its tree is already deep and balanced. Numerical experiments show that deep, unbalanced trees (AMF and PORD) require less memory than wide, well-balanced ones (METIS and SCOTCH).
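
The flavor of such a traversal optimization can be sketched as follows. This is the classical child-ordering rule usually credited to Liu, under a simplified memory model, not the authors' algorithm: at each node of the assembly tree, process the children in decreasing order of (subtree peak minus contribution block size). The sizes below are invented for illustration.

    def peak_stack(node, children, front, cb, reorder=True):
        """Peak stack size of the subtree rooted at 'node'.

        children[n]: child nodes of n; front[n]: size of n's frontal matrix;
        cb[n]: size of the contribution block n leaves on the stack.
        """
        kids = children.get(node, [])
        peaks = {c: peak_stack(c, children, front, cb, reorder) for c in kids}
        if reorder:  # largest (peak - cb) first
            kids = sorted(kids, key=lambda c: peaks[c] - cb[c], reverse=True)
        stacked, peak = 0, 0
        for c in kids:
            peak = max(peak, stacked + peaks[c])  # while factorizing child c
            stacked += cb[c]                      # c's contribution block stays stacked
        return max(peak, stacked + front[node])   # assembling/factorizing the parent

    # Toy tree: root 0 with three leaf children; all sizes are hypothetical.
    children = {0: [1, 2, 3]}
    front = {0: 10, 1: 50, 2: 30, 3: 20}
    cb = {0: 0, 1: 40, 2: 5, 3: 2}
    print(peak_stack(0, children, front, cb, reorder=False))  # 70 with the natural order
    print(peak_stack(0, children, front, cb, reorder=True))   # 57 after reordering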

The authors also discuss the solution on massively parallel machines, using three variants of MUMPS, and conclude from their numerical experiments that the current scheduling strategy in MUMPS can be improved upon.

Reviewer: Beny Neta. Review #: CR128983 (0406-0689)

Distributed Memories (D.4.2 ... )
Multigrid And Multilevel Methods (G.1.8 ... )
Sparse, Structured, And Very Large Systems (Direct And Iterative Methods) (G.1.3 ... )
Numerical Linear Algebra (G.1.3 )
Partial Differential Equations (G.1.8 )

Other reviews under "Distributed Memories": Date
Techniques for reducing consistency-related communication in distributed shared-memory systems
Carter J., Bennett J., Zwaenepoel W. ACM Transactions on Computer Systems 13(3): 205-243, 1995. Type: Article
Oct 1 1996
A classification of wait-free loop agreement tasks
Herlihy M., Rajsbaum S. Theoretical Computer Science 291(1): 55-77, 2003. Type: Article
Aug 5 2003
Recoverable Distributed Shared Virtual Memory
Wu K., Fuchs W. IEEE Transactions on Computers 39(4): 460-469, 1990. Type: Article
Jun 1 1991
