Computing Reviews
Parallel and vector computing
Leiss E., McGraw-Hill, Inc., New York, NY, 1995. Type: Book (9780070376922)
Date Reviewed: Sep 1 1996

This exquisite book offers practical, sensible, and rather witty coverage of parallel and vector computers. It does not explain architecture in detail, but contrasts the advantages and shortcomings of various parallel and vector architectures with one another and with conventional uniprocessors. The perspective is that of a practitioner who seeks the truth about performance improvement of real-world algorithms when they are executed on parallel systems.

After a classification of parallelism and peak performance in chapter 2 and a survey of commercial vector and parallel hardware in chapter 3, chapter 4 shows how to program such systems so that the available parallelism yields improved performance. Scalability is emphasized here. The key chapter, chapter 5, introduces Fortran 90 and offers a concise, impressive introduction to dependence analysis. It also contrasts automatic with language-driven parallelism. This is a major theme in the book. Chapters 6 through 8 cover relevant topics for a practical user of parallel systems: reduction operations and recurrence relations, the frequently overlooked management of I/O on parallel computers, and benchmarks. The chapter on limits and benchmarks exposes common PR lies and focuses on what should be measured. Chapter 9 offers a look into the future. The appendices cover interesting selected topics on parallel computing, such as fault tolerance for massively parallel systems, and present a study of the CM-2.
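The distinction chapter 6 draws between reduction operations and recurrence relations is worth a concrete sketch. The example below is my illustration, not code from the book: a reduction combines elements in any order and parallelizes well, while a recurrence carries a dependence from one loop iteration to the next, which is exactly what the book's dependence analysis is meant to detect.

```python
import numpy as np

a = np.arange(8, dtype=float)

# Reduction: an order-independent combine; a compiler can vectorize
# or parallelize this freely.
total = a.sum()

# Recurrence: b[i] depends on b[i - 1] (a loop-carried dependence),
# so naive element-wise vectorization would be incorrect.
b = np.empty_like(a)
b[0] = a[0]
for i in range(1, len(a)):
    b[i] = b[i - 1] + a[i]

# The same prefix sum via a specialized library routine.
assert np.allclose(b, np.cumsum(a))
```

Recognizing such recurrences, and replacing them with parallel prefix algorithms where possible, is precisely the kind of restructuring the book advocates.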

The book is too short; a future revision may correct this. The author should have mentioned Michael J. Wolfe’s Optimizing supercompilers for supercomputers [1] and the significant speed record set by the Intel Paragon supercomputer at Sandia National Labs in 1994. Also, trace scheduling should have been discussed as an optimization concept separate from VLIW parallelism. Most important, I missed detail on how Lehmann succeeded in speeding up the seismic application by a factor of 16 on a Cray computer through restructuring.

Everything covered is good and useful, however. The perspective of one who had to struggle to achieve actual speedup on the systems covered is quite refreshing, as is the down-to-earth discussion of performance increase on an N-processor system over a uniprocessor. Happily absent is the wishful expectation that programs would experience an N-fold speedup. Leiss explains in detail why this is generally not possible. He presents in scholarly language, with wit and precision, the hidden, subtle difficulties that lie in efforts to speed up an algorithm on a vector-, pipelined-, distributed-, or shared-memory multiprocessor architecture.
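The standard formalization of why N-fold speedup is unattainable is Amdahl's law, which echoes the book's argument. As a minimal sketch (the function name and sample fraction are mine, not Leiss's): if a fraction p of the work parallelizes and the rest is serial, speedup on n processors is bounded.

```python
def amdahl_speedup(p, n):
    """Amdahl's law: speedup on n processors when a fraction p
    of the program's work is parallelizable."""
    return 1.0 / ((1.0 - p) + p / n)

# Even a 95%-parallel program is capped at 1/0.05 = 20x speedup,
# no matter how many processors are thrown at it.
for n in (4, 16, 64, 1024):
    print(n, round(amdahl_speedup(0.95, n), 2))
```

The serial fraction, not the processor count, quickly dominates: at 95 percent parallel work, 1,024 processors still deliver less than a 20-fold speedup.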

The author discusses, in proper historical context, why FPS and ETA went bankrupt; the book was published too early to comment on the similar sad demise of TMC and KSR. He argues convincingly that vectorization is a mature technology and describes what parallelization needs in order to reach a comparable level of acceptance. The discussion of I/O management is crucial, and the definition of peak performance is quite funny.

Managers considering the purchase of a parallel computer will find this book valuable, as will marketing personnel, who badly need to make more realistic evaluations of their products’ performance capabilities. The work also facilitates an understanding of the performance that can be expected from an algorithm on a particular architecture, and of how much or how little additional performance a vector or parallel computer can deliver. I shall use this text in lectures on high-performance computer architecture.

Reviewer: Herbert G. Mayer. Review #: CR119452 (9609-0639)
1) Wolfe, M. J. Optimizing supercompilers for supercomputers. MIT Press, Cambridge, MA, 1989.
Array And Vector Processors (C.1.2 ... )
Interconnection Architectures (C.1.2 ... )
Parallel Processors (C.1.2 ... )
Parallelism And Concurrency (F.1.2 ... )
Performance of Systems (C.4 )
Other reviews under "Array And Vector Processors":
Array processing machines: an abstract model. van Leeuwen J., Wiedermann J. BIT 27(1): 25-43, 1987. Type: Article. Reviewed: Mar 1 1988
A unified approach to a class of data movements on an array processor. Flanders P. IEEE Transactions on Computers 31(9): 809-819, 1982. Type: Article. Reviewed: May 1 1985
Gracefully degradable processor arrays. Fortes J., Raghavendra C. IEEE Transactions on Computers 34(11): 1033-1044, 1985. Type: Article. Reviewed: Nov 1 1986
