Computing Reviews
Parallel scientific computation : a structured approach using BSP and MPI
Bisseling R., Oxford University Press, Oxford, UK, 2004. 324 pp.  Type: Book (9780198529392)
Date Reviewed: Jun 5 2006

During the last decade, much effort has been devoted to developing parallel programming models that simplify parallel programming. The bulk synchronous parallel (BSP) programming model tries to reduce the gap between parallel hardware and software. BSP assumes a distributed-memory multiprocessor system in which each processor has its own local memory, but can read and write the memory of other processors through one-sided communication. A BSP algorithm consists of a sequence of so-called supersteps, in each of which the processors perform local computations and exchange data, followed by a global barrier synchronization. The BSP programming model is implemented through a library containing a small set of primitives that can be called from conventional languages such as C, C++, or Fortran.
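One attraction of BSP is its simple cost model: a superstep with at most w flops of local work and at most h words communicated by any processor costs w + h·g + l, where g (per-word communication cost) and l (barrier latency) are machine parameters. As a small illustration (not from the book, with made-up parameter values), the total cost of an algorithm is just the sum over its supersteps:

```python
def superstep_cost(w, h, g, l):
    """Cost of one BSP superstep.

    w: maximum local computation (flops) on any processor
    h: maximum number of data words sent or received by any processor
    g: communication cost per data word (in flop time units)
    l: cost of a global barrier synchronization
    """
    return w + h * g + l

# Illustrative machine parameters: g = 4, l = 100 (hypothetical values).
# Two supersteps: (w=1000, h=50) and (w=500, h=200).
total = sum(superstep_cost(w, h, g=4, l=100) for w, h in [(1000, 50), (500, 200)])
print(total)  # 1300 + 1400 = 2700
```

This additive structure is what makes the cost analyses in the book's later chapters tractable: each parallel algorithm is costed superstep by superstep.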

In this book, Bisseling introduces, in a friendly style, the BSP programming model and the use of the BSPlib library. Bisseling also discusses the implementation and performance analysis of some well-known algorithms using BSP. The work is intended to be used as a textbook for a course on parallel programming of numerical algorithms for upper-level undergraduates, graduate students, and researchers. No previous experience in parallel programming is required, although some knowledge of a high-level language such as C is needed.

The book comprises four chapters, followed by three appendices. Chapter 1 is devoted to a general introduction to BSP, with some consideration given to calculating the computational cost of a BSP algorithm and to providing examples of how to build and run a BSP program. Some bibliographic notes that will help the reader find more information are included.

The remaining chapters are devoted to the parallelization of three different, well-known scientific applications. Performance numbers for real systems, extensive bibliographic notes, and several exercises and programming projects are included. Chapter 2 presents a BSP implementation of the widely known LU decomposition. A sequential implementation and its computational cost are shown before proceeding to the development and refinement of a parallel version. Much effort is also devoted to calculating the cost of the parallel version, an approach that is not very common in books presenting a parallel programming model. The chapter concludes with some performance figures on a Cray T3E. Chapter 3 studies the fast Fourier transform algorithm in detail, giving the recursive and nonrecursive sequential solutions to the problem as well as the parallel algorithm. Chapter 4 studies the sparse matrix-vector multiplication problem, paying special attention to the data distribution in order to reduce remote accesses during the parallel execution of the algorithm. Some performance results for a Beowulf cluster are also shown.
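To give a flavor of the sequential baseline from which Chapter 2 starts, the sketch below shows an in-place LU decomposition without pivoting (Doolittle form). This is an illustrative sketch, not the book's code: the book's version adds partial pivoting and then distributes the matrix cyclically over the processors for the parallel stages.

```python
def lu_decompose(a):
    """In-place LU decomposition without pivoting (Doolittle form).

    After the call, the strict lower triangle of `a` holds L (with an
    implied unit diagonal) and the upper triangle holds U, so A = L U.
    """
    n = len(a)
    for k in range(n):
        for i in range(k + 1, n):
            a[i][k] /= a[k][k]                 # multiplier l_ik
            for j in range(k + 1, n):
                a[i][j] -= a[i][k] * a[k][j]   # rank-1 update of trailing block

m = [[4.0, 3.0], [6.0, 3.0]]
lu_decompose(m)
print(m)  # [[4.0, 3.0], [1.5, -1.5]]: L = [[1,0],[1.5,1]], U = [[4,3],[0,-1.5]]
```

In the parallel version, each stage k becomes a short sequence of supersteps: pivot selection, broadcast of the pivot row and multiplier column, and the local rank-1 update, with costs accounted for superstep by superstep.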

The three applications considered are appropriate examples for describing the structured parallel programming model used by BSP. Although it is not the main goal of the book, it would have been useful to include an example of how to program BSP for a more irregular application, where the mapping to the synchronized-superstep programming model is less clear.

The material covered in the appendices includes some auxiliary functions needed to run the examples of the book, a quick reference guide to BSPlib, and an interesting appendix that shows how to develop structured parallel programming using the message passing interface (MPI) instead of BSPlib.

As stated above, the cost analysis of the solutions is an important part of this volume, although it is not necessary to have an in-depth understanding of the details to learn how to develop parallel versions of sequential, scientific algorithms. The book is carefully written and edited. It is an excellent starting point for learning how to write well-structured, parallel scientific programs.

Reviewer:  Diego R. Llanos Review #: CR132873 (0704-0319)
Categories: Parallel Programming (D.1.3); Parallel Architectures (C.1.4)
