Computing Reviews
Applied parallel computing
Deng Y., World Scientific Publishing Co, Inc., Singapore, 2013. 207 pp. Type: Book
Date Reviewed: Feb 27 2014

Parallel computing has become mainstream in the last few years. Parallel programming is no longer optional, from smartphone and tablet apps to web applications and scientific computing. Hence, it comes as no surprise that the ACM and the IEEE Computer Society have jointly released new guidelines for undergraduate degree programs [1], also known as CS2013, that incorporate a new knowledge area for parallel and distributed computing. (The CS2013 report organizes all of computer science (CS) around 18 knowledge areas.)

A wide range of books can be used for learning parallel programming. Some provide just the foundations and are palatable to nontechnical managers. Others delve into particular technologies, such as message passing standards or socket programming, and can be used as annotated reference guides. A third category teaches fundamental techniques for the design of parallel algorithms, and a fourth just describes the most common parallel algorithms used in some particular application domain. Deng’s book, despite its short length, includes a bit of each, which might be a pro or a con depending on what you expect from it. More than a textbook, we could say that Deng has published his annotated course notes, based on more than 20 years of teaching the subject at Stony Brook University.

The first quarter of the book includes the typical introductory chapters on parallel computing, performance metrics, hardware systems, and development software. Moore’s law is illustrated using commercial microprocessors, and the typical architecture of TOP500 supercomputers (http://www.top500.org/) is described. In the chapter on hardware systems, it is shocking that the author still talks about the 45 nanometer (nm) technology from 2008, given that the 22nm technology has been used in memory products since that year, was announced with fanfare by Intel in 2011, and has been included in commercial microprocessors since 2012.
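The exponential growth the book illustrates with commercial microprocessors can be sketched with a back-of-the-envelope projection. The function and the starting figures below are illustrative assumptions, not taken from the book; Moore's law is stated here in its common "doubling roughly every two years" form.

```python
# Back-of-the-envelope illustration of Moore's law: transistor counts
# roughly double every two years. Starting figures are illustrative.
def transistors(start_count, start_year, year, doubling_period=2.0):
    """Project a transistor count forward assuming exponential doubling."""
    return start_count * 2 ** ((year - start_year) / doubling_period)

# e.g., ~42 million transistors in 2000 projects to ~1.3 billion by 2010
projected = transistors(42e6, 2000, 2010)
print(f"{projected:.3g}")  # about 1.34e9
```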

A 20-page chapter on the design of algorithms bridges the introductory chapters to the rest of the book. This is the most informative chapter for those who are new to parallel programming. It describes parallel programming models and provides some examples of collective operations such as broadcast or gather and scatter. It also includes an interesting discussion on mapping tasks to processors in economic terms.
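The scatter/compute/gather pattern that chapter describes can be sketched in a few lines. This is a minimal illustration using Python's multiprocessing pool in place of a real message-passing runtime; the function names and chunking scheme are my own, not the book's.

```python
# A minimal sketch of the scatter -> compute -> gather collective pattern,
# using a process pool instead of a message-passing runtime.
from multiprocessing import Pool

def partial_sum(chunk):
    """Work done by each worker on its scattered chunk."""
    return sum(chunk)

def scatter(data, n_workers):
    """Split the data into roughly equal chunks, one per worker."""
    k, r = divmod(len(data), n_workers)
    chunks, start = [], 0
    for i in range(n_workers):
        end = start + k + (1 if i < r else 0)
        chunks.append(data[start:end])
        start = end
    return chunks

if __name__ == "__main__":
    data = list(range(100))
    with Pool(4) as pool:
        partials = pool.map(partial_sum, scatter(data, 4))  # scatter + compute
    total = sum(partials)  # gather + reduce on the "root"
    print(total)  # 4950
```

In a real MPI program the scatter, gather, and reduction would each be a single collective call, which is exactly the economy the chapter's discussion of collective operations is getting at.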

The second half of the book is organized around some specific areas that constitute the key building blocks for the scientific and engineering applications of parallel computing. Fundamental parallel algorithms are concisely described for linear algebra, differential equations, Fourier transforms, and mathematical optimization. A final uneven chapter on applications in physics and engineering closes this part on parallel scientific computing.

However, the book does not end there. One-third of the book is devoted to its appendices. The first two appendices include the lab notes you would typically provide to students on two of the most popular parallel programming standards: the message passing interface (MPI) and OpenMP for shared-memory parallel programming. The third appendix lists 26 interesting class projects students could do in an undergraduate course on parallel computing. Finally, the fourth appendix includes the code listings for three Fortran programs that use MPI. Fortran, originally developed at IBM in the 1950s, is still widely used for scientific applications, a fact current CS students find mind-boggling.

As you can infer from the previous paragraphs, this book touches on many topics. Given its length, you cannot expect an in-depth treatment of any one of them. Jumps from topic to topic are not always signposted in advance, which feels like watching short TV commercials, and too many issues are left as bursts of telegraphic bullet points, something you expect from slide presentations but not from a textbook.

Despite the aforementioned limitations, a portion of the book’s intended audience might still find it relevant and useful as a short introduction to parallel computing. Its concise descriptions address many important problems, especially for those who lack a solid background in CS and do not have the time to delve into more detailed monographs on parallel algorithms, numerical methods, and scientific computing (something they should plan to do in the future if they are really interested in this wonderful subject).


Reviewer: Fernando Berzal. Review #: CR142046 (1406-0393)
1) The ACM and IEEE Computer Society Joint Task Force on Computing Curricula. Computer Science Curricula 2013: Curriculum Guidelines for Undergraduate Degree Programs in Computer Science (Dec. 20, 2013). http://www.acm.org/education/CS2013-final-report.pdf.
Parallel Programming (D.1.3 ... )
Parallelism And Concurrency (F.1.2 ... )
Other reviews under "Parallel Programming":

How to write parallel programs: a first course
Carriero N. (ed), Gelernter D. (ed), MIT Press, Cambridge, MA, 1990. Type: Book (9780262031714). Date: Jul 1 1992

Parallel computer systems
Koskela R., Simmons M., ACM Press, New York, NY, 1990. Type: Book (9780201509373). Date: May 1 1992

Parallel functional languages and compilers
Szymanski B. (ed), ACM Press, New York, NY, 1991. Type: Book (9780201522433). Date: Sep 1 1993

more...
