Parallel computing has become mainstream in the last few years. From smartphone and tablet apps to web applications and scientific computing, parallel programming is no longer optional. Hence, it comes as no surprise that the ACM and the IEEE Computer Society have jointly released new guidelines for undergraduate degree programs, also known as CS2013, that incorporate a new knowledge area for parallel and distributed computing. (The CS2013 report organizes all of computer science (CS) around 18 knowledge areas.)
A wide range of books can be used for learning parallel programming. Some provide just the foundations and are palatable to nontechnical managers. Others delve into particular technologies, such as message passing standards or socket programming, and can be used as annotated reference guides. A third group teaches fundamental techniques for the design of parallel algorithms, and a fourth category merely describes the most common parallel algorithms used in a particular application domain. Deng’s book, despite its short length, includes a bit of each, which might be a pro or a con depending on what you expect from it. Rather than a textbook, we could say that Deng has published his annotated course notes, based on more than 20 years of teaching the subject at Stony Brook University.
The first quarter of the book includes the typical introductory chapters on parallel computing, performance metrics, hardware systems, and development software. Moore’s law is illustrated using commercial microprocessors, and the typical architecture of TOP500 supercomputers (http://www.top500.org/) is described. In the chapter on hardware systems, it is shocking that the author still talks about the 45 nanometer (nm) technology from 2008, given that the 22nm technology has been used in memory products since that year, was announced with fanfare by Intel in 2011, and has been included in commercial microprocessors since 2012.
A 20-page chapter on the design of algorithms bridges the introductory chapters to the rest of the book. This is the most informative chapter for those who are new to parallel programming. It describes parallel programming models and provides some examples of collective operations such as broadcast, gather, and scatter. It also includes an interesting discussion on mapping tasks to processors in economic terms.
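To give readers unfamiliar with collective operations a flavor of what they mean, here is a toy, single-process sketch of their semantics (this illustration is mine, not the book's; the helper names are hypothetical, and real programs would use a library such as MPI instead):

```python
# Toy illustration of collective-operation semantics on one machine.
# Real MPI code would call the library's broadcast/scatter/gather routines;
# these plain-Python helpers only model what those operations compute.

def broadcast(value, num_procs):
    """The root's value is copied to every process."""
    return [value] * num_procs

def scatter(data, num_procs):
    """The root's list is split into roughly equal chunks, one per process."""
    base, rem = divmod(len(data), num_procs)
    chunks, start = [], 0
    for rank in range(num_procs):
        size = base + (1 if rank < rem else 0)  # early ranks absorb the remainder
        chunks.append(data[start:start + size])
        start += size
    return chunks

def gather(chunks):
    """Each process's chunk is concatenated back together at the root."""
    return [item for chunk in chunks for item in chunk]

if __name__ == "__main__":
    data = list(range(8))
    chunks = scatter(data, 3)
    print(chunks)             # [[0, 1, 2], [3, 4, 5], [6, 7]]
    print(gather(chunks))     # [0, 1, 2, 3, 4, 5, 6, 7]
    print(broadcast(42, 3))   # [42, 42, 42]
```

Note that gather is the inverse of scatter: scattering a list across processes and gathering the chunks back reproduces the original data.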
The second half of the book is organized around some specific areas that constitute the key building blocks for the scientific and engineering applications of parallel computing. Fundamental parallel algorithms are concisely described for linear algebra, differential equations, Fourier transforms, and mathematical optimization. A final uneven chapter on applications in physics and engineering closes this part on parallel scientific computing.
However, the book does not end there. One third of the book is devoted to its appendices. The first two appendices include the lab notes you would typically provide to students on two of the most popular parallel programming standards: the message passing interface (MPI) and OpenMP for shared-memory parallel programming. The third appendix lists 26 interesting class projects students could do in an undergraduate course on parallel computing. Finally, the fourth appendix includes the code listings for three Fortran programs that use MPI. Fortran, which was originally developed at IBM in the 1950s, is still widely used for scientific applications, a fact current CS students find mind-boggling.
As you can infer from the previous paragraphs, this book touches on many topics. Given its length, you cannot expect an in-depth treatment of any one of them. Jumps from topic to topic are not always signposted in advance, which can feel like watching a string of short TV commercials, and one too many issues are left as a burst of telegraphic bullet points, something you expect from slide presentations but not from a textbook.
Despite the aforementioned limitations, a portion of the book’s intended audience might still find it relevant and useful as a short introduction to parallel computing. Its concise descriptions address many important problems, especially for those who lack a solid background in CS and do not have the time to delve into more detailed monographs on parallel algorithms, numerical methods, and scientific computing (something they should plan to do in the future if they are really interested in this wonderful subject).