Computing Reviews
Concurrent programming
Andrews G., Benjamin-Cummings Publ. Co., Inc., Redwood City, CA, 1991. Type: Book (9780805300864)
Date Reviewed: Jun 1 1994
Comparative Review

In the quest for increased performance, computer architects and software developers alike have embraced parallel computing, especially in recent years. Indeed, parallel computing was an essential component of the now-abandoned fifth-generation Japanese computer project and remains central to the current sixth-generation project.

Generally, advances in computer software tend to lag behind those in computer hardware. This lag is especially apparent in parallel computing, where the development of parallel software (languages, compilers, and application programs) is usually quite machine-specific.

Parallel computer hardware ranges from vector supercomputers, through fine-grained single instruction/multiple data (SIMD) array processors, to coarse-grained multiprocessors. Parallel software falls into one of two broad categories: conventional sequential languages with vector/parallel extensions, and new languages with inherent, built-in concurrency support.

Until recently, three of the more notable books on parallel computing were Hwang and Briggs [1], Perrott [2], and Quinn [3]. Since 1990, the number of books published in this field has increased dramatically. The purpose of this review is to provide a critical comparison of recent introductory books on parallel programming. I will not consider specialized books on the programming of specific vector supercomputers, nor on the mathematical foundations (theory) behind parallel computing. This review also will not address edited books of papers presented at parallel computing conferences.

Almasi and Gottlieb

The authors concentrate on highly parallel processing, which reflects their backgrounds at both IBM and New York University. The book is organized into three parts: “Foundations,” “Parallel Software,” and “Parallel Architectures.” Part 1 comprises four chapters. The overview of the field presented in chapter 1 emphasizes that the driving forces for parallel processing stem from both hardware and software. The authors also argue for a repeal of Grosch’s law, which asserts that the best performance for the price will be achieved by using the best uniprocessor. Chapter 2, “Sample Applications,” provides the reader with a feel for real-world parallel computing by discussing typical applications that can benefit from the application of parallel processing techniques. These applications are taken from two areas: science and engineering (numerical computations involving fluid dynamics, weather modeling, aircraft design, and the like) and symbolic processing (databases, AI, and expert systems). Technological constraints and opportunities are covered in chapter 3, especially VLSI. Here the authors rightly point out that “technology alone is not a panacea: wiring, packaging, and power dissipation still exert strong influences on a feasible design.” Chapter 4 deals with computational models (starting with Flynn’s classifications--SISD, SIMD, MISD, and MIMD--and then proceeding to a more theoretical treatment) and selected algorithms.

Part 2, consisting of chapters 5 through 7, examines the software requirements of parallel processing. The first half of chapter 5 discusses the essential constructs needed for parallel execution, while the second half discusses parallel programming languages and environments, which are grouped in terms of control mechanism into the following categories: control driven, pattern driven, data driven (see Table 1 for examples of these three), and demand driven (reduction languages, functional programming, lambda calculus, and the Church-Rosser theorem). Compilers and interpreters are covered in chapter 6, and an example of a parallelizing compiler is included. Chapter 7 discusses how operating systems deal with concurrency, and places it in a historical context by briefly discussing a few practical systems.

Part 3 deals with parallel architectures, which are organized into three main groups: SIMD (single instruction, multiple data), MIMD (multiple instruction, multiple data), and hybrids of these two. Chapter 8 deals with interconnection networks. SIMD machines are covered in chapter 9, with the most attention paid to pipelined vector processors. Representative example architectures (see Table 1) are included, and systolic arrays are also briefly mentioned. Chapter 10 covers MIMD parallel machines, both shared-memory and message-passing. The chapter describes the machines listed in Table 1, including the NYU Ultracomputer (not surprisingly, since Gottlieb is an associate director of this project at New York University). Parallel architectures that cannot be easily classified as either SIMD or MIMD, such as VLIW (very long instruction word), are covered in the last chapter.

The book concludes with an extensive bibliography of 353 entries. While it is comprehensive in scope, it is written in a style that makes it accessible to the novice reader in the field. It is the oldest of the eight books in this review, yet it remains current in its coverage of most topics.

Andrews

The ten chapters in this book are organized into four parts:

  • Part 1, Basic Concepts

    • (1) Sequential Programming

    • (2) Concurrency and Synchronization

  • Part 2, Shared Variables

    • (3) Fine-grained Synchronization

    • (4) Semaphores

    • (5) Conditional Critical Regions

    • (6) Monitors

  • Part 3, Message Passing

    • (7) Asynchronous Message Passing

    • (8) Synchronous Message Passing

    • (9) RPC [Remote Procedure Call] and Rendezvous

  • Part 4, Practice

    • (10) Language Overviews

The “basic concepts” of Part 1 include useful Pascal-like programming notations and logics for sequential and concurrent programming. A systematic method for solving synchronization problems is presented in Part 2, including the application of shared variables, semaphores, and monitors to real-world problems. Part 3 extends this method to distributed systems, which utilize message-passing communication and synchronization. In Part 4, Andrews introduces five concurrent programming languages, and includes a complete program in each language in order to bring home to the reader the concepts introduced in earlier chapters. These example programs are a strong feature of this book: a file difference checker in Turing Plus; a prime number sieve in Occam; the dining philosophers problem in Ada; the traveling salesperson problem in SR; and prime numbers and replicated workers in Linda.

The book concludes with a glossary and an extensive 24-page bibliography. Each chapter ends with an excellent “Historical Notes and References” section and a set of exercises, although no solutions are given. Andrews emphasizes theoretical foundations throughout; indeed, the book begins with a discussion of language notation, predicate logic, proofs of correctness, and program derivation. It is something of a relief to reach chapter 10 and see the theoretical concepts presented earlier come together in the context of real-world applications. The coverage in this area is excellent. Andrews begins each example by discussing the program structure and explaining the elements necessary to solve the problem at hand. All five examples end with a program listing in the language of interest. The chapter concludes with a comparison of the languages just covered and a discussion of their performance characteristics. Chapter 10 is perhaps the strongest feature of this book, but some novice readers could find the theoretical emphasis somewhat daunting.

Ben-Ari

Part 1--chapters 1 through 6--deals with concurrent programming in common memory architectures. The abstraction used throughout is the interleaved execution of sequences of atomic instructions. This model is described in chapter 2, together with the concept of “correctness.” Algorithms based on this model are developed in chapter 3 in order to solve the mutual exclusion problem. This chapter is also important because it discusses possible pathological behaviors of concurrent programs, as well as verification techniques that can be used to prove correctness. The next two chapters cover the classic concurrency primitives, the semaphore and the monitor, mainly in the context of the producer/consumer problem. These primitives are further discussed in the context of the dining philosophers problem in chapter 6.
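
To give a concrete flavor of the producer/consumer treatment, here is a minimal bounded-buffer sketch using counting semaphores. It is an illustrative Python rendering, not code from Ben-Ari's book (which uses Ada-style notation); the buffer size and item counts are arbitrary.

```python
import threading
from collections import deque

BUFFER_SIZE = 4
buf = deque()

empty = threading.Semaphore(BUFFER_SIZE)  # counts free slots
full = threading.Semaphore(0)             # counts filled slots
mutex = threading.Lock()                  # protects the buffer itself

def producer(items):
    for item in items:
        empty.acquire()          # wait for a free slot
        with mutex:
            buf.append(item)
        full.release()           # signal one more filled slot

def consumer(count, results):
    for _ in range(count):
        full.acquire()           # wait for a filled slot
        with mutex:
            item = buf.popleft()
        empty.release()          # signal one more free slot
        results.append(item)

results = []
p = threading.Thread(target=producer, args=(range(10),))
c = threading.Thread(target=consumer, args=(10, results))
p.start(); c.start()
p.join(); c.join()
print(results)   # items arrive in production order
```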

Part 2 is devoted to message-passing distributed systems. Models of systems in which processes execute on physically separate processors are introduced in chapter 7. The next three chapters deal with Ada, Occam, and Linda. All three languages are intended for distributed systems, but support concurrency using radically different primitives. Three algorithms suited to implementation on distributed systems are covered in chapters 11 through 13. They are mutual exclusion, termination (the ability to recognize global states), and the Byzantine generals problem (used to illustrate fault tolerance considerations in distributed systems).
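
As an illustration of the asynchronous message-passing style that underlies these chapters, the following is a minimal Python analogue (not code from Ben-Ari) in which two processes, modeled here as threads, communicate over a channel modeled as a queue.

```python
import threading
import queue

channel = queue.Queue()   # an unbounded, asynchronous channel

def sender():
    for n in range(5):
        channel.put(n)        # send never blocks (asynchronous)
    channel.put(None)         # sentinel: no more messages

def receiver():
    while True:
        msg = channel.get()   # receive blocks until a message arrives
        if msg is None:
            break
        print("received", msg)

s = threading.Thread(target=sender)
r = threading.Thread(target=receiver)
s.start(); r.start()
s.join(); r.join()
```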

Part 3 covers the implementation of concurrent programs. Chapter 14 discusses the implementation of concurrent programs on a uniprocessor. An overview of topics more usually found in operating systems texts is included here. The implementation of Ada, Occam, and Linda on multiprocessor architectures is discussed in chapter 15. Chapter 16 deals with concurrent programming in real-time systems.

Appendix A is a quick tour of the Ada language. Appendix B expands the fragmentary programs discussed in the text into executable versions. Appendix C presents Ada emulations of the Occam and Linda concurrent programming primitives. Finally, an implementation of the distributed algorithms of Part 3 is included in Appendix D.

Each chapter concludes with a “Further Reading” section, in which the relevant references from the 56 listed in the bibliography are cited. Moreover, in Part 1 and chapters 11 through 13, end-of-chapter exercises are included. Each algorithm is clearly presented, and the book is easy to follow throughout. This book is an excellent introduction to concurrent programming, and is written with what I consider to be just the right mixture of theoretical foundation and placement within an appropriate framework. It is the perfect length for an introductory book.

Burns and Davies

Ten chapters and two appendices make up this book. Chapter 1 begins by introducing the basic nature of concurrency, and ends by describing the Pascal-FC language, a functionally concurrent extension of Pascal devised by the authors as a teaching tool to accompany the book. Chapter 2 covers the representation, structure, granularity, and behavior of processes. Chapter 3 addresses interprocess synchronization and communication. The authors point out that process interaction falls into three classes: independence, competition, and cooperation. The classic multiple update, readers-and-writers, and dining philosophers problems are used to illustrate the difficulties associated with competition and cooperation, and to demonstrate the need for more sophisticated concurrency primitives. One such primitive, message passing, is covered in the following two chapters. Chapter 4 describes the rendezvous model of Occam, while chapter 5 discusses the remote invocation (extended rendezvous) model of Ada. Semaphores are the focus of chapter 6. Burns and Davies make the point that while semaphores are a powerful low-level construct, they do not provide the high-level abstraction needed to genuinely support concurrent programming.
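
The dining philosophers problem mentioned above is easy to sketch; the version below is an illustrative Python rendering, not the authors' Pascal-FC code, and it avoids deadlock by the standard device of acquiring forks in a fixed global order.

```python
import threading

NUM_PHILOSOPHERS = 5
forks = [threading.Lock() for _ in range(NUM_PHILOSOPHERS)]

def philosopher(i, meals=3):
    left, right = i, (i + 1) % NUM_PHILOSOPHERS
    # Acquire forks in ascending index order to rule out circular wait.
    first, second = min(left, right), max(left, right)
    for _ in range(meals):
        with forks[first]:
            with forks[second]:
                pass  # eating
        # thinking happens outside the critical section

threads = [threading.Thread(target=philosopher, args=(i,))
           for i in range(NUM_PHILOSOPHERS)]
for t in threads: t.start()
for t in threads: t.join()
print("all philosophers finished without deadlock")
```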

The authors discuss mutual exclusion in chapter 7, and pay particular attention to critical regions (monitors). After reviewing semaphores, monitors, synchronous message passing, and remote invocation, Burns and Davies introduce a unified set of language features in chapter 8, first from the perspective of Pascal-FC, then with reference to the Ada 9X language proposals. The authors’ unified language model incorporates remote invocation for direct interprocess communication and a passive entity (resource) that provides mutual exclusion. A requeue facility is also included, which facilitates the implementation of certain parallel algorithms. Examples using this unified model follow in chapter 9; specific programming examples are taken from embedded systems, discrete event simulation, dataflow models, Linda-type structures, and process farms.
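
A monitor-like passive resource of the kind described above can be approximated in many languages by pairing a lock with condition variables. The sketch below is an illustrative Python analogue (a hypothetical bounded counter), not the authors' Pascal-FC resource syntax.

```python
import threading

class BoundedCounter:
    """Monitor-style object: all access is serialized by one internal lock."""
    def __init__(self, limit):
        self._limit = limit
        self._value = 0
        self._lock = threading.Lock()
        self._not_full = threading.Condition(self._lock)
        self._not_zero = threading.Condition(self._lock)

    def increment(self):
        with self._lock:                      # entry protocol: mutual exclusion
            while self._value == self._limit:
                self._not_full.wait()         # block until the guard holds
            self._value += 1
            self._not_zero.notify()

    def decrement(self):
        with self._lock:
            while self._value == 0:
                self._not_zero.wait()
            self._value -= 1
            self._not_full.notify()
```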

The book concludes with chapter 10, a postscript in which the key observations from the preceding chapters are summarized: semaphores and condition variables are too low-level for general programming; message passing methods (incorporating guards) represent a more abstract and unified means of programming communication; remote invocation is an effective high-level message-passing abstraction; and monitor-like structures are an effective way of encapsulating shared resources. The two appendices address Pascal-FC: the first constitutes a language reference manual, and the second discusses language implementation issues.

This excellent introductory book is eminently accessible to the novice reader, and benefits from the efforts of earlier books in this field.

Lawson

This book is organized into three parts: “Fundamental Issues,” “Selected Paradigms,” and “Case Studies.” The fundamental issues of Part 1 are identified as processes, means of communication, and method of synchronization. These issues are introduced with the aid of the Behavior Mapping Resource model of parallel processing systems. Chapter 2 summarizes the general properties of real-time systems, while parallel processing receives an extended introduction in chapter 3. Application domains are treated in chapter 4, with specific attention devoted to digital signal processing (namely, pipelines, loop unfolding, systolic arrays, and wavefronts), image processing (the two-dimensional fast Fourier transform), and graphics processing. Chapter 5 deals with the necessity for dependability and responsiveness in industrial real-time systems. Fault tolerance, redundancy, error recovery, and hardware and software verification are included in this chapter. Part 1 concludes with a chapter that sets parallel processing in a historical context.

Part 2 views processes in terms of granularity, means of communication, and method of synchronization. Processing models are covered in chapter 7. Occam and Ada are used to illustrate CSP, guarded commands, Petri nets, and hardware description languages such as VHDL. Chapter 8 covers direct-mapped architectures, and includes a discussion of scheduling and resource allocation. A short chapter on VLIW architectures follows. Hardware considerations are taken further in the remaining two chapters: chapter 10 deals with processing element array architectures, and chapter 11 addresses multiprocessor and multicomputer architectures. Part 2 ends with a short chapter on dataflow architecture.

The strength of Lawson’s work is Part 3, in which three case studies are described at length. Chapter 13 discusses a signal processor component of the Giraffe radar system, developed by Ericsson. A single-chip linear array picture processor--Integrated Vision Products’ LAPP 1100--is described in chapter 14. The Ericsson SDS80 MIMD avionics system is presented in chapter 15. Chapter 16 covers several research and development projects.

Each chapter concludes with a list of relevant references. A “Literature Review” appears at the end of the book, and includes professional publications from IEE, IEEE, ACM, and others; conferences, workshops, and symposia; theoretical contributions; and books, survey articles, case studies, and special issues of journals. Lawson’s annotations accompany each reference cited in the literature review. The book concludes with a glossary. One irritating feature is its inadequate index; it needs to be expanded considerably.

Table 1: Objective Data

Almasi and Gottlieb
  Machines: ILLIAC IV, IBM GF11, Connection Machine, Manchester Dataflow, NYU Ultracomputer, BBN Butterfly, IBM RP3
  Languages: FORTRAN, LISP, C, Concurrent Pascal, CSP, Occam, Prolog, Val, Id, LAU, SISAL
  Examples: Numerous (good grounding in practical problems)
  Emphasis: Highly parallel programming

Andrews
  Machines: N/A
  Languages: Turing Plus, Occam, Ada, SR, Linda
  Examples: Several (plus 5 large programs)
  Emphasis: Theoretical, formal, algorithms

Ben-Ari
  Machines: Transputer, hypercubes
  Languages: Ada, Occam, Linda
  Examples: Mutual exclusion, unbounded buffer, producer-consumer, dining philosophers, matrix multiplication (Occam, Linda)
  Emphasis: Software, algorithms

Burns and Davies
  Machines: N/A
  Languages: Pascal-FC, Occam, Ada
  Examples: Numerous (in Pascal-FC)
  Emphasis: Software, primitives

Lawson
  Machines: ILLIAC IV, DAP, Staran, MPP, Lucas, CM2, MasPar
  Languages: Ada, Occam, C, FORTRAN, LISP, Parallel Pascal
  Examples: Few, apart from 3 case studies
  Emphasis: Real-time systems

Lewis and El-Rewini
  Machines: iPSC, nCube, CM, BBN Butterfly
  Languages: Occam, C++, SS/1, C*, Strand
  Examples: FFT (Occam), computing pi (hypercube), convolution in signal processing (Strand), cloud cover from satellite images, Task Grapher, SLALOM benchmark (C), Gaussian elimination (C)
  Emphasis: Theoretical/scientific (numerical programming)

Snow
  Machines: N/A
  Languages: Concurrent Pascal, Occam, Concurrent Euclid, Mesa, Path Pascal, Ada, Pascal-m
  Examples: Few
  Emphasis: Distributed computing (operating systems)

Williams
  Machines: CM, DAP, MPP
  Languages: Pascal, Simula, Modula-2, DAP-FORTRAN, Actus, CMLisp, Linda, Concurrent Pascal, Occam, LOTOS
  Examples: Few
  Emphasis: Theoretical (models)

Lewis and El-Rewini

Twelve chapters and three appendices compose this book:

  • What Is Parallel Computing?

  • Measures of Performance

  • Parallel Processors

  • Shared-memory Parallel Programming

  • Distributed-memory Parallel Programming

  • Object-oriented Parallel Programming

  • Data Parallel Programming

  • Functional Dataflow Programming

  • Scheduling Parallel Programs

  • Loop Scheduling

  • Parallelizing Serial Programs

  • Parallel Programming Support Environments

  • SLALOM Benchmark

  • Data Parallel C Implementation of Gaussian Elimination

  • PPSE Solution to π Calculation

The authors state their particular orientation in the Preface, namely that the essence of parallel computing is hardware plus software plus applications.

Chapter 1 introduces parallel computing and discusses the various paradigms by way of a banking analogy. Chapter 2 discusses fundamental limitations of parallel computing, as a prelude to performance measurement and benchmarks. Chapter 3 covers parallel hardware in terms of its interconnection network topology. This classification first discriminates between distributed and shared memory architectures, before further refinement into hypercube, 3D mesh, bus, or multiport memory.

Chapter 4 covers shared-memory parallel programming, and includes discussions of multitasking, critical sections, deadlocks, and spin locks. Distributed memory parallel programming is covered in the following chapter. Communicating sequential processes are introduced, leading to an introduction of Occam. An example then implements fast Fourier transforms in Occam. The coverage then switches to programming hypercubes. A brief discussion of a computer vision example follows, and a more extensive example involves the calculation of π on the Intel iPSC/2 and nCube. Chapter 6 introduces object-oriented parallel programming. The chapter first summarizes the basic concepts of OOP (abstract data types, classes, inheritance, objects, and hierarchy) and then discusses parallel programming in C++.
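
The replicated-worker computation of π mentioned above carries over to any environment with a pool of processes. The following is a hedged Python approximation of the idea (a process pool on one machine rather than an iPSC/2 or nCube), not code from the book; it integrates 4/(1+x²) over [0,1] by the midpoint rule, splitting the strips among the workers.

```python
from multiprocessing import Pool

INTERVALS = 1_000_000
WORKERS = 4

def partial_sum(worker_id):
    # Each worker sums an interleaved subset of the strips.
    h = 1.0 / INTERVALS
    s = 0.0
    for i in range(worker_id, INTERVALS, WORKERS):
        x = h * (i + 0.5)
        s += 4.0 / (1.0 + x * x)
    return s * h

if __name__ == "__main__":
    with Pool(WORKERS) as pool:
        pi = sum(pool.map(partial_sum, range(WORKERS)))
    print(pi)   # approximately 3.14159265...
```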

Chapter 7 deals with data parallel programming. The Connection Machine and C* illustrate the basic concepts. The chapter emphasizes the design of appropriate data parallel algorithms and presents four applications: computer graphics line drawing, VLSI cell placement (simulated annealing), finite difference problems, and computer vision. Chapter 8 covers functional dataflow programming. It introduces and/or graphs, then presents the Strand language, with convolution as an application. Scheduling is the concern of the next two chapters. Chapter 9 discusses various scheduling taxonomies. Static scheduling, optimal scheduling, and scheduling heuristics are illustrated by way of an example--the determination of cloud cover from satellite data. The chapter concludes with an introduction to Task Grapher, a practical scheduling tool. A detailed coverage of loop scheduling follows in chapter 10.

Parallelizing serial programs is covered in chapter 11, again in a formal manner. The chapter discusses definitions of and tests for data dependence before explaining parallelization optimization techniques (loop fusion, coalescing, distribution, interchange, shrinking, unrolling and skewing, and node splitting). The book concludes with a chapter on parallel programming support environments (CASE tools); examples include Aspar/Express, Schedule/Build, PICL/ParaGraph, Linda, Code/Rope, EPA, and pRETS. Problems for discussion and solution and a list of references appear at the end of each chapter. This book is more comprehensive than some of the other books included in this review. Nevertheless, it remains accessible to the novice reader.
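
As a schematic illustration of one of the loop transformations mentioned above (not taken from the book), loop fusion merges adjacent loops that traverse the same index range, reducing loop overhead and improving locality; the data-dependence test must confirm that each fused iteration uses only values produced in the same or earlier iterations.

```python
n = 1000
a = [0.0] * n
b = [0.0] * n

# Before fusion: two separate passes over the same index range.
for i in range(n):
    a[i] = i * 2.0
for i in range(n):
    b[i] = a[i] + 1.0

# After fusion: one pass; b[i] depends only on a[i] from the same
# iteration, so the dependence test permits the merge.
for i in range(n):
    a[i] = i * 2.0
    b[i] = a[i] + 1.0
```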

Snow

The origin of this book is an undergraduate operating systems course. The author assumes readers are proficient in programming in a sequential high-level language such as Pascal. Seven chapters make up the book: “Introduction to Concurrency,” “Processes and the Specification of Concurrency,” “Communication Between Processes,” “High-level Concurrency Constructs: Shared Data,” “High-level Concurrency Constructs: Message Passing,” “Languages for Concurrency,” and “Implementation of a Concurrency Kernel.” Each chapter includes exercises, and the book concludes with a modest yet excellent annotated bibliography of 30 references. The introductory chapter makes a distinction between real and pseudo-concurrency (time division multiplexing). Chapter 2 presents alternate specifications of concurrency. Snow discusses concurrency from the perspective of three different levels of abstraction: the statement level, the procedure level, and the program level. Chapter 3 deals with communication between processes, and introduces shared variables, critical sections, interlocking, mutual exclusion, and semaphores.
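
The need for mutual exclusion that Snow motivates in chapter 3 can be demonstrated in a few lines (an illustrative Python sketch, not Snow's notation): without a lock, concurrent read-modify-write updates of a shared counter can interleave and lose increments, while a critical section guarded by a lock keeps the count exact.

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(times):
    global counter
    for _ in range(times):
        counter += 1          # read-modify-write: a race without a lock

def safe_increment(times):
    global counter
    for _ in range(times):
        with lock:            # critical section: at most one thread inside
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(100_000,))
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)   # always 400000; substituting unsafe_increment may lose updates
```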

The next two chapters are devoted to high-level concurrency constructs: shared memory in chapter 4 and message passing in chapter 5. The chapter on “Languages for Concurrency” reveals a bias (not surprisingly) toward Pascal derivatives, especially Pascal-m and Occam. See Table 1 for the languages mentioned in this chapter. Snow concludes with an implementation of a concurrency kernel for an MC68000 platform, and assumes that a hypothetical Pascal machine is available on the MC68000. In providing this case study, the author nicely brings together the concurrency constructs introduced in chapters 4 and 5. Snow ends chapter 7 by pointing out how the kernel would differ if it were implemented in both Ada and Occam.

Williams

The value of this book is that while it is theoretically oriented, it remains accessible to the novice reader. Clear, concise programming examples illustrate basic concepts, constituting a good introduction to the field. The book is divided into two parts: “Categories of Parallel Programming” (chapters 2 to 8) and “Transformation Techniques” (chapters 9 to 11). In chapter 1, Williams introduces programming models for parallel processing using a quaint (and effective) “kitchen sink drama.” This example is followed by Flynn’s classifications.

Part 1 begins with sequential processing. Pascal is used as a point of reference for introducing coroutines, Simula, and Modula-2. Chapter 3 deals with array processing (SIMD). After introducing general aspects of array processing, this chapter describes three illustrative array architectures (see Table 1). A brief description of languages suitable for array programming (DAP-FORTRAN, Actus, and CMLisp) follows. Chapter 4, on pipeline processing, covers pipe priming, arithmetic pipelines, instruction pipelining, and systolic arrays. Chapter 5 is devoted to shared memory processing, where parallel language constructs such as fork (join), cobegin (coend), semaphores, and monitors are covered. An example written in Concurrent Pascal illustrates these constructs. Message passing and communicating sequential processes are treated in chapter 6. Chapter 7 discusses other programming models, specifically functional, logic, and object-oriented.
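
The chapter 5 constructs mentioned above have direct analogues in most thread libraries; the fragment below is an illustrative Python equivalent, not Williams's Concurrent Pascal example, in which starting the threads corresponds to cobegin (fork) and joining them corresponds to coend (join).

```python
import threading

def task_a():
    print("task A running")

def task_b():
    print("task B running")

# cobegin
threads = [threading.Thread(target=task_a), threading.Thread(target=task_b)]
for t in threads:
    t.start()     # fork: both tasks now run concurrently
# coend
for t in threads:
    t.join()      # join: wait until both tasks complete
print("both tasks done")
```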

Part 2 is the book’s real strength; it is an excellent introduction to the topic. Chapter 9 discusses detecting parallelism. Williams explains how examination of tree representations of programs as well as studying data usage and flow can reveal parallelism. Chapter 10 explains how the parallelism model can be altered. Chapter 11 is “Summary, Conclusions, and the Future.” A bibliography of 60 references completes the book.

Two notable features of this book are the boxed introductions at the start of each chapter and the end-of-chapter reviews.

Comparison

My preferences among these eight introductory books on parallel computing are Ben-Ari and Burns and Davies. Both are written at a level that is easily accessible to the novice reader, yet they cover all the basics of parallel computing. Both include good coverage of classical problems such as mutual exclusion, dining philosophers, unbounded buffer, producer-consumer, and monitors. Ben-Ari further discusses matrix multiplication (in both Occam and Linda) and the Byzantine generals problem. Burns and Davies discuss the readers-and-writers problem, as well as Pascal-FC implementations of both a simple alarm clock and the simulation of semaphores using monitors. Implementation issues are considered in the last part of Ben-Ari (namely, single processors, multiprocessors, and real-time programming). While Burns and Davies do not cover hardware implementations, they make up for this deficiency by providing (at nominal cost from the University of Bradford, UK) a portable Pascal-FC compiler for PCs (written in Pascal).

Having stated my preferences, I would not hesitate to recommend any of these books as a starting point for studying parallel computing. More significant are the numerous books that have been deliberately omitted from this comparative review. This leads me to conclude on a cautionary note: do not be misled by the title of any book purporting to discuss parallel computing; examine it closely to discover its actual contents before you purchase it.

Reviewer: John Fulcher (Review #: CR124342)
1) Hwang, K. and Briggs, F. A. Computer architecture and parallel processing. McGraw-Hill, New York, 1984.
2) Perrott, R. H. Parallel programming. Addison-Wesley, Reading, MA, 1987.
3) Quinn, M. J. Designing efficient algorithms for parallel computers. McGraw-Hill, New York, 1987.
This review compares the following items:

  • Concurrent programming (Andrews)
  • Concurrent programming (Burns and Davies)
  • Concurrent programming (Snow)
  • Introduction to parallel computing (Lewis and El-Rewini)
  • Parallel processing in industrial real-time applications (Lawson)
  • Highly parallel computing (Almasi and Gottlieb)
  • Principles of concurrent and distributed programming (Ben-Ari)
  • Programming models for parallel systems (Williams)