Computing Reviews
Parallel functional languages and compilers
Szymanski B. (ed), ACM Press, New York, NY, 1991. Type: Book (9780201522433)
Date Reviewed: Sep 1 1993

One of the claims made for functional programming is that the lack of side effects makes these languages ideal for writing programs to be run on parallel architectures. Functional programs written for sequential machines can be run on multiprocessor machines with few modifications; the programming system will exploit the parallelism inherent in the program. Thus, the same program can be run on many different systems, without the complexities normally associated with parallel programs. This book is a collection of papers exploring this premise. Each paper describes a different language and is written by the developer of that language. The description includes a suggested implementation for parallel machines. A unifying thread through the papers is the use of LU decomposition of matrices as an example. An introductory chapter by Szymanski states the goal of the book, and a concluding chapter by all the authors contrasts the approaches taken.
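To make the shared example concrete, here is a minimal sketch of the book's unifying problem, written in Python rather than in any of the surveyed languages: Doolittle LU decomposition without pivoting. The function name and structure are illustrative only; the point is that each entry is a pure function of previously computed entries, which is what the surveyed languages exploit for implicit parallelism.

```python
def lu_decompose(a):
    """Return (lower, upper) with a == lower * upper, for a small dense
    square matrix given as a list of lists. Doolittle scheme, no pivoting,
    so a zero pivot raises ZeroDivisionError."""
    n = len(a)
    lower = [[0.0] * n for _ in range(n)]
    upper = [[0.0] * n for _ in range(n)]
    for i in range(n):
        # Row i of upper: each entry depends only on rows < i, so all
        # columns j in this loop are independent and could run in parallel.
        for j in range(i, n):
            upper[i][j] = a[i][j] - sum(lower[i][k] * upper[k][j]
                                        for k in range(i))
        lower[i][i] = 1.0
        # Column i of lower: likewise independent across rows j.
        for j in range(i + 1, n):
            lower[j][i] = (a[j][i] - sum(lower[j][k] * upper[k][i]
                                         for k in range(i))) / upper[i][i]
    return lower, upper
```

Because the inner computations are side-effect free, a parallelizing system is free to schedule each row-and-column sweep concurrently without changing the result.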

The first language, described by Ashcroft, Faustini, and Jagannathan, is Lucid. Lucid differs from other functional languages in that it has an implicit state that changes over time and location (it is “intensional”). Each variable can be viewed as a function over this state. An implementation technique called “eduction,” in which concurrency is derived implicitly from the program, is described.

The second language, described by Szymanski, is an equational language called EPL (Equational Programming Language), meant for use in scientific programming. The main data type is the irregular array, rather than the lists found in most functional languages. Equations may be annotated with processor expressions to specify explicit concurrency, but implicit concurrency is also available.

The third language, described by Skedzielewski, is Sisal. Sisal superficially looks like an imperative language, but the rule that each variable can only be assigned to once (thus allowing assignments to be viewed as definitions) removes side effects. Concurrency is implicit, as in Lucid.
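The single-assignment rule can be illustrated outside Sisal itself. The following sketch (Python, with a hypothetical function name; this is not Sisal syntax) follows the same discipline: each name is bound exactly once, so every "assignment" reads as a definition, and no statement can observe a variable changing.

```python
def norm_squared(xs):
    # An imperative version would accumulate into a mutable total.
    # Here each intermediate value gets its own one-time name instead,
    # so the two lines below are equations, not updates.
    squares = [x * x for x in xs]   # bound once, never mutated
    total = sum(squares)            # bound once, defined from `squares`
    return total
```

With no rebinding, the compiler can compute the elements of `squares` in any order, or all at once, which is exactly the freedom Sisal's rule is meant to grant.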

The fourth paper, by Hudak, describes an annotated variant of the functional language Haskell. The annotations do not change the semantics of the language, but are used to tell the compiler how to break the program up for concurrent execution. These annotations could be used with any similar language.
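The claim that annotations can direct concurrency without changing meaning can be shown by analogy (this is not Hudak's notation; the function names are invented). In the Python sketch below, the "annotation" is the choice of executor: because `work` is pure, the sequential and concurrent runs must produce identical results.

```python
from concurrent.futures import ThreadPoolExecutor

def work(n):
    return n * n + 1          # pure: no side effects, so order is irrelevant

def run_sequential(xs):
    return list(map(work, xs))

def run_parallel(xs):
    # Same denotation as run_sequential; only the schedule differs.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(work, xs))   # pool.map preserves input order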

The fifth paper, by Ekanadham, describes Id, a language designed for dataflow machines, for which concurrency can be found implicitly. The sixth paper, by Chen, Choo, and Li, describes Crystal. The Crystal compiler devotes considerable effort to mapping programs onto the target machine, but it provides constructs for incorporating domain-specific knowledge to direct the compiler in this mapping.

The last paper, by Sarkar, describes PTRAN, a parallelizing FORTRAN system. Its relevance to the topic of the book is questionable. It is related to the rest of the papers in that a language without explicit concurrency constructs is presented and some of the same techniques for identifying concurrency are used. Perhaps the chapter is meant to demonstrate the extra work involved in identifying and removing extraneously specified sequencing in imperative programs.

A large portion of the book is necessarily spent giving a summary of each of the seven languages surveyed. Even so, the language descriptions are sketchy and the language styles are fairly diverse. Further, as with any collection of papers by a variety of authors, the writing styles vary, resulting in some chapters that are easy to read and others that require work. I therefore suggest this book only for someone who already has a good knowledge of a couple of functional languages.

The amount of concurrency that is possible in an execution depends on the algorithm used and the machine architecture. While theoretically each of these languages should be able to provide a large amount of the possible concurrency, only the Crystal paper has statistics showing how much of the potential is realized. The reason for this lack may be the large variation in maturity of these languages: they range from university research projects to languages in which large programs have been written in an industrial environment. Thus, the book does not provide performance information useful for selecting one of these languages. Rather, it surveys many of the language and implementation options available. I recommend it to anyone designing or implementing a functional language for a parallel machine.

Reviewer: E. Schneider
Review #: CR115667
Parallel Programming (D.1.3)
Compilers (D.3.4)
Concurrent, Distributed, And Parallel Languages (D.3.2)
Parallel Processors (C.1.2)
Applicative (Functional) Programming (D.1.1)
Other reviews under "Parallel Programming":
How to write parallel programs: a first course
Carriero N. (ed), Gelernter D. (ed), MIT Press, Cambridge, MA, 1990. Type: Book (9780262031714)
Jul 1 1992
Parallel computer systems
Koskela R., Simmons M., ACM Press, New York, NY, 1990. Type: Book (9780201509373)
May 1 1992
Data-parallel programming on MIMD computers
Hatcher P., Quinn M., MIT Press, Cambridge, MA, 1992. Type: Book (9780262082051)
Sep 1 1993
more...
