Computing Reviews
Optimizing memory usage in the polyhedral model
Quilleré F., Rajopadhye S. ACM Transactions on Programming Languages and Systems 22(5): 773-815, 2000. Type: Article
Date Reviewed: Nov 1 2001

Quilleré and Rajopadhye address the problem of memory size optimization when compiling systems of affine recurrence relations (SAREs) defined over polyhedral index domains. The formalism for doing so, detailed in the paper, is referred to as the polyhedral model; it is presented as being useful for loop parallelization and systolic synthesis. Alpha, a functional language developed by researchers addressing similar problems of parallel execution, is explained and used throughout the paper to illustrate key points and to support feasibility claims.
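To make the object of study concrete, here is a minimal sketch of what a SARE over a polyhedral index domain looks like, written in single-assignment style in Python. The particular recurrence and domain (a Pascal's-triangle computation over the triangle 0 <= j <= i < N) are illustrative choices of my own, not an example taken from the paper:

```python
def pascal_sare(N):
    """Evaluate a toy SARE in single-assignment style.

    Every index point (i, j) in the polyhedral domain
    {(i, j) : 0 <= j <= i < N} is written exactly once, and each
    equation depends on other points only through affine functions
    of the indices -- the class of programs the polyhedral model
    handles. (Illustrative example; not from the paper.)
    """
    Y = {}
    for i in range(N):
        for j in range(i + 1):
            if j == 0 or j == i:
                Y[i, j] = 1  # boundary equations of the system
            else:
                # Affine dependences: (i, j) -> (i-1, j-1) and (i-1, j).
                Y[i, j] = Y[i - 1, j - 1] + Y[i - 1, j]
    return Y
```

Direct compilation of such a system allocates one memory cell per index point; the memory functions studied in the paper look for projections of the index space that let these cells be reused.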

Section 2 characterizes Alpha by showing the source-to-source transformations from a functional program to imperative C output. Section 3 presents the mathematical foundation for the analyses and proofs that appear later in the text, focusing on the lifetime function. Section 4 defines the constraints that a memory function must satisfy, preparing for the pseudo-projections presented in section 5; these pseudo-projections allow significant memory savings. Section 6 proves tight bounds on the number of linearly independent projection vectors, while section 7 discusses further, second-level memory savings. Section 8 gives examples, and section 9 is intended to show the applicability to loop parallelization. Section 10 summarizes related work, pointing out weaknesses in SISAL and Haskell, while section 11 contains the conclusions.

Although direct compilation of functional languages such as Alpha requires unacceptably large storage, the authors show how this prohibitive requirement can be reduced. For example, the typical 2-D storage requirement for a local object can be scaled down to a small memory space, often a single cell or machine register. This is exemplified with the Fibonacci function. The core contribution of the paper is a generalization of this memory reduction and the accompanying mathematical formalization of how and when this reduction is possible. The authors contend that “the primary criterion for optimality is the number of linearly independent projection vectors.”
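The Fibonacci case mentioned above can be sketched as follows: direct single-assignment evaluation keeps one cell per index point, while a modulo-2 memory function (a projection of the 1-D index space) shrinks storage to two cells. This is a hand-written illustration of the idea, assuming the natural schedule i = 2, 3, ..., n; it is not the paper's Alpha-generated code:

```python
def fib_full(n):
    """Direct single-assignment evaluation: one cell per index point.

    Requires n >= 1. Storage grows linearly with n.
    """
    F = [0] * (n + 1)
    F[1] = 1
    for i in range(2, n + 1):
        F[i] = F[i - 1] + F[i - 2]
    return F[n]


def fib_projected(n):
    """Same computation under the memory function sigma(i) = i mod 2.

    The lifetime of F[i] ends once F[i + 2] has been computed, so the
    projection onto two cells preserves every value that is still live.
    Requires n >= 1.
    """
    F = [0, 1]  # cells for sigma(i) = 0 and sigma(i) = 1
    for i in range(2, n + 1):
        F[i % 2] = F[(i - 1) % 2] + F[(i - 2) % 2]
    return F[n % 2]
```

Both functions compute the same values; only the storage mapping differs, which is exactly the kind of reduction the paper formalizes and generalizes to multidimensional domains.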

There are notable weaknesses in the paper. For one, it is far too long for the volume and type of findings it reports. It is also heavily loaded with mathematical derivations, even when the point being proven is neither difficult nor essential. More importantly, section 9 promises to show how the polyhedral model applies to parallel code generation, but that promise is not fulfilled. After all, the real goal of multiprocessing research is the simultaneous use of multiple computing resources for fast execution, not recovering memory wasted by a particular (single-assignment or functional) compute model. Several key ideas announced in the introduction are never developed; the relevance to systolic synthesis and the adaptability to other areas of computing, for example, are only sketched. Automatic code emission for systolic systems is hard, so some principles for parallel code emission would have been welcome and would have fit well with the subject matter. The greatest disappointment is the authors' assumption, from the outset, that the parallel schedule for an algorithm is determined a priori, before the Alpha transformations begin.

Despite these shortcomings, the paper is a must-read for researchers and developers of parallel compute systems that use single-assignment or functional language models. However, given the enormous hunger for memory in functional languages, the paper actually scares me away and directs me toward more conventional, imperative programming environments.

Reviewer:  Herbert G. Mayer Review #: CR125517 (0111-0412)
Array And Vector Processors (C.1.2 ...)
Compilers (D.3.4 ...)
Other reviews under "Array And Vector Processors":
Array processing machines: an abstract model. van Leeuwen J., Wiedermann J. BIT 27(1): 25-43, 1987. Type: Article. Reviewed: Mar 1 1988
A unified approach to a class of data movements on an array processor. Flanders P. IEEE Transactions on Computers 31(9): 809-819, 1982. Type: Article. Reviewed: May 1 1985
Gracefully degradable processor arrays. Fortes J., Raghavendra C. IEEE Transactions on Computers 34(11): 1033-1044, 1985. Type: Article. Reviewed: Nov 1 1986
