Computing Reviews
Run-time support for the automatic parallelization of Java programs
Chan B., Abdelrahman T. The Journal of Supercomputing 28(1): 91-117, 2004. Type: Article
Date Reviewed: Dec 16 2004

This paper describes an automatic runtime parallelization method for Java programs. The basic approach is to create a separate asynchronous thread for each method invocation in a Java program, and to use these threads as the basis for parallel execution. A shared memory system is assumed as the underlying machine model. To ensure the correctness of thread execution, so-called data access summaries are created and associated with every method in the program. This is done in two steps. In the first step, the compiler generates symbolic access paths for all global objects that are passed as actual parameters to a method. Not all information for these access paths is known at compile time, so part of their construction must be deferred to runtime. In the second step, the access paths are used to construct data access summaries that specify the read and write sets of a method. At runtime, these summaries are used to build a registry that records the shared variables and the threads that use them. Everything is implemented in zJava, which works within the context of a standard Java virtual machine (JVM).
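
To make the described mechanism concrete, here is a minimal sketch, in plain Java, of what a method-level access summary and an asynchronous invocation could look like. All class and method names are illustrative assumptions, not the actual zJava runtime interface; the registry check that preserves sequential semantics is sketched further below.

import java.util.Set;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// A data access summary: the sets of symbolic access paths a method may
// read and write; parts of these paths are only completed at runtime,
// once the actual parameters are known.
class AccessSummary {
    final Set<String> reads;
    final Set<String> writes;
    AccessSummary(Set<String> reads, Set<String> writes) {
        this.reads = reads;
        this.writes = writes;
    }
}

// Wraps an ordinary method call in an asynchronous task. A real runtime
// would first register the summary and delay the task until it conflicts
// with no thread that is already running (see the registry sketch below).
class AsyncInvoker {
    private final ExecutorService pool = Executors.newCachedThreadPool();

    Future<?> invoke(AccessSummary summary, Runnable methodBody) {
        return pool.submit(methodBody);
    }
}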

The paper is very well written, and a concise description of every step is given. A small set of benchmarks is provided to illustrate the effectiveness of the method. There are a number of observations one can make. First of all, the unit of parallelism is the function (or method) call. This requires that all parallelism in the program also be expressed at the level of function calls, which is usually not the case in scientific codes, where parallelism is typically identified at the loop level. To use the authors' system, an application must be restructured to fit the method-calling mechanism. The matrix multiply example in the paper is a nice illustration of this: the Java program encapsulates rows and columns into objects just for this purpose. This implies that if a programmer writes a Java program for matrix multiplication in the usual way, the zJava approach will not work.
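
For illustration, the following sketch shows the kind of restructuring referred to above (hypothetical code, not the paper's actual example): matrix multiplication is recast so that each result row is produced by a separate method invocation on a row object, which is the granularity a method-level parallelizer can exploit.

// Illustrative only: parallelism is moved from the loop level to the
// method-call level by giving each result row its own object and method.
class Row {
    final double[] values;
    Row(int n) { values = new double[n]; }

    // Compute this row of C = A * B from one row of A and all of B.
    void multiply(double[] aRow, double[][] b) {
        for (int j = 0; j < values.length; j++) {
            double sum = 0.0;
            for (int k = 0; k < aRow.length; k++) {
                sum += aRow[k] * b[k][j];
            }
            values[j] = sum;
        }
    }
}

class Matrix {
    final Row[] rows;
    Matrix(int m, int n) {
        rows = new Row[m];
        for (int i = 0; i < m; i++) rows[i] = new Row(n);
    }

    // Each call to rows[i].multiply(...) writes disjoint data, so every
    // invocation could be turned into an independent thread.
    void multiply(double[][] a, double[][] b) {
        for (int i = 0; i < rows.length; i++) {
            rows[i].multiply(a[i], b);
        }
    }
}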

A second observation (also made by the authors) is that the registry itself is a shared data structure that can only be accessed atomically. This is a source of serialization, and, together with other overheads in the runtime system, may seriously degrade performance. The paper provides some measurements of the induced overhead, but the authors were not able to break down the various sources of this overhead, due to the lack of appropriate profiling tools. This is a pity, because with a centralized component such as the registry, one would like to know the intrinsic limits to the scalability of the proposed solution. Another observation is that the generation of some data summaries and other transformations were done by hand, because these parts have not been implemented yet. Although this is sometimes acceptable in a research context, it carries the danger of overlooking the possibly complicated compiler analysis needed to perform these transformations automatically.
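
To make the scalability concern concrete, the sketch below (again hypothetical, not the paper's implementation) shows the kind of centralized, synchronized registry this observation refers to: every thread must enter the same monitor to record and clear its read and write sets, so this bookkeeping executes serially regardless of the number of available processors.

import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// A centralized registry of the read and write sets of running threads.
// All operations funnel through the same lock, so they execute one at a
// time no matter how many processors are available.
class Registry {
    private final Map<String, Integer> activeWriters = new HashMap<>();
    private final Map<String, Integer> activeReaders = new HashMap<>();

    // Block until this summary conflicts with no running thread, then record it.
    synchronized void acquire(Set<String> reads, Set<String> writes)
            throws InterruptedException {
        while (conflicts(reads, writes)) {
            wait();
        }
        for (String p : writes) activeWriters.merge(p, 1, Integer::sum);
        for (String p : reads)  activeReaders.merge(p, 1, Integer::sum);
    }

    // Remove this summary and wake threads waiting on its variables.
    synchronized void release(Set<String> reads, Set<String> writes) {
        for (String p : writes) activeWriters.merge(p, -1, Integer::sum);
        for (String p : reads)  activeReaders.merge(p, -1, Integer::sum);
        notifyAll();
    }

    private boolean conflicts(Set<String> reads, Set<String> writes) {
        for (String p : writes) {
            if (activeWriters.getOrDefault(p, 0) > 0
                    || activeReaders.getOrDefault(p, 0) > 0) return true;
        }
        for (String p : reads) {
            if (activeWriters.getOrDefault(p, 0) > 0) return true;
        }
        return false;
    }
}

As the number of concurrent method invocations grows, the time spent inside this single monitor becomes the fixed serial fraction that bounds achievable speedup.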

Reviewer: H. Sips   Review #: CR130541 (0505-0580)
Parallel Programming (D.1.3)
Concurrent, Distributed, and Parallel Languages (D.3.2)
Java (D.3.2)
Optimization (D.3.4)
Parallelism and Concurrency (F.1.2)
Language Classifications (D.3.2)
