The authors describe an algorithm that takes a sequential, structured FORTRAN program as input and produces parallel call statements for subroutines that can execute concurrently. The presentation emphasizes identifying the subroutines that can execute in a synchronous parallel mode.
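The transformation the algorithm targets is the classic fork/join pattern: two calls with no dependence between them are started in parallel and rejoined at a synchronization point. A minimal sketch, using Python threads in place of FORTRAN parallel-call statements and with illustrative subroutine names:

```python
import threading

def sub_a(results):
    # Stand-in for an independent FORTRAN subroutine
    results["a"] = sum(range(100))

def sub_b(results):
    # A second subroutine with no data dependence on sub_a
    results["b"] = max(range(100))

# Sequential form:  CALL SUB_A(...) followed by CALL SUB_B(...)
# Parallel form the restructurer would emit: start both, then wait.
results = {}
t1 = threading.Thread(target=sub_a, args=(results,))
t2 = threading.Thread(target=sub_b, args=(results,))
t1.start()
t2.start()
t1.join()   # synchronization point: both calls must
t2.join()   # complete before execution continues
```

The join calls correspond to the synchronization primitives the restructured program must insert after the parallel region.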
The algorithm incorporates many standard compiler analysis and optimization techniques: analysis of call and program dependence graphs, inlining, intraprocedural constant propagation, and data dependence analysis of array indices. The information gathered is used to identify which subroutines can be restructured to incorporate parallel tasks and synchronization primitives. Only procedure calls with identical control dependences are candidates, for the same reasons that apply to sequential control statements.
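The data dependence analysis of array indices mentioned above is typically built on tests such as the well-known GCD test: two affine accesses A[a*i + b] and A[c*j + d] can touch the same element only if gcd(a, c) divides d - b. A minimal sketch (the function name is illustrative, not from the paper):

```python
from math import gcd

def gcd_test(a, b, c, d):
    """GCD dependence test for accesses A[a*i + b] and A[c*j + d].

    The equation a*i + b = c*j + d has an integer solution only if
    gcd(a, c) divides d - b.  Returns False when a dependence is
    provably impossible, True when one must be assumed.
    """
    return (d - b) % gcd(a, c) == 0

# A[2*i] vs. A[2*j + 1]: even vs. odd indices, never overlap
print(gcd_test(2, 0, 2, 1))  # False: no dependence possible
# A[2*i] vs. A[4*j + 2]: gcd(2, 4) = 2 divides 2
print(gcd_test(2, 0, 4, 2))  # True: dependence must be assumed
```

A conservative analyzer treats a True result as a dependence, which is one reason the identification of parallelizable calls is expensive: inexact tests force sequential execution whenever independence cannot be proven.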
The reference list covers the important advances in compiler code optimization techniques. However, the extensive analysis required to identify synchronizable subroutine calls, together with the substantial overhead of library calls for synchronization management, makes a performance increase highly unlikely.
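The overhead concern is easy to demonstrate: when the parallelized subroutine body is small, the cost of spawning and synchronizing a task dwarfs the work itself. A rough illustration (using Python threads as a stand-in for the synchronization library calls; absolute numbers will differ, but the ordering is robust):

```python
import threading
import time

def tiny():
    pass  # stand-in for a very small subroutine body

N = 200

# Direct sequential calls
t0 = time.perf_counter()
for _ in range(N):
    tiny()
direct = time.perf_counter() - t0

# The same calls, each wrapped in spawn-then-wait synchronization
t0 = time.perf_counter()
for _ in range(N):
    t = threading.Thread(target=tiny)
    t.start()
    t.join()
spawned = time.perf_counter() - t0

# Spawn/join overhead dominates by orders of magnitude
print(spawned > direct)
```

Only subroutines with enough work per call to amortize this overhead are worth restructuring, which narrows the set of profitable candidates further than the dependence analysis alone suggests.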