Computing Reviews
A survey on compiler autotuning using machine learning
Ashouri A., Killian W., Cavazos J., Palermo G., Silvano C. ACM Computing Surveys 51(5): 1-42, 2019. Type: Article
Date Reviewed: Dec 31 2019

Most modern compilers contain an optimization phase that tries to optimize the emitted machine code with respect to different objectives. This phase is usually heuristic, and the heuristics can typically be selected or influenced by compiler flags. But finding the best combination of flags is tedious, and in some situations the result performs worse than the unoptimized code.

Java systems often use just-in-time compilation and take data from the currently running bytecode interpretation into account. All these approaches are encompassed by the term “autotuning.” In recent years, several researchers have been experimenting with applying machine learning methods to better guide this optimization phase, and this survey provides a categorization and overview of current approaches.

The paper starts by explaining the problem, and provides an overview of the current state of autotuning and the organization of the survey. Essentially, the survey categorizes the research material across different dimensions that are then elaborated in the sections that follow.

The first dimension divides the material according to the general optimization problem being addressed. The first of these is the selection problem: choosing an optimization rule from a set of applicable rules. The second is the phase-ordering problem: the order in which applicable optimization rules should be applied.
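The gap in difficulty between the two problems can be made concrete by counting candidate configurations. A toy Python calculation (the pass names are hypothetical, chosen only for illustration):

```python
from itertools import permutations

# Hypothetical optimization passes, for illustration only.
passes = ["inline", "unroll", "vectorize", "gvn", "licm"]

# Selection problem: each pass is either applied or not -> 2^n subsets.
selection_space = 2 ** len(passes)

# Phase-ordering problem: the order of application matters as well.
# Applying all n passes exactly once already gives n! orderings.
ordering_space = len(list(permutations(passes)))

print(selection_space)  # 32
print(ordering_space)   # 120
```

Even for five passes the ordering space is almost four times the selection space, and the gap grows factorially, which is why the survey treats the two problems separately.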

The next dimension distinguishes which method is used to analyze the application (static analysis, dynamic analysis, and so on). Then it investigates different machine learning methods, clustered into supervised learning methods, unsupervised learning methods, and reinforcement learning.
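As a sketch of the supervised flavor (all feature vectors, flag strings, and numbers below are invented for illustration), a nearest-neighbor model can map static program features to the best flag setting previously observed for a similar program:

```python
import math

# Hypothetical training data: (feature vector, best flags observed).
# Features might be, e.g., (loop count, branch count, memory ops).
training = [
    ((10.0, 2.0, 5.0), "-O2 -funroll-loops"),
    ((1.0, 8.0, 1.0), "-O1"),
    ((6.0, 1.0, 9.0), "-O3 -ftree-vectorize"),
]

def predict_flags(features):
    """Return the flag set of the nearest training program (1-NN)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, flags = min(training, key=lambda item: dist(item[0], features))
    return flags

print(predict_flags((9.0, 2.0, 6.0)))  # -O2 -funroll-loops
```

Real systems in the survey replace the 1-NN lookup with richer models, but the shape is the same: features in, optimization decision out.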

The fourth dimension considers the different types of predictions used, such as clustering, speed-up predictors, and sequence predictors. The following dimension addresses the way the search space of possible compiler optimizations is explored.
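A minimal sketch of such exploration, assuming a toy stand-in for real measurement: random search samples flag subsets and keeps the configuration with the best predicted speed-up (the flag names, weights, and interaction are made up; a real autotuner would compile and time the program instead):

```python
import random

flags = ["inline", "unroll", "vectorize", "gvn"]

def measured_speedup(subset):
    # Stand-in for compiling and timing; real autotuners measure here.
    weights = {"inline": 1.2, "unroll": 1.1, "vectorize": 1.5, "gvn": 1.05}
    speedup = 1.0
    for f in subset:
        speedup *= weights[f]
    # Toy interaction effect: unrolling without vectorizing hurts.
    if "unroll" in subset and "vectorize" not in subset:
        speedup *= 0.9
    return speedup

def random_search(trials=200, seed=0):
    rng = random.Random(seed)
    best, best_s = frozenset(), 1.0
    for _ in range(trials):
        subset = frozenset(f for f in flags if rng.random() < 0.5)
        s = measured_speedup(subset)
        if s > best_s:
            best, best_s = subset, s
    return best, best_s

best, s = random_search()
print(sorted(best), round(s, 3))
```

The surveyed work swaps this naive sampler for genetic algorithms, design-space pruning, or model-guided search, but all share this evaluate-and-keep-the-best loop.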

The sixth dimension covers which architecture (embedded, workstation, high-performance computing) and which compiler (GCC, LLVM, Java JIT, and so on) are targeted. The final dimension is the influence of the paper.

The paper closes with a discussion and an extensive bibliography. This survey is an excellent starting point for anyone interested in this area, which will certainly grow as new and better machine learning techniques become available.

Due to the nature of a survey, the authors cannot provide an in-depth look at the techniques; however, they manage to bring across the general ideas and characteristics, and with the help of the categorizations it should be easy for readers to find the right papers to delve deeper.

Reviewer: Markus Wolf. Review #: CR146822 (2005-0112)
Compilers (D.3.4)
Learning (I.2.6)
Introductory And Survey (A.1)
