Computer science has long differed from other scientific disciplines in that the reproducibility of experimental methods and results is not strongly emphasized in its scholarly publications. One consequence is that derivative works requiring the direct reproduction of others' results are quite rare. Anyone who has tried to directly reproduce another's results has likely experienced confusion, frustration, and perhaps even disappointment at how difficult the process can be.
This survey by Madougou et al. is especially compelling because the authors attempt to empirically evaluate not one but 12 different approaches to performance modeling specific to graphics processing unit (GPU) computing. They use consistent, well-understood benchmark applications on relatively modern hardware to assess the efficacy of each model and weigh its pros and cons. For each model, they evaluate accuracy, implementation effort, level of abstraction, the hardware knowledge required of the user, and the insights it can provide.
The primary contribution of this paper is a thorough categorization of the landscape of modern GPU performance models, accompanied by a simple goal-based taxonomy for performance modeling. The authors are to be commended for their diligence in reproducing others' work. This work is valuable for GPU programmers of any level seeking maximum performance from their applications on specific hardware. The conclusion that the ideal GPU performance model does not yet exist is not entirely unexpected, but it does offer researchers in this area some guidance on the properties such an ideal model must possess.