High-profile bugs have increased the urgency of designing good program analysis frameworks that can detect such bugs before the programs are released into the wild. Luckily, we possess a great framework for the systematic design of analyses: abstract interpretation (AI). Unfortunately, a straightforward implementation does not scale to realistic programs.
A simple but powerful observation is at play in the framework presented here: even very large programs tend to modify only a small number of memory locations in any given block of code. If we can take advantage of this “sparsity,” our analyses ought to scale much better. Of course, taking this observation, turning it into sound theory, and then implementing it requires a great deal of careful work. This is exactly the kind of work that this paper describes.
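To make the observation concrete, here is a toy sketch of my own (not the authors' framework, and deliberately simplified): a "dense" analysis rebuilds the entire abstract state at every statement, while a "sparse" one touches only the locations a statement actually writes, so its cost per statement is independent of how many locations are tracked.

```python
def dense_step(state, stmt):
    # Dense: copy the whole abstract state, even for unchanged locations.
    # Cost per statement is O(|all tracked locations|).
    new_state = dict(state)
    var, value = stmt
    new_state[var] = value
    return new_state

def sparse_step(state, stmt):
    # Sparse: update only the single location this statement defines.
    # Cost per statement is O(1), regardless of how many locations exist.
    var, value = stmt
    state[var] = value
    return state

# A block that writes just two of the many tracked locations.
block = [("x", 1), ("y", 2)]

# 1000 tracked locations, all initially mapped to the abstract value 0.
sparse_state = {f"v{i}": 0 for i in range(1000)}
dense_state = dict(sparse_state)

for stmt in block:
    sparse_state = sparse_step(sparse_state, stmt)
    dense_state = dense_step(dense_state, stmt)

# Both analyses compute the same result; only their costs differ.
assert sparse_state == dense_state
```

The names and abstract domain here are invented for illustration; the paper's actual framework, of course, must also handle joins at control-flow merges, pointers, and soundness arguments, which is where the careful work lies.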
The reader should be warned that the results are extremely technical. There are helpful examples throughout to illustrate the most important points, two sections on how to design various kinds of analyses on top of this framework, a helpful section on implementation, and a lucid section on the experimental evaluation of the authors' implementation on a wide variety of programs. The authors cleverly relegated all proofs to appendices. Nevertheless, anyone not fully versed in the lore of abstract interpretation will find this paper extremely challenging. But for those who do know AI, it is well worth reading.
I was disappointed not to find a link to the authors’ software anywhere; it is past time that all papers evaluating an implementation provide the source code electronically for independent evaluation. It is also slightly disappointing that the proofs are not automated, but perhaps mainstream CS has not quite reached that stage yet. Lest the reader of this review misunderstand, let me be quite clear: this is a superb paper. Well written, well structured, and smoothly bridging theory and practice, it could serve as a model for writing solid CS papers.