Concurrency remains a hard problem. Efficient concurrency is even harder. But what if there were a setting where writing correct concurrent programs was significantly easier, at a very modest cost in efficiency? Too good to be true, right?
But this is exactly what the authors show for Haskell and similar languages, which naturally encourage minimal use of shared heap mutation. The key guarantee in this setting is sequential consistency (SC), a very natural memory model to program against. The authors modify GHC, the main Haskell compiler, to enforce SC. They then benchmark their implementation extensively in a variety of situations, and convincingly show not only that their solution works, but that its overhead is remarkably small.
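To make the guarantee concrete, consider the classic store-buffering litmus test, sketched here in Haskell (my own illustration, not an example from the paper): each thread writes one shared `IORef` and then reads the other. Under SC, at least one thread must observe the other's write, so the outcome `(0, 0)` is forbidden; a weaker memory model could permit it.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Data.IORef (newIORef, readIORef, writeIORef)

main :: IO ()
main = do
  x <- newIORef (0 :: Int)
  y <- newIORef (0 :: Int)
  done1 <- newEmptyMVar
  done2 <- newEmptyMVar
  -- Thread 1: write x, then read y.
  _ <- forkIO $ do writeIORef x 1; r <- readIORef y; putMVar done1 r
  -- Thread 2: write y, then read x.
  _ <- forkIO $ do writeIORef y 1; r <- readIORef x; putMVar done2 r
  r1 <- takeMVar done1
  r2 <- takeMVar done2
  -- Under SC, (0, 0) can never be printed; under a relaxed model,
  -- store/load reordering could make both reads return 0.
  print (r1, r2)
```

Enforcing SC means ruling out exactly the relaxed outcome above, without requiring the programmer to insert explicit fences.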
One sometimes finds, buried behind a technically accurate title, a superb paper. While not a fundamental breakthrough, this paper is nevertheless scholarship at its best: well written, carefully motivated, based on extremely precise (and fully spelled out) theory, implemented in a real setting, and aggressively benchmarked in an adversarial manner (the authors go out of their way to show worst-case results). And as is increasingly standard, the full source is provided. They also go beyond this, providing preloaded virtual machines to allow independent verification. I will use this paper as an exemplar for my graduate students.
Unsurprisingly, such work did uncover a couple of bugs in GHC (as documented in the paper). While the paper says that these would be fixed by GHC 8.2, it turns out that the bug fixes were already implemented in version 8.0.2, as a bit of spelunking in the public bug database reveals.
Any referee depressed by all of the rotten papers they’ve been forced to read can gain a respite by reading this great paper, to remind them that there are excellent papers out there, whether or not they have any particular interest in Haskell or concurrency.