Computing Reviews

May-happen-in-parallel analysis for actor-based concurrency
Albert E., Flores-Montoya A., Genaim S., Martin-Martin E. ACM Transactions on Computational Logic 17(2): 1-39, 2015. Type: Article
Date Reviewed: 04/28/16

When we run a program compiled by a modern compiler, or in a browser, what is it that makes the program execute reasonably quickly, without using unreasonable amounts of space, or, if we are running on a smartphone, without using too much energy? Partly it is the increasing efficiency of microprocessors themselves--at least until a few years ago--and partly the increased sophistication of compile-time analysis and the optimizations that it underpins. We have, perhaps, reached a fork in the road. Raw processor speeds are no longer increasing; instead, more cores are available. Programming for multicore with traditional languages, and with traditional approaches to concurrency such as threads sharing the same memory space, is proving to be difficult.

To get a sense of the value of actor-based concurrency, it is worth looking back at the work in Ericsson’s computer science lab in the late 1980s to find the ideal language for programming complex telecom switches. These are systems that need to be fault tolerant (in the face of hardware and software failure), highly available, and capable of handling thousands of simultaneous calls. No existing language was suitable, so Erlang was born; at the heart of Erlang is actor-based concurrency. This was not because systems were going to run on parallel hardware, but because the share-nothing, lightweight, communicating processes provided the ideal abstraction and design framework for communications-intensive systems. Roll forward to the present day and Erlang is still supporting such systems, most notably as the infrastructure powering the WhatsApp messaging application.

In this work, Albert et al. investigate a model in which invoking a task on one actor can transitively generate tasks on other actors, which in turn may transitively generate further tasks. Their aim is to answer the question of when particular tasks may be run in parallel. By analyzing method-by-method interactions--a local analysis--they are then able to infer task- and application-level information. This, in turn, can be an essential input into other analyses, such as those circumscribing resource consumption.
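To make the may-happen-in-parallel (MHP) question concrete, here is a minimal, hypothetical Java sketch (not the authors' formal calculus or their ABS tool; the class and task names are invented for illustration). Each actor is modeled as a single-thread executor, so tasks posted to the same actor are serialized, while tasks posted to different actors may overlap in time.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative sketch only: each "actor" is a single-thread executor, so two
// tasks on the same actor never run at the same time, whereas tasks on
// different actors may happen in parallel.
public class MhpSketch {
    static final ExecutorService actorA = Executors.newSingleThreadExecutor();
    static final ExecutorService actorB = Executors.newSingleThreadExecutor();

    public static void main(String[] args) throws Exception {
        // Task t1 on actor A transitively generates task t2 on actor B.
        CompletableFuture<Void> t1 = CompletableFuture.runAsync(() -> {
            System.out.println("t1 running on actor A");
            // Asynchronous call: t2 is queued on actor B and may run in
            // parallel with the remainder of t1, so MHP(t1, t2) holds.
            CompletableFuture.runAsync(
                () -> System.out.println("t2 running on actor B"), actorB);
            System.out.println("t1 still running on actor A");
        }, actorA);

        // t3 is posted to actor A, so the single-thread executor serializes
        // it after t1: t1 and t3 can never happen in parallel.
        CompletableFuture<Void> t3 = CompletableFuture.runAsync(
            () -> System.out.println("t3 running on actor A"), actorA);

        CompletableFuture.allOf(t1, t3).get();
        actorA.shutdown();
        actorB.shutdown();
    }
}
```

Roughly speaking, a static MHP analysis of this program would report, without running it, that t2 may happen in parallel with both t1 and t3, while t1 and t3 may not, because they share an actor.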

While this powerful and elegant analysis is presented for a theoretical calculus, the work is also transferred to a prototype tool for an abstract behavioral specification (ABS) system, which in turn can generate code for practical languages such as Java and Scala. It is to be hoped that developers of language implementations will in due course incorporate work such as this in their systems.

Reviewer: Simon Thompson
Review #: CR144365 (1607-0523)
