The authors propose rule-driven service coordination middleware inspired by chemical processes. Their hypothesis is that scientific applications executed in a distributed environment can benefit from properties of the execution infrastructure such as parallelism, autonomic control, and nondeterminism. The proposed strategy consists of defining the set of rules that governs the application workflow, creating the web services that embed those rules, and then starting execution.
The model itself, inspired by chemical processes, is very similar to the dataflow model, in which data availability triggers execution; in this respect, the approach offers little novelty. Moreover, the model does not account for inherent characteristics of distributed systems, such as communication delays and failures, load imbalance across the system, and changes in service availability and configuration. These are significant limitations that affect the viability of this kind of model.
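The parallel with dataflow can be made concrete. The following minimal sketch (my own illustration, not the authors' middleware) shows the chemical style of coordination: rules consume "molecules" (data items) from a shared multiset and inject their products, a rule may fire only when all of its inputs are present, and ready rules are chosen nondeterministically. The rule names and the toy workflow are invented for illustration.

```python
import random

def make_rule(inputs, action):
    """A rule pairs the molecules it requires with a function
    producing its product molecules."""
    return {"inputs": inputs, "action": action}

def run(solution, rules, rng=random.Random(0)):
    """Apply rules until no rule can fire (the solution is inert)."""
    while True:
        # A rule is ready when the solution holds all of its inputs.
        ready = [r for r in rules
                 if all(solution.count(m) >= r["inputs"].count(m)
                        for m in set(r["inputs"]))]
        if not ready:
            return solution
        rule = rng.choice(ready)           # nondeterministic scheduling
        for m in rule["inputs"]:
            solution.remove(m)             # consume reactants
        solution.extend(rule["action"]())  # inject products

# Toy workflow: two independent "services" whose outputs are joined.
rules = [
    make_rule(["raw1"], lambda: ["a"]),         # service 1
    make_rule(["raw2"], lambda: ["b"]),         # service 2
    make_rule(["a", "b"], lambda: ["result"]),  # join step
]
print(run(["raw1", "raw2"], rules))  # -> ['result'], whatever order fires
```

As in dataflow, execution is driven purely by data availability; note that nothing in this picture models message delays, failures, or changing service availability, which is exactly the gap noted above.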
The paper presents a reference architecture with three different implementations: one centralized, one based on the tuple-space concept, and a third following a peer-to-peer model. Three applications are run on these implementations, as well as on two systems selected from related work [1,2]. Execution time is the only criterion used to discuss the value of the model; the cost of coordination overhead and whether resources are allocated and used efficiently are not analyzed, which weakens the evaluation.
In summary, the paper promotes a rule-based middleware model inspired by chemical processes. The model has some merit, but it would need additional mechanisms to manage more dynamic distributed execution environments.