This paper proposes a method for saving energy by exploiting the mismatch in energy consumption across cloud servers processing the same task. The authors model the optimization problem as a priced timed automaton, where the price represents the power consumed in each state of the cloud server and time represents the duration spent in that state. Tasks and their dependencies are modeled as a directed acyclic graph (DAG). The authors then define rules to reduce the size of the automaton and present an algorithm that generates the automaton from the specifics of the cloud server and the tasks. Finally, they propose a second algorithm that finds minimum-energy paths in the resulting automaton.
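To make the core idea concrete for the authors' revision: the minimum-energy search can be viewed as a shortest-path problem over the automaton, where each state carries a power draw (the price) and each edge cost is power multiplied by dwell time. The following is a minimal sketch of that view, not the authors' actual algorithm; the state names, power values, and durations are hypothetical.

```python
import heapq

# Hypothetical automaton: each state has a power draw (the "price", in watts),
# and each transition records how long the server dwells in the source state.
power = {"idle": 1.0, "low": 3.0, "high": 8.0, "done": 0.0}

# transitions[state] = list of (next_state, dwell_time_in_seconds)
transitions = {
    "idle": [("low", 2.0), ("high", 1.0)],
    "low":  [("done", 4.0)],
    "high": [("done", 1.5)],
    "done": [],
}

def min_energy(start, goal):
    """Dijkstra over the automaton; edge cost = power(source) * dwell time."""
    best = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        energy, state = heapq.heappop(heap)
        if state == goal:
            return energy
        if energy > best.get(state, float("inf")):
            continue  # stale heap entry
        for nxt, dur in transitions[state]:
            cost = energy + power[state] * dur
            if cost < best.get(nxt, float("inf")):
                best[nxt] = cost
                heapq.heappush(heap, (cost, nxt))
    return float("inf")

print(min_energy("idle", "done"))  # → 13.0 (the high-power path is cheaper here)
```

In this toy instance the short, high-power path (1.0·1 + 8.0·1.5 = 13 J) beats the long, low-power one (1.0·2 + 3.0·4 = 14 J), which illustrates why the trade-off is nontrivial.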
The idea of energy saving in data centers has been around for a while, and I believe the goals of this work closely resemble that line of research, much of which builds on the resource-allocation literature. The literature survey could be more extensive and cover more of these related works.
Another suggestion for the authors is to consider the online case: in an actual data center, tasks arrive in real time, so extensions of this work to real-time scenarios would be interesting.
A final concern is the experimental setting; it would be insightful if the authors described the methodology for choosing their simulation parameters and discussed how the implementation would behave under real data-center workloads.