Cloud scheduling plays an important role in making cloud resources available by satisfying requests and service-level agreements (SLAs). In this paper, the authors present an interesting shift in focus: virtual machine (VM) placement for infrastructure as a service (IaaS) and VM profiling for platform as a service (PaaS).
IaaS represents a basic cloud service model in which cloud providers offer physical and virtual machines, storage, firewalls, load balancers, and networks. In the PaaS model, providers deliver an entire computing platform that mostly includes operating systems, programming language execution environments, databases, and Web servers.
A series of experiments was carried out to evaluate virtualized workloads in a multicore setting. The authors present the performance limitations of current schedulers caused by cache and network sharing, and then describe two innovative methods that address these challenges: an IaaS scheduler that “determines how to map requested virtual resources onto available physical resources,” and a VM-level scheduler that “chooses the most adequate VM profiles” at the PaaS level. These results are based on two experiments: the first, run on a single machine, focused on the effect of core pinning; the second, run on a cluster, examined workload distribution.
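To make the IaaS-level decision concrete, the following is a minimal sketch of mapping requested virtual resources (vCPUs) onto available physical cores. The host names, core counts, and the simple first-fit policy are illustrative assumptions of this review, not the authors' actual algorithm; the point is only that the mapping keeps a VM's vCPUs together on one node, the kind of spatial decision the paper studies.

```python
def first_fit_placement(requests, hosts):
    """Map each VM request (name, n_vcpus) onto free cores of the first
    host that can hold the whole VM, keeping its vCPUs on one node so a
    cache-sharing VM is never split across machines."""
    free = {host: list(range(n_cores)) for host, n_cores in hosts.items()}
    placement = {}
    for vm, n_vcpus in requests:
        for host, cores in free.items():
            if len(cores) >= n_vcpus:
                placement[vm] = (host, cores[:n_vcpus])  # pin to these cores
                free[host] = cores[n_vcpus:]
                break
        else:
            raise RuntimeError(f"no host can fit {vm}")
    return placement

# Two 4-core hosts; note vm-b skips node0 (only 2 cores left) for node1.
hosts = {"node0": 4, "node1": 4}
requests = [("vm-a", 2), ("vm-b", 3), ("vm-c", 2)]
print(first_fit_placement(requests, hosts))
```

A real scheduler would also weigh cache topology and network locality, which is exactly where the authors show round-robin and first-fit policies fall short.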
One important contribution of this paper is that it identifies the performance degradation that arises in the common case of “populating nodes in a round-robin fashion, until all cores are assigned by a local scheduler.” The authors argue that “declaring usage modes at request submission” improves scheduling decisions and performance with respect to “the spatial distribution of VMs within a cluster and within a node.” Thus, scheduling algorithms should consider the cloud architecture model to improve performance.
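The contrast the authors draw can be sketched as two placement policies: the round-robin population they criticize, and a placement that honors a declared usage mode. The mode names (“network-bound,” “cache-bound”) and the co-location rules below are this review's assumptions for illustration, not the authors' exact scheduler.

```python
from itertools import cycle

def round_robin(vms, nodes):
    """Populate nodes in round-robin fashion, ignoring workload type."""
    return {vm: node for vm, node in zip(vms, cycle(nodes))}

def mode_aware(vms_with_modes, nodes):
    """Co-locate network-bound VMs (their traffic stays within one node)
    and spread cache-bound VMs (less shared-cache contention)."""
    placement, spread = {}, cycle(nodes)
    for vm, mode in vms_with_modes:
        placement[vm] = nodes[0] if mode == "network-bound" else next(spread)
    return placement

nodes = ["n0", "n1"]
vms = [("web1", "network-bound"), ("web2", "network-bound"),
       ("sim1", "cache-bound"), ("sim2", "cache-bound")]
print(round_robin([v for v, _ in vms], nodes))  # alternates blindly
print(mode_aware(vms, nodes))                   # groups by declared mode
```

Round-robin separates the two chatty web VMs onto different nodes, forcing their traffic over the network, while the mode-aware policy keeps them together and spreads the cache-hungry VMs instead, which is the kind of gain the authors attribute to declaring usage modes at request submission.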