Time Warp is an optimistic synchronization protocol for parallel discrete event simulation. At runtime, it detects events processed out of timestamp order and recovers by rolling the computation back to account for them properly. Time Warp suffers from two major problems: large amounts of wasted, rolled-back computation, and inefficient use of memory, which degrades the performance of virtual memory and cache systems. Das and Fujimoto present an adaptive memory management system that copes with both problems. Their solution monitors the execution of the Time Warp program and, based on the runtime data collected, automatically adjusts the amount of memory used so as to reduce Time Warp overhead. It requires only a modest amount of memory beyond that needed for sequential execution.
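The rollback mechanism described above can be sketched for a single logical process. This is a minimal illustration, not the authors' implementation: the class name, the toy event handler, and the per-event state snapshots are assumptions, and anti-messages (which real Time Warp sends to cancel outputs produced by rolled-back events) are omitted for brevity.

```python
import heapq

class LogicalProcess:
    """Sketch of one Time Warp logical process (LP). Events are
    processed optimistically in timestamp order, with a state snapshot
    saved per event; a straggler (an event whose timestamp precedes the
    local virtual time) triggers a rollback that restores the state and
    re-queues the events that were processed too early."""

    def __init__(self):
        self.lvt = 0.0       # local virtual time
        self.state = 0       # toy state: a running sum
        self.future = []     # pending events, min-heap keyed on timestamp
        self.processed = []  # history of (timestamp, value, state_before)
        self.rollbacks = 0

    def schedule(self, ts, value):
        heapq.heappush(self.future, (ts, value))

    def receive(self, ts, value):
        if ts < self.lvt:    # out-of-sequence event: roll back first
            self.rollback(ts)
        self.schedule(ts, value)

    def rollback(self, ts):
        self.rollbacks += 1
        # Undo every event processed at or after the straggler's time.
        while self.processed and self.processed[-1][0] >= ts:
            old_ts, value, state_before = self.processed.pop()
            self.state = state_before
            heapq.heappush(self.future, (old_ts, value))
        self.lvt = self.processed[-1][0] if self.processed else 0.0

    def run(self):
        # Optimistically process everything currently queued.
        while self.future:
            ts, value = heapq.heappop(self.future)
            self.processed.append((ts, value, self.state))
            self.state += value   # toy event handler: accumulate values
            self.lvt = ts
```

Note that each processed event retains a snapshot and stays on the history list; this per-event memory is exactly what the authors' adaptive scheme seeks to bound.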
The authors thoroughly review the Time Warp method, the nature of the problems, and previous proposals for dealing with them. They note that the two problems mentioned above had previously been treated independently; their contribution is a single method, based on the Cancelback protocol, that addresses both at once. Cancelback is presented in terms of quantitative measures of memory buffer usage and estimated costs of moving and processing data among the buffers. These measures constitute the runtime data gathered during Time Warp execution, and they allow the system to adapt fully automatically to widely varying workloads.
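The monitor-and-adjust loop might be caricatured as a simple feedback rule. This is only a sketch of the general idea, assuming hypothetical rate measurements; it is not the authors' actual control policy, which is derived from the quantitative buffer-usage and data-movement measures described in the paper.

```python
def adjust_buffer_limit(limit, cancelback_rate, rollback_rate,
                        min_limit, max_limit, step=1):
    """Illustrative feedback rule (hypothetical, not the paper's policy):
    grow the buffer pool when Cancelback fires often (memory pressure is
    stalling progress), shrink it when rollback overhead dominates (too
    much optimism is being wasted), and otherwise hold steady.

    All rates are assumed to be measured over the last monitoring
    interval; limits are in units of event buffers."""
    if cancelback_rate > rollback_rate and limit < max_limit:
        return min(limit + step, max_limit)   # relieve memory pressure
    if rollback_rate > cancelback_rate and limit > min_limit:
        return max(limit - step, min_limit)   # curb wasted optimism
    return limit                              # balanced: leave as is
```

The point of such a rule is the one the review highlights: the controller needs only a small amount of memory beyond the sequential requirement, trading buffers against rollback overhead at runtime.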
Two test problems are used to demonstrate and illustrate the adaptive memory management method: a symmetric homogeneous workload (PHOLD) and an open asymmetric flow network modeled on electric power grids. A third application, a personal communication service network, is also discussed as a benchmark case. All of the computational experiments are presented in detail.