Computing Reviews

Rethinking FTP: aggressive block reordering for large file transfers
Anastasiadis S., Wickremesinghe R., Chase J. ACM Transactions on Storage 4(4): 1-27, 2009. Type: Article
Date Reviewed: 05/11/09

In the context of whole-file transfers, Anastasiadis et al. propose block reordering heuristics that maximize throughput by reducing disk traffic. The key idea is that blocks already in the server's cache are transferred first, to all clients concurrently requesting the same file; as a result, file blocks may arrive out of order. Naturally, the environments that benefit most from this heuristic are those where the disk is the bottleneck or where a large number of clients request the same file concurrently.
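To make the heuristic concrete, here is a minimal sketch of cache-first block scheduling, not the authors' actual implementation: the cache is assumed to be a simple dictionary keyed by block offset, and the names `BLOCK_SIZE` and `reordered_blocks` are hypothetical.

```python
import os

BLOCK_SIZE = 64 * 1024  # hypothetical block size, for illustration only


def reordered_blocks(path, cache):
    """Yield (offset, data) pairs for a file, serving cache-resident
    blocks first and reading the rest from disk afterward.

    `cache` is an assumed dict mapping block offsets to bytes; the
    paper's server integrates with the OS buffer cache instead.
    """
    size = os.path.getsize(path)
    offsets = range(0, size, BLOCK_SIZE)
    cached = [o for o in offsets if o in cache]
    uncached = [o for o in offsets if o not in cache]

    # Cached blocks go out immediately, possibly out of order, so
    # concurrent clients can share a single disk read of each block.
    for off in cached:
        yield off, cache[off]

    # Only the remaining blocks touch the disk.
    with open(path, "rb") as f:
        for off in uncached:
            f.seek(off)
            data = f.read(BLOCK_SIZE)
            cache[off] = data  # later requests for this file now hit the cache
            yield off, data
```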

The authors analyze alternative methods of maximizing throughput in detail, such as optimizing cache and block sizes. They evaluated their heuristic with the help of a prototype built on top of the file transfer protocol (FTP) daemon of the FreeBSD 4.5 operating system; both client and server were modified to support block reordering.
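The client-side modification follows from the out-of-order delivery: each block must carry its file offset so the receiver can write it into place. A minimal sketch of that reassembly step, under the same assumed (offset, data) framing (the review does not describe Circus's actual wire format), might look like this:

```python
def reassemble(block_stream, out_path, file_size):
    """Write out-of-order (offset, data) blocks into their final
    positions; arrival order is irrelevant because each block is
    self-describing. `block_stream` is an assumed iterable of
    (offset, bytes) pairs, e.g. the generator sketched above.
    """
    received = 0
    with open(out_path, "wb") as f:
        f.truncate(file_size)  # preallocate so any offset is writable
        for offset, data in block_stream:
            f.seek(offset)
            f.write(data)
            received += len(data)
            if received >= file_size:
                break  # whole file accounted for
```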

To test this prototype, a workload consisting of multiple clients was generated and divided into three groups according to network link bandwidth: 1.544 megabits per second (Mb/s), 10 Mb/s, and 44.736 Mb/s. The authors then compared the execution results of their prototype, dubbed Circus, with those of an unmodified FreeBSD 4.5. In summary, as client load increases, Circus is better able to exploit network bandwidth; it also maintains steady disk throughput and response times, whereas the standard software's disk bandwidth degrades and its response times worsen under the same circumstances. As file size increases, Circus is again better able to maintain network and disk throughput.

Overall, this paper makes a strong case for block reordering for the scenarios investigated.

Reviewer:  Veronica Lagrange Review #: CR136812 (1001-0059)
