The performance of an end-to-end protocol such as the Transmission Control Protocol (TCP) degrades over lossy, long-haul data links; this is a consequence of the long round trip that acknowledgment packets must make back to the source, which limits how quickly the sender can detect and recover from loss.
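The sensitivity to delay and loss can be made concrete with the well-known Mathis et al. steady-state approximation for TCP throughput (a standard result, not taken from the paper under review): throughput is bounded by roughly (MSS/RTT) * (C/sqrt(p)), where MSS is the segment size, RTT the round-trip time, p the loss rate, and C a constant near 1.22. A minimal sketch, with illustrative parameter values of my own choosing:

```python
import math

def mathis_throughput(mss_bytes: float, rtt_s: float, loss_rate: float,
                      c: float = 1.22) -> float:
    """Approximate upper bound on steady-state TCP throughput, in bytes/s,
    per the Mathis et al. model: (MSS / RTT) * (C / sqrt(p))."""
    return (mss_bytes / rtt_s) * (c / math.sqrt(loss_rate))

# Illustrative numbers: 1460-byte segments, 1% loss, 300 ms long-haul RTT.
full_path = mathis_throughput(1460, 0.30, 0.01)
# A mid-path relay that halves the effective RTT doubles the bound.
split_path = mathis_throughput(1460, 0.15, 0.01)
```

The inverse dependence on RTT is exactly what a mid-path relay exploits: shortening the acknowledged path raises the throughput bound proportionally.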
Performance can be improved by inserting one or more relays that accept and acknowledge data packets, and then forward them toward the destination. The advantage of this mechanism is that it is transparent to the end systems. A packet acknowledged in this fashion may still be dropped before it reaches its destination, but such losses should be detected by application-layer protocols.
The authors provide some excellent diagrams that illustrate how packets travel through conventional and accelerated networks. These diagrams show how a buffer is used to hold packets that have not yet been acknowledged by a downstream node.
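The relay-and-buffer idea can be sketched in user space: when a proxy terminates the sender's TCP connection, the kernel acknowledges incoming data immediately, and the proxy holds the bytes until the downstream connection accepts them. This is only an illustrative sketch of the split-connection principle, not the authors' implementation; the function name and addresses are hypothetical.

```python
import socket

def relay(listen_port: int, dest_host: str, dest_port: int) -> None:
    """Split-connection relay sketch: terminate the upstream TCP
    connection (so the sender's data is ACKed locally) and forward
    the bytes over a separate downstream connection."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen(1)
    upstream, _ = srv.accept()           # sender's segments are ACKed here
    downstream = socket.create_connection((dest_host, dest_port))
    while True:
        chunk = upstream.recv(4096)      # already acknowledged to the sender
        if not chunk:
            break
        downstream.sendall(chunk)        # chunk is the in-flight buffer
    downstream.close()
    upstream.close()
    srv.close()
```

In a real accelerator this buffering happens per-flow at line rate, and retransmission toward the destination is handled by the relay's own TCP stack rather than the original sender's.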
A network simulator was used to predict the effects of accelerator node placement, the number of accelerator nodes, connection duration, and TCP processing delay. The simulations show that acceleration can significantly increase throughput over lossy or congested links. The authors note that TCP acceleration may be considered unfair to competing flows; however, they argue it will become a competitive necessity for companies.
A practical implementation of the simulated system was built on an Intel IXP2350 network processor. The device introduces some processing delay for data passing through it, but not enough to undermine the predicted results. The paper concludes with some considerations on the practical placement of accelerator nodes.
A number of commercial accelerators have become available since this paper was written. Those planning to deploy such devices would do well to read this paper before committing.