
TCP Vegas

From Wikipedia, the free encyclopedia

TCP Vegas is a TCP congestion avoidance algorithm that emphasizes packet delay, rather than packet loss, as a signal to help determine the rate at which to send packets. It was developed at the University of Arizona by Lawrence Brakmo and Larry L. Peterson and introduced in 1994.[1][2]

TCP Vegas detects congestion at an incipient stage, based on increasing round-trip time (RTT) values of the packets in the connection, unlike other flavors such as Reno and New Reno, which detect congestion only after it has actually happened, through packet loss. The algorithm depends heavily on an accurate estimate of the BaseRTT value (the minimum of the measured round-trip times): if the estimate is too small, the throughput of the connection will stay below the available bandwidth, while if it is too large, the connection will overrun the path and build up queues.
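
The congestion-avoidance rule can be stated compactly. The sketch below is illustrative rather than taken from any particular implementation: following the published description of Vegas, the sender compares the expected throughput cwnd/BaseRTT with the actual throughput cwnd/RTT, converts the difference into an estimate of how many segments are queued in the network, and adjusts the window linearly against two thresholds alpha and beta (the values 2 and 4 segments used here are examples, not canonical constants).

```python
def vegas_cwnd_update(cwnd, base_rtt, rtt, alpha=2, beta=4):
    """Sketch of one Vegas congestion-avoidance step, applied once per RTT.

    cwnd is in segments; base_rtt is the minimum RTT observed so far;
    alpha and beta (in segments) are illustrative threshold values.
    """
    expected = cwnd / base_rtt             # throughput if no queuing occurred
    actual = cwnd / rtt                    # throughput actually observed
    diff = (expected - actual) * base_rtt  # estimated segments queued in the path

    if diff < alpha:
        cwnd += 1   # little queuing: probe for more bandwidth
    elif diff > beta:
        cwnd -= 1   # queues are building: back off before packets are lost
    # between alpha and beta: leave cwnd unchanged
    return cwnd
```

The essential point of this rule is that the window is reduced while queuing delay is still growing, rather than only after a packet has been lost.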

Considerable research has examined the fairness of the linear increase/decrease mechanism used for congestion control in Vegas. One notable caveat arises when Vegas shares a bottleneck with loss-based variants such as Reno: because Vegas detects congestion early and reduces its sending rate before Reno does, its performance degrades and the co-existing TCP Reno flows receive a greater share of the bandwidth.[3][4][5][6]

TCP Vegas is one of several "flavors" of TCP congestion-avoidance algorithms. It is one of a series of TCP tuning efforts that adapt congestion control and system behavior to the challenges posed by increasing available bandwidth on networks such as Internet2.[7][8]

TCP Vegas has been implemented in the Linux kernel,[9] in FreeBSD,[10] and in Solaris,[11] and possibly in other operating systems as well.[citation needed]
