TCP tuning techniques adjust the network congestion avoidance parameters of Transmission Control Protocol (TCP) connections over high-bandwidth, high-latency networks. Well-tuned networks can perform up to 10 times faster in some cases.[1] However, blindly following instructions without understanding their real consequences can hurt performance as well.
Some automatic buffer tuning is implemented in Linux: the 2.4 kernel auto-tunes the sender side, and Linux 2.6 auto-tunes both the send and receive directions. In a post to the web100-discuss mailing list, John Heffner describes the Linux auto-tuning implementation.
Network and system characteristics
Bandwidth-delay product (BDP)
Bandwidth-delay product (BDP) is a term primarily used in conjunction with TCP to refer to the number of bytes necessary to fill a TCP 'path', i.e. the maximum amount of unacknowledged data that can be in transit between the transmitter and the receiver at any one time.
High performance networks have very large BDPs. To give a practical example, two nodes communicating over a geostationary satellite link with a round-trip delay time (or round-trip time, RTT) of 0.5 seconds and a bandwidth of 10 Gbit/s can have up to 0.5 × 10^10 bits, i.e., 5 Gbit = 625 MB, of unacknowledged data in flight. Despite having much lower latencies than satellite links, even terrestrial fiber links can have very high BDPs because their link capacity is so large. Operating systems and protocols designed when networks were slower were tuned for BDPs orders of magnitude smaller, which limits the performance achievable on modern links.
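As an illustration, the BDP from the satellite example can be computed directly. The following is a minimal sketch in Python; the bandwidth and RTT are the illustrative figures from the example above, and the result is also the minimum buffer size needed to keep such a link full.
# Bandwidth-delay product for the geostationary satellite example:
# a 10 Gbit/s link with a 0.5 s round-trip time.
bandwidth_bps = 10e9   # link bandwidth in bits per second
rtt_s = 0.5            # round-trip time in seconds
bdp_bits = bandwidth_bps * rtt_s   # maximum data in flight, in bits
bdp_bytes = bdp_bits / 8           # the same amount in bytes (also a lower bound on buffer size)
print(f"BDP: {bdp_bits:.0f} bits = {bdp_bytes / 1e6:.0f} MB")   # BDP: 5000000000 bits = 625 MB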
Buffers
The original TCP configurations supported a receive window of up to 65,535 bytes (64 KiB − 1), which was adequate for slow links or links with small RTTs. Larger buffers are required by the high performance options described below.
Buffering is used throughout high performance network systems to handle delays in the system. In general, buffer size will need to be scaled proportionally to the amount of data 'in flight' at any time. For very high performance applications that are not sensitive to network delays, it is possible to interpose large end-to-end buffering delays by putting intermediate data storage points into the end-to-end system, and then to use automated and scheduled non-real-time data transfers to get the data to their final endpoints.
TCP speed limits
Maximum achievable throughput for a single TCP connection is determined by different factors. One trivial limitation is the maximum bandwidth of the slowest link in the path. But there are also other, less obvious limits for TCP throughput: both bit errors and the RTT can constrain the connection.
Window size
In computer networking, RWIN (TCP receive window) is the amount of data that a computer can accept without acknowledging the sender. If the sender has not received acknowledgement for the data it sent, it stops and waits, and if the wait exceeds a certain limit, it may retransmit. This is how TCP achieves reliable data transmission.
Even if there is no packet loss in the network, windowing can limit throughput. Because TCP transmits data up to the window size before waiting for the acknowledgements, the full bandwidth of the network may not always get used. The limitation caused by window size can be calculated as follows:
Throughput ≤ RWIN / RTT
where RWIN is the TCP Receive Window and RTT is the round-trip time for the path.
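As a worked example, here is a minimal sketch in Python using the classic 65,535-byte maximum window and an assumed RTT of 100 ms (an illustrative value, not a measurement):
# Window-limited throughput: Throughput <= RWIN / RTT
rwin_bytes = 65535   # largest receive window possible without window scaling
rtt_s = 0.1          # assumed round-trip time of 100 ms (illustrative)
max_throughput_bps = rwin_bytes * 8 / rtt_s   # bits per second
print(f"Maximum throughput: {max_throughput_bps / 1e6:.1f} Mbit/s")   # about 5.2 Mbit/s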
At any given time, the window advertised by the receive side of TCP corresponds to the amount of free receive memory it has allocated for this connection. Otherwise it would risk dropping received packets due to lack of space.
The sending side should also allocate the same amount of memory as the receive side for good performance. That is because, even after data has been sent on the network, the sending side must hold it in memory until it has been acknowledged as successfully received, just in case it would have to be retransmitted. If the receiver is far away, acknowledgments will take a long time to arrive. If the send memory is small, it can fill up and stall transmission. A simple computation gives the same optimal send memory size as for the receive memory size given above.
Packet loss
When packet loss occurs in the network, an additional limit is imposed on the connection.[2] In the case of light to moderate packet loss, when the TCP rate is limited by the congestion avoidance algorithm, the limit can be calculated according to the formula (Mathis, et al.):
Throughput ≤ MSS / (RTT × √P_loss)
where MSS is the maximum segment size and P_loss is the probability of packet loss. If packet loss is so rare that the TCP window becomes regularly fully extended, this formula doesn't apply.
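As a rough sketch in Python, assuming a 1460-byte MSS, a 100 ms RTT and a 0.01% loss rate (all illustrative values):
import math
# Loss-limited throughput (Mathis et al.): Throughput <= MSS / (RTT * sqrt(P_loss))
mss_bytes = 1460   # typical Ethernet MSS (assumed)
rtt_s = 0.1        # assumed round-trip time of 100 ms
p_loss = 1e-4      # assumed packet loss probability of 0.01%
max_throughput_bps = mss_bytes * 8 / (rtt_s * math.sqrt(p_loss))
print(f"Loss-limited throughput: {max_throughput_bps / 1e6:.1f} Mbit/s")   # about 11.7 Mbit/s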
TCP options for high performance
A number of extensions have been made to TCP over the years to increase its performance over fast high-RTT links ('long fat networks' or LFNs).
TCP timestamps (RFC 1323) play a double role: they avoid ambiguities due to the 32-bit sequence number field wrapping around, and they allow more precise RTT estimation in the presence of multiple losses per RTT. With those improvements, it becomes reasonable to increase the TCP window beyond 64 kB, which can be done using the window scaling option (RFC 1323).
The TCP selective acknowledgment option (SACK, RFC 2018) allows a TCP receiver to precisely inform the TCP sender about which segments have been lost. This increases performance on high-RTT links, when multiple losses per window are possible.
Path MTU Discovery avoids the need for in-network fragmentation, increasing the performance in the presence of packet loss.
References
1. 'High Performance SSH/SCP - HPN-SSH'. Psc.edu. Retrieved January 23, 2020.
2. 'The Macroscopic Behavior of the TCP Congestion Avoidance Algorithm'. Psc.edu. Archived from the original on May 11, 2012. Retrieved January 3, 2017.
External links
- RFC 1323 - TCP Extensions for High Performance
- RFC 2018 - TCP Selective Acknowledgment Options
- RFC 2582 - The NewReno Modification to TCP's Fast Recovery Algorithm
- RFC 2488 - Enhancing TCP Over Satellite Channels using Standard Mechanisms
- RFC 2883 - An Extension to the Selective Acknowledgment (SACK) Option for TCP
- RFC 3517 - A Conservative Selective Acknowledgment-based Loss Recovery Algorithm for TCP
- RFC 4138 - Forward RTO-Recovery (F-RTO): An Algorithm for Detecting Spurious Retransmission Timeouts with TCP and the Stream Control Transmission Protocol (SCTP)
- TCP Tuning Guide, ESnet
- DrTCP - a utility for Microsoft Windows (prior to Vista) which can quickly alter TCP performance parameters in the registry.
- Information on 'Tweaking' your TCP stack, Broadband Reports
- TCP/IP Analyzer, speedguide.net
- NTTTCP Network Performance Test Tool, Microsoft Windows Server Performance Team Blog
- Best Practices for TCP Optimization - ExtraHop
I have two servers located in two different data centers. Both servers handle a lot of concurrent large file transfers, but network performance is very poor and it degrades further as files get larger. How do I tune TCP under Linux to solve this problem?
By default the Linux network stack is not configured for high-speed large file transfer across WAN links; the defaults are chosen to conserve memory. You can tune the Linux network stack by increasing the network buffer sizes on high-speed networks that connect server systems, so that more data can be kept in flight.
The default maximum Linux TCP buffer sizes are way too small. TCP memory is calculated automatically based on system memory; you can find the actual values by typing the following commands:
$ cat /proc/sys/net/ipv4/tcp_mem
The default and maximum amount for the receive socket memory:
$ cat /proc/sys/net/core/rmem_default
$ cat /proc/sys/net/core/rmem_max
The default and maximum amount for the send socket memory:
$ cat /proc/sys/net/core/wmem_default
$ cat /proc/sys/net/core/wmem_max
The maximum amount of option memory buffers:
$ cat /proc/sys/net/core/optmem_max
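For scripting or monitoring, the same values can also be read directly from /proc/sys. The following is a small Python sketch that prints the settings listed above:
# Print the current kernel socket buffer limits (same values as the cat commands above).
settings = [
    "net/ipv4/tcp_mem",
    "net/core/rmem_default",
    "net/core/rmem_max",
    "net/core/wmem_default",
    "net/core/wmem_max",
    "net/core/optmem_max",
]
for name in settings:
    with open(f"/proc/sys/{name}") as f:
        print(name.replace("/", "."), "=", f.read().strip())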
Tune values
Set the maximum OS send buffer size (wmem) and receive buffer size (rmem) to 12 MB for all protocols. In other words, set the upper limit on the amount of memory that can be allocated to each socket's buffers while transferring files:
# echo 'net.core.wmem_max=12582912' >> /etc/sysctl.conf
# echo 'net.core.rmem_max=12582912' >> /etc/sysctl.conf
You also need to set the minimum, initial (default), and maximum sizes in bytes:
# echo 'net.ipv4.tcp_rmem= 10240 87380 12582912' >> /etc/sysctl.conf
# echo 'net.ipv4.tcp_wmem= 10240 87380 12582912' >> /etc/sysctl.conf
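Applications can also request buffer sizes for an individual socket (capped by rmem_max and wmem_max). The following is a minimal Python sketch using the same 12 MB figure as the sysctl settings above; note that explicitly setting SO_SNDBUF/SO_RCVBUF disables the kernel's auto-tuning for that socket:
import socket
BUF_SIZE = 12 * 1024 * 1024   # 12 MB, matching the sysctl values above
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Request larger per-socket buffers; the kernel caps these at wmem_max / rmem_max.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF_SIZE)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF_SIZE)
# The kernel reports roughly double the requested value to account for bookkeeping overhead.
print("SO_SNDBUF:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
print("SO_RCVBUF:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))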
Turn on window scaling, which allows the transfer window to grow beyond 64 KiB:
# echo 'net.ipv4.tcp_window_scaling = 1' >> /etc/sysctl.conf
Enable timestamps as defined in RFC 1323:
# echo 'net.ipv4.tcp_timestamps = 1' >> /etc/sysctl.conf
Enable selective acknowledgments:
# echo 'net.ipv4.tcp_sack = 1' >> /etc/sysctl.conf
By default, TCP saves various connection metrics in the route cache when the connection closes, so that connections established in the near future can use these to set initial conditions. Usually, this increases overall performance, but may sometimes cause performance degradation. If set, TCP will not cache metrics on closing connections:
# echo 'net.ipv4.tcp_no_metrics_save = 1' >> /etc/sysctl.conf
Set the maximum number of packets queued on the input side when the interface receives packets faster than the kernel can process them:
# echo 'net.core.netdev_max_backlog = 5000' >> /etc/sysctl.conf
Now reload the changes:
# sysctl -p
Use tcpdump to watch traffic on eth0 and confirm the negotiated TCP options (window scaling, SACK, timestamps) in the connection handshake:
# tcpdump -ni eth0
Recommended readings:
- Please refer to the kernel documentation in Documentation/networking/ip-sysctl.txt for more information.
- The sysctl(8) man page