Learn about TCP BBR, a fast, reliable congestion control algorithm for Linux.
Did you know there are new networking algorithms that can give you as much as a 2x boost in your network performance? Enter BBR, an algorithm enabled on SatoshiVPN servers that we only recently learned about while attempting to boost the connection speeds for our customers.
When it comes to network performance for any application, and for VPNs especially, every millisecond counts, particularly in areas with poor connectivity. Traditional TCP congestion control tries its best to balance being fast with being fair, with much more emphasis on being “fair”. For example, as packet loss on your network increases, TCP’s back-off algorithms reduce each user’s sending rate so everyone can share the transmission pipe, but fine-tuning this balance is a little bit tricky. With TCP’s traditional back-off algorithms, even if you are the only person using the pipe, TCP may underutilize your network connection by as much as 50%. This is where TCP BBR comes in. It’s an updated back-off algorithm designed to respond to actual network congestion rather than simple packet loss. That focus means BBR improves network performance most in your worst-case scenario, when your network isn’t very good or you are at the edge of coverage. In this article we'll cover exactly how BBR does this by answering the following questions:
• What is the TCP BBR protocol?
• Why is BBR important, and what are its benefits?
• How does the BBR protocol operate?
So, let’s dig deep and discover more about the mystery of TCP BBR.
The internet is built on a simple model: the network is a collection of nodes connected by links. When a packet traverses the network, it passes from one node to the next, with each node selecting the next hop that moves the packet closer to its destination. If the outgoing link is busy, the packet waits in a queue to be processed later, and if the queue is full, the packet is dropped. This situation is network congestion, which occurs when a network node or link is carrying more data than it can handle.
Today’s internet spans billions of devices, establishing billions of simultaneous connections over everything from wired links to wireless ones. This increases the likelihood of network congestion, so a congestion control algorithm is a necessity. This is why TCP BBR was developed. TCP BBR is a TCP congestion control algorithm built for the congestion of the modern internet. It stands for Transmission Control Protocol Bottleneck Bandwidth and Round-trip propagation time (TCP BBR), and it was developed by Google in 2016.
TCP is the most common transport layer protocol. Its job is to send data as fast as possible without overflowing buffer queues and causing excessive delay or packet loss. It is worth noting that TCP is an adaptive rate control protocol: it continuously probes how fast data can move between the sender and the receiver. So TCP works as a pacing protocol, but how exactly does it adjust the rate of packets?
While packets traverse the network between two nodes without problems, meaning no packets are lost, TCP gradually increases its sending rate. Because TCP measures the time it takes a packet to travel to the receiver and back, called the round-trip time, it can put one more packet onto the network in each round-trip interval. That's how TCP increases its rate slowly. At some point packets stop reaching the receiver promptly; they land in a buffer instead. The sender, however, keeps sending at the same rate, so the buffer fills quickly. Once the buffer is full and a new packet arrives, that packet is dropped. The sender detects the loss, typically through duplicate acknowledgments or a timeout, and treats it as a signal that it has been sending too fast and must decrease its sending rate.
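The ramp-up-until-loss behavior described above is the classic additive-increase/multiplicative-decrease pattern of loss-based TCP. A minimal sketch of the idea (the bottleneck capacity, buffer size, and starting window are illustrative assumptions, not real kernel values):

```python
# Toy additive-increase/multiplicative-decrease (AIMD) simulation:
# the congestion window (cwnd) grows by one packet per round-trip
# time until the bottleneck buffer overflows, then is halved,
# producing the familiar sawtooth of loss-based TCP.

def simulate_aimd(bottleneck_capacity, buffer_size, rtts):
    """Return the cwnd after each RTT (all units in packets)."""
    cwnd = 1
    history = []
    for _ in range(rtts):
        # Packets beyond the bottleneck capacity queue in the buffer.
        queued = max(0, cwnd - bottleneck_capacity)
        if queued > buffer_size:
            cwnd = max(1, cwnd // 2)   # loss detected: back off by half
        else:
            cwnd += 1                  # no loss: add one packet per RTT
        history.append(cwnd)
    return history

trace = simulate_aimd(bottleneck_capacity=10, buffer_size=5, rtts=40)
print(trace)  # window ramps to 16, is halved to 8, and repeats
```

Note how the window spends its time oscillating between the overflow point and half of it, so a lone sender averages well below what the pipe could carry, which is exactly the underutilization described earlier.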
The main advantage of using the TCP protocol is that it gives us guaranteed packet delivery and congestion control. Without congestion control, the internet would collapse. Many congestion control algorithms have been implemented in the TCP stack, such as Tahoe, Reno, Vegas, Westwood, Cubic, and most recently BBR. The goal of these algorithms is to determine how fast the sender should send data while adapting to network changes.
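On Linux, the congestion control algorithm is pluggable per socket, exposed through the `TCP_CONGESTION` socket option. A sketch of reading which algorithm a fresh TCP socket uses (the option is Linux-only, so the code guards for platforms where the constant is missing):

```python
import socket

def current_congestion_control():
    """Return the congestion control algorithm name of a fresh TCP
    socket, or None where TCP_CONGESTION is unavailable (non-Linux)."""
    if not hasattr(socket, "TCP_CONGESTION"):
        return None
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # The kernel returns a NUL-padded algorithm name, e.g. b"cubic".
        raw = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
        return raw.split(b"\x00", 1)[0].decode()

print(current_congestion_control())  # e.g. "cubic" or "bbr"
```

System-wide, the default can be switched with `sysctl net.ipv4.tcp_congestion_control=bbr` once the `tcp_bbr` kernel module is loaded; an application can also opt in per connection by setting the same socket option to `b"bbr"`.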
While most congestion control algorithms are loss-based or delay-based, relying on packet loss or rising delay as the signal to lower transmission rates, TCP BBR is model-based. It uses the maximum observed bandwidth and the minimum round-trip time to build an explicit model of the network path. This algorithm not only achieves significant bandwidth improvements but also lower latency, because it prevents queues from building up in the first place, keeping the delay minimal.
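BBR's model boils down to two numbers: the bottleneck bandwidth and the round-trip propagation time. Their product, the bandwidth-delay product, is the amount of data that "fits" in the path without queuing; sending more than that only builds queues. A sketch with illustrative figures:

```python
# The bandwidth-delay product (BDP) is the volume of in-flight data
# that exactly fills the network path. BBR aims to keep roughly one
# BDP in flight: less wastes capacity, more just grows queues.

def bandwidth_delay_product(btlbw_bits_per_sec, rtprop_sec):
    """Return the BDP in bytes for a bottleneck bandwidth (bits/s)
    and round-trip propagation time (seconds)."""
    return btlbw_bits_per_sec * rtprop_sec / 8

# Illustrative path: 100 Mbit/s bottleneck, 40 ms round trip.
bdp = bandwidth_delay_product(100e6, 0.040)
print(bdp)  # 500000.0 bytes: ~500 kB in flight keeps this pipe full
```

Loss-based algorithms effectively target "one BDP plus the whole bottleneck buffer" in flight, which is where the extra latency comes from; BBR targets the BDP itself.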
As internet infrastructure develops, more bandwidth than ever has become available. But this only makes a difference if the application you are using is prepared to make use of the extra bandwidth. BBR takes advantage of ever-increasing bandwidth availability by using latency, instead of lost packets, as the primary signal for setting the sending rate. With BBR, you can get significantly better throughput and reduced latency. This is because BBR avoids filling buffers in the first place, sidestepping bufferbloat altogether. BBR runs purely on the sender and requires no changes to the protocol, receiver, or network, making it incrementally deployable. It depends only on round-trip time and packet-delivery acknowledgments, so it can be implemented for most internet transport protocols.
The BBR algorithm differs from other algorithms in that it does not treat packet loss as its main signal. Instead, its primary metric is the actual bandwidth of data delivered to the far end. Whenever an acknowledgment packet is received, BBR updates its measurement of the amount of data delivered. The amount of data delivered over a recent interval is a good indicator of the bandwidth the connection can provide, because the connection has demonstrably provided that bandwidth recently.
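That measurement loop can be sketched simply: each acknowledgment tells the sender how much data was delivered over what interval, giving one bandwidth sample, and BBR keeps the maximum sample over a sliding window as its bottleneck estimate. The class and field names below are illustrative, not the kernel's:

```python
from collections import deque

class BandwidthEstimator:
    """Track the maximum delivery rate over the last `window` samples,
    the way BBR estimates bottleneck bandwidth from ACKs (sketch)."""

    def __init__(self, window=10):
        self.samples = deque(maxlen=window)  # old samples age out

    def on_ack(self, delivered_bytes, interval_sec):
        """Record one delivery-rate sample derived from an ACK."""
        if interval_sec > 0:
            self.samples.append(delivered_bytes / interval_sec)

    def bottleneck_bandwidth(self):
        """Best recent estimate: the max sample in the window."""
        return max(self.samples, default=0.0)

est = BandwidthEstimator()
est.on_ack(delivered_bytes=150_000, interval_sec=0.010)  # 15 MB/s sample
est.on_ack(delivered_bytes=120_000, interval_sec=0.010)  # 12 MB/s sample
print(est.bottleneck_bandwidth())  # 15000000.0 bytes/sec
```

Taking the maximum (rather than an average) matters: slower samples usually mean the ACKs were delayed or the sender was idle, not that the pipe shrank, while the window lets the estimate decay if the path genuinely degrades.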
When a connection starts up, BBR is in the "startup" state. In this mode, BBR behaves like most traditional congestion-control algorithms: it starts slowly but quickly ramps up the transmission speed in an attempt to measure the available bandwidth. Most algorithms continue ramping up until they hit a dropped packet; BBR instead watches the bandwidth measurement. In particular, it looks at the delivered bandwidth over the last three round-trip times to see whether it is still changing. Once the bandwidth stops rising, BBR concludes that it has found the effective bandwidth of the connection and can stop ramping up, well before packet loss would begin.
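The plateau check can be sketched like this. The "less than ~25% growth for three consecutive round trips" exit condition matches the published BBR design; the code around it is an illustrative simplification:

```python
def startup_finished(bw_samples, growth_threshold=1.25, plateau_rtts=3):
    """Return True once the per-RTT bandwidth estimate has failed to
    grow by `growth_threshold` for `plateau_rtts` consecutive rounds."""
    flat = 0
    for prev, curr in zip(bw_samples, bw_samples[1:]):
        if curr < prev * growth_threshold:
            flat += 1
            if flat >= plateau_rtts:
                return True   # pipe is full: stop ramping up
        else:
            flat = 0          # still growing: keep ramping
    return False

# Bandwidth doubles each RTT, then hits the bottleneck and flattens out.
print(startup_finished([10, 20, 40, 80, 82, 81, 83]))  # True
print(startup_finished([10, 20, 40, 80]))              # False: still rising
```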
After startup, the measured bandwidth is deemed to be the rate at which packets should be sent over the connection. However, while measuring that rate, BBR was transmitting faster than the path could deliver, so some packets are sitting in queues waiting to be delivered. To flush those packets out of the buffers, BBR goes into a "drain" state. During this state, BBR transmits below the measured bandwidth until it has made up for the excess packets sent before. Once the drain phase is done, BBR goes into a steady-state mode where it transmits at more-or-less the calculated bandwidth. That is "more-or-less" because the characteristics of a network connection change over time, so the delivered bandwidth must be continuously monitored. Also, an increase in effective bandwidth can only be detected by occasionally trying to transmit at a higher rate, so BBR periodically probes faster and scales the rate up when the network delivers the extra packets. As a result, BBR provides enhanced network utilization and noticeably faster performance.
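This startup → drain → steady-state sequence corresponds to pacing-gain multipliers applied to the bandwidth estimate. The gain values below (2/ln 2 ≈ 2.89 for startup, its inverse for drain, and an eight-phase 1.25/0.75/1… cycle in steady state) come from the published BBR design; the surrounding function is an illustrative sketch:

```python
import math

STARTUP_GAIN = 2 / math.log(2)      # ~2.89: ramp quickly to find the pipe
DRAIN_GAIN = 1 / STARTUP_GAIN       # inverse gain empties the startup queue
# Steady state ("ProbeBW"): briefly probe above the estimate (1.25),
# compensate below it (0.75), then cruise at the estimate (1.0).
PROBE_BW_GAINS = [1.25, 0.75, 1, 1, 1, 1, 1, 1]

def pacing_rate(btlbw_bytes_per_sec, phase, cycle_index=0):
    """Sending rate = pacing gain x estimated bottleneck bandwidth."""
    gain = {"startup": STARTUP_GAIN,
            "drain": DRAIN_GAIN,
            "probe_bw": PROBE_BW_GAINS[cycle_index % 8]}[phase]
    return gain * btlbw_bytes_per_sec

# Probing phase on a 1 MB/s estimate sends 25% above it for one RTT.
print(round(pacing_rate(1_000_000, "probe_bw", cycle_index=0)))  # 1250000
```

The 1.25 probe is what lets BBR discover newly available bandwidth, and the matching 0.75 phase immediately drains whatever queue the probe created, which is why steady-state delay stays low.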