Ever been in the middle of a crucial video conference and your screen freezes? Or maybe you're trying to download a large file, and it feels like it's taking forever. Often, the culprit behind these frustrating moments is network congestion.
In this blog, we're taking a deep dive into the world of congestion control. We'll explore how it keeps the Internet from collapsing under the weight of our insatiable data hunger, how different protocols like TCP and UDP handle it, and why understanding this is crucial in the age of video conferencing, streaming, and real-time gaming.
What is Congestion Control and why should you care?
Congestion control is a mechanism that prevents network collapse by controlling the amount of data that flows through the network. When too much data is sent at once, network performance can degrade, leading to packet loss, increased latency, and even a network crash.
Why is it Important?
Maxing Out Resources: Imagine bandwidth as a narrow road. The more cars (data packets) you cram in, the worse the traffic. Congestion control acts like a traffic cop, ensuring a smooth flow and avoiding jams.
Equality for All: In a network, multiple data streams vie for the same resources. Without checks and balances, one stream could monopolize all the bandwidth, leaving others high and dry. Congestion control keeps things fair. Example: Let's say you're downloading a large file while also having a video call. Fairness in congestion control ensures that both activities get enough network resources so that the file download doesn't make your video call quality suffer, and vice versa.
End-User Satisfaction: At the end of the day, the user experience is king. Effective congestion control leads to quicker downloads, smoother video streaming, and a happier you.
Why TCP and UDP matter: The building blocks of your online life
You may not realize it, but almost everything you do online is made possible by either TCP, UDP, or both. These two protocols are like the freight and passenger trains of the Internet—each designed for specific kinds of cargo and destinations.
What are TCP and UDP?
TCP (Transmission Control Protocol) is like a reliable freight train, ensuring that all your 'goods' (data) arrive intact and in the correct order. It's the backbone of many services you rely on daily—web browsing, email, file transfers, and more.
UDP (User Datagram Protocol) is the speedier passenger train, where getting to the destination quickly is more important than ensuring all 'passengers' (data packets) are accounted for. It's commonly used in real-time applications like video conferencing, live streaming, and online gaming.
TCP Congestion Control
TCP (Transmission Control Protocol) is a connection-oriented protocol that ensures reliable data delivery. It uses several algorithms to manage congestion control.
Slow Start: The Slow Start algorithm is aptly named because it begins by sending a small number of packets, governed by a "congestion window" (cwnd). The window grows by one segment for each acknowledgement received, which doubles it every round-trip time. This exponential growth continues until the window reaches a threshold, often called the "ssthresh" (Slow Start Threshold).
Congestion Avoidance: Once the ssthresh is reached, the Congestion Avoidance algorithm takes over. Instead of doubling the window every round trip, it increases the window linearly to probe for available bandwidth. If a retransmission timeout occurs, ssthresh is set to half the current window and the protocol reverts to Slow Start.
Fast Recovery and Fast Retransmit: These mechanisms help TCP recover from packet loss without collapsing the congestion window size entirely. Fast Recovery keeps the window size constant in the event of a loss, while Fast Retransmit re-sends the lost packet without waiting for a timeout.
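To make the Fast Retransmit idea concrete, here is a minimal Python sketch, not real TCP code: the `process_ack` helper and its state dictionary are hypothetical. It shows a sender counting duplicate ACKs, retransmitting on the third duplicate, and halving its window as Fast Recovery would.

```python
# Sketch of Fast Retransmit / Fast Recovery: the sender counts duplicate
# ACKs and retransmits the presumed-lost segment on the third duplicate,
# instead of waiting for a retransmission timeout.

def process_ack(state, ack_num):
    """Return the sequence number to retransmit, or None."""
    if ack_num == state["last_ack"]:
        state["dup_acks"] += 1
        if state["dup_acks"] == 3:                      # triple duplicate ACK
            state["cwnd"] = max(state["cwnd"] // 2, 1)  # Fast Recovery: halve cwnd
            return ack_num                              # retransmit missing segment
    else:                                               # new data acknowledged
        state["last_ack"] = ack_num
        state["dup_acks"] = 0
    return None

state = {"last_ack": 100, "dup_acks": 0, "cwnd": 16}
for ack in [100, 100, 100]:          # three duplicate ACKs arrive
    lost = process_ack(state, ack)
print(lost, state["cwnd"])           # -> 100 8
```

The key point the sketch captures: the window shrinks but is not collapsed to its initial size, so the connection recovers quickly.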
RFCs for reference: RFC 5681 (TCP Congestion Control), RFC 6582 (NewReno), RFC 2018 (Selective Acknowledgments).
Emerging ideas (TCP)
BIC and CUBIC: TCP congestion control algorithms designed for high-speed, high-latency networks; CUBIC has been the default in Linux for years.
Compound TCP: Developed by Microsoft, it combines delay-based and loss-based congestion control.
Machine Learning for Congestion Control: Researchers are exploring the use of machine learning algorithms to dynamically adjust congestion control parameters.
UDP Congestion Control
UDP (User Datagram Protocol) is a connectionless protocol and doesn't guarantee data delivery. Traditionally, UDP has no built-in congestion control, but modern applications that use UDP often implement their own congestion control algorithms.
Token Bucket: The Token Bucket algorithm allows data packets to be sent at variable rates, with a maximum rate defined by the rate at which tokens are added to the bucket. If the bucket is empty, packets are dropped or delayed.
Leaky Bucket: Contrary to Token Bucket, the Leaky Bucket algorithm drains out packets at a constant rate, ensuring a smoother flow of data but potentially introducing some delay.
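Here is a minimal sketch of the Token Bucket idea in Python. The `TokenBucket` class and its numbers are illustrative, not any particular library's API: tokens accrue at a fixed rate up to a burst capacity, and a packet is admitted only if enough tokens remain.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: tokens accrue at `rate` units per
    second, up to `capacity`; a packet is sent only if enough tokens remain."""

    def __init__(self, rate, capacity):
        self.rate = rate             # tokens (e.g., bytes) added per second
        self.capacity = capacity     # maximum burst size
        self.tokens = capacity       # start with a full bucket
        self.last = time.monotonic()

    def allow(self, packet_size):
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_size:
            self.tokens -= packet_size
            return True              # send the packet
        return False                 # drop or delay it

bucket = TokenBucket(rate=1000, capacity=2000)  # 1000 bytes/s, 2000-byte bursts
print(bucket.allow(1500))   # True: the burst fits in the full bucket
print(bucket.allow(1500))   # False: only ~500 tokens remain
```

A Leaky Bucket implementation would differ mainly in draining a queue at a constant rate rather than admitting bursts.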
RFCs for reference: RFC 768 (UDP), RFC 8085 (UDP Usage Guidelines), RFC 9000 (QUIC).
Emerging ideas (UDP)
QUIC: A newer transport protocol that runs over UDP but incorporates TCP-like features, such as reliable delivery and congestion control.
Custom Algorithms: Some applications implement their own congestion control algorithms, tailored for specific needs like gaming or streaming.
Congestion Control in Video Conferencing
Now that you've got a solid grasp of the basics of TCP and UDP, let's dive into a real-world application where both protocols shine and interact in complex ways: video conferencing. Whether it's a virtual business meeting, an online class, or a catch-up with friends, video conferencing demands low latency and fast data transmission. So how do TCP and UDP make this happen? By understanding their interaction, you'll appreciate the engineering marvel that allows us to have smooth, real-time video interactions from thousands of miles away.
The Numbers Game: Data speaks louder than words
Latency: In video conferencing, the round-trip time (RTT) should ideally be less than 150 milliseconds for smooth communication.
Bandwidth: HD video can require anywhere from 1 to 6 Mbps, depending on the quality and frame rate.
Packet Loss: A packet loss rate of less than 1% is generally acceptable for video conferencing.
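These thresholds can be checked mechanically. Below is a small, hypothetical Python helper that applies the rules of thumb above; the function name and its defaults are invented for illustration.

```python
# Hypothetical helper applying the rule-of-thumb thresholds for video calls:
# RTT under 150 ms, packet loss under 1%, and enough bandwidth for the stream.

def call_quality_ok(rtt_ms, loss_pct, bandwidth_mbps, target_mbps=1.0):
    """Return (ok, issues): ok is True only if all thresholds are met."""
    issues = []
    if rtt_ms >= 150:
        issues.append("latency too high")
    if loss_pct >= 1.0:
        issues.append("packet loss too high")
    if bandwidth_mbps < target_mbps:
        issues.append("insufficient bandwidth")
    return (len(issues) == 0, issues)

print(call_quality_ok(rtt_ms=80, loss_pct=0.5, bandwidth_mbps=4))
# -> (True, []): all three thresholds are satisfied
print(call_quality_ok(rtt_ms=200, loss_pct=3.0, bandwidth_mbps=4))
# -> (False, ['latency too high', 'packet loss too high'])
```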
The Role of TCP in Video Conferencing
Signaling: TCP is often used for signaling, the process of setting up, maintaining, and tearing down the video call. It ensures that all the preliminary data like user authentication, session initiation, and call setup are reliably exchanged.
File Sharing: If your video conferencing tool allows for file sharing, that's usually done over TCP to ensure that the files arrive intact and in order.
Text Chat: Side chats during a video conference are generally not latency-sensitive and benefit from the reliable delivery that TCP offers.
The Role of UDP in Video Conferencing
Audio and Video Streaming: UDP's low-latency characteristics make it the protocol of choice for transmitting audio and video data.
Real-Time Updates: Features like screen sharing or real-time annotations often use UDP to minimize lag.
Quality Adaptation: UDP allows the application to dynamically adjust the video quality based on network conditions, a feature known as Adaptive Bitrate Streaming.
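To illustrate quality adaptation, here is a toy Python sketch of picking a rung on a bitrate ladder from estimated bandwidth and packet loss. The ladder values, headroom factor, and loss back-off are invented for illustration; real systems (e.g., WebRTC's congestion control) are far more sophisticated.

```python
# Toy adaptive-bitrate selection: choose the highest quality rung whose
# bandwidth requirement fits within the usable (discounted) estimate.

LADDER = [               # (label, required Mbps) -- illustrative values
    ("1080p", 4.0),
    ("720p", 2.5),
    ("480p", 1.0),
    ("240p", 0.4),
]

def pick_quality(est_bandwidth_mbps, loss_pct, headroom=0.8):
    usable = est_bandwidth_mbps * headroom  # leave headroom for other traffic
    if loss_pct > 1.0:
        usable *= 0.5                       # back off harder under high loss
    for label, required in LADDER:
        if usable >= required:
            return label
    return "audio-only"

print(pick_quality(10.0, 0.5))  # -> 1080p: ample bandwidth, low loss
print(pick_quality(2.0, 3.0))   # -> 240p: high loss halves usable bandwidth
```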
How do they interact?
When you're using a video-conferencing application that employs both TCP for signalling and UDP for media transport, the congestion control algorithms for each protocol operate independently and simultaneously. This can create a challenging landscape for maintaining network performance and fairness. Here are some possible ways to handle this situation:
TCP-Friendliness: A Balancing Act
The concept of "TCP-friendly" behaviour essentially translates to a fair distribution of network resources between TCP and UDP flows. To understand this better, let's bring in some numerical examples:
TCP has a well-defined congestion control mechanism, which dynamically adjusts the rate of data transmission based on network conditions. A key parameter in this mechanism is the "congestion window" (cwnd), which determines the number of packets that can be sent out before an acknowledgement is received.
Here are some illustrative numbers for TCP's behaviour:
Initial State: TCP starts with a small cwnd (historically 1-4 segments per RFC 3390; modern stacks often use 10 per RFC 6928).
Slow Start Phase: The cwnd doubles every round-trip time (RTT) until it reaches a threshold.
Congestion Avoidance Phase: After reaching the threshold, cwnd increases linearly, typically by 1 segment every RTT.
When packet loss is detected (either through triple duplicate ACKs or a timeout), TCP assumes it's due to network congestion. It then reduces its cwnd, usually by half, to alleviate the congestion.
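The trajectory described above can be simulated in a few lines of Python. This is a toy model (one step per RTT, cwnd measured in segments, NewReno-style halving on loss), not an implementation of real TCP.

```python
# Toy simulation of TCP's cwnd trajectory: exponential growth in Slow Start,
# linear growth in Congestion Avoidance, multiplicative decrease on loss.

def simulate_cwnd(rtts, ssthresh=16, loss_at=()):
    """Return cwnd (in segments) at the start of each of `rtts` round trips."""
    cwnd, history = 2, []
    for rtt in range(rtts):
        history.append(cwnd)
        if rtt in loss_at:                 # loss detected via duplicate ACKs
            ssthresh = max(cwnd // 2, 2)   # remember half the window...
            cwnd = ssthresh                # ...and resume from there
        elif cwnd < ssthresh:
            cwnd *= 2                      # Slow Start: double every RTT
        else:
            cwnd += 1                      # Congestion Avoidance: +1 segment/RTT
    return history

print(simulate_cwnd(10, ssthresh=16, loss_at={6}))
# -> [2, 4, 8, 16, 17, 18, 19, 9, 10, 11]
```

The printed trajectory shows the classic sawtooth: fast ramp-up, gentle probing, a sharp halving on loss, then gentle probing again.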
Unlike TCP, traditional UDP does not have built-in congestion control. This means that a UDP flow could, in theory, send data as fast as the application produces it and as much as the network allows it, without any regard for other flows.
Let's consider some numbers:
A UDP-based video stream might send data at a constant rate of 5 Mbps.
On a network with a total available bandwidth of 10 Mbps, this UDP flow alone could consume half the total bandwidth.
If multiple such UDP streams operate concurrently, or if other TCP-based applications are running, the available bandwidth might be completely consumed by these aggressive UDP flows.
The Solution: Making UDP "TCP-Friendly"
To prevent UDP from overwhelming a network, modern applications often implement a congestion control mechanism similar to TCP for their UDP flows. This "TCP-friendly" behaviour ensures that UDP doesn't unfairly dominate the bandwidth.
A "TCP-friendly" UDP flow might start transmitting at a lower data rate, e.g., 1 Mbps.
It would then monitor network feedback (like packet loss rate or RTT) to adjust its sending rate.
If no network congestion is detected, it might increase its rate by a certain percentage every RTT, similar to how TCP's cwnd grows during the Congestion Avoidance phase.
Upon detecting congestion (e.g., if packet loss exceeds 1% or RTT increases significantly), the application employs adaptive rate control to cut its transmission rate, mirroring TCP's multiplicative decrease.
The goal of these "TCP-friendly" UDP algorithms is to ensure that when both UDP and TCP flows are present, they each get roughly the same share of the network bandwidth. If both were to increase their sending rates simultaneously, they would both see the same packet loss rate and react similarly, ensuring fairness.
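One well-known way to compute such a fair rate is the simplified TCP throughput equation (the "square-root formula"), which underpins TCP-Friendly Rate Control (TFRC, RFC 5348). Here is a sketch in Python; the input numbers are illustrative.

```python
import math

def tcp_friendly_rate(mss_bytes, rtt_s, loss_rate):
    """Simplified TCP throughput equation (the 'square-root formula'),
    the basis of TCP-Friendly Rate Control (TFRC, RFC 5348):

        rate = MSS / (RTT * sqrt(2p / 3))   [bytes per second]

    A UDP sender that caps itself at this rate competes roughly fairly
    with a TCP flow seeing the same RTT and loss rate p."""
    return mss_bytes / (rtt_s * math.sqrt(2 * loss_rate / 3))

# With 1460-byte segments, a 100 ms RTT, and 1% packet loss:
rate_bps = tcp_friendly_rate(1460, 0.100, 0.01) * 8
print(f"{rate_bps / 1e6:.2f} Mbps")   # -> 1.43 Mbps
```

Note how the formula encodes the fairness intuition from the text: higher loss or higher RTT pushes the fair rate down, exactly as a competing TCP flow's throughput would fall.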
In-depth: Dual-Protocol Scenarios in Video Conferencing
When TCP and UDP work together in a video conferencing application, they each have their own roles and mechanisms for handling congestion. Let's consider two scenarios—high congestion and low congestion—to explore how these protocols co-exist and adapt in real-world situations.
Example 1: High Congestion Scenario
In this situation, let's assume the network is congested, with available bandwidth reduced to just 2 Mbps. Additionally, we're experiencing a high packet loss rate of around 3%.
Congestion Window Size: Initially, let's say TCP had a congestion window size of 16 KB. On detecting the high packet loss, it halves the window to 8 KB (multiplicative decrease) and continues in Congestion Avoidance mode.
Data Transmission: With an 8 KB window size, TCP will only send that much data before waiting for an acknowledgement from the receiver. This ensures it doesn't contribute further to network congestion.
Video Quality: If streaming at 1080p initially requires 4 Mbps, the application streaming over UDP will detect the high packet loss and reduced bandwidth. It may then downgrade the video quality to 480p, which requires around 1 Mbps.
Adaptive Bitrate: The adaptive rate control algorithm might even briefly pause the video while it computes the new bitrate and switches the stream over.
TCP-Friendly UDP: UDP's Adaptive Rate Control ensures it doesn't hog the already limited bandwidth, thereby being "TCP-friendly."
Resource Allocation: Both protocols are dynamically adjusting to ensure they use only a portion of the available 2 Mbps, thus sharing the limited resource.
Example 2: Low Congestion Scenario
In this situation, let's assume the network is performing well, with ample bandwidth of 10 Mbps and a packet loss rate of just 0.5%.
Congestion Window Size: TCP is already in Congestion Avoidance mode but senses the good network conditions. It may gradually increase its window size, say from 16 KB to 32 KB, to improve data transfer rates.
Data Transmission: With a larger window size, TCP can now send more data packets before waiting for acknowledgements, effectively making use of the available bandwidth.
Video Quality: Given the low packet loss and high bandwidth, UDP can comfortably maintain a high-quality 1080p video stream, requiring around 4 Mbps.
Adaptive Bitrate: Since the network conditions are stable, the adaptive rate control algorithm maintains the current high-quality stream without needing any adjustments.
Optimized Usage: Both TCP and UDP can optimize their data rates to make the best use of the available 10 Mbps bandwidth.
Resource Allocation: Each protocol dynamically adjusts its data rate, ensuring neither is starved of bandwidth or causing congestion.
By understanding these dual-protocol scenarios, you can see how TCP and UDP intelligently adapt their behaviour based on network conditions. This adaptability is crucial for applications like video conferencing, where both speed and reliability are key.
Here at Huddle01, we recognise that congestion control is more than just a set of algorithms; it's the backbone that supports every seamless video call, every uninterrupted meeting, and every crystal-clear audio transmission on our platform. Our tech team is always tinkering to make sure everything adapts well, even when your internet is acting up.
But hey, nobody's perfect, and sometimes the internet throws curveballs that even the best algorithms can't catch. Rest assured, we're always on the case, fine-tuning our system to give you the most seamless experience possible. So the next time your Huddle01 call just 'works,' know that there's some serious engineering magic making it all happen.
Further Reading
Books: "Computer Networking: A Top-Down Approach" by James F. Kurose and Keith W. Ross - a comprehensive book that covers all aspects of networking, including congestion control algorithms.
Courses: Stanford University's "Introduction to Computer Networking" - an excellent resource for those who prefer video-based learning.