Hi Hang!

> On 24 Jul 2022, at 23:57, shihang (C) <[email protected]> wrote:
> 
> "Rate control" reminds me of the application layer video codec bitrate 
> control.

Why is that a problem? Even though it’s at a higher layer, it’s a form of 
congestion control.


> "Performance maximizer" is too broad, covering both application-layer and 
> transport-layer algorithms. 

> 
> Maybe congestion control is a good name.

No, it really isn’t (and in fact hasn’t been for a long time: the prevalent 
problem of CC has been underutilization since the days of Cubic’s inception), 
because it makes people think “congestion doesn’t exist, so I don’t need this”.


> Think of it as the transport layer controlling/avoiding congestion for 
> the application. The symptoms of congestion are packet loss and/or long 
> queuing delay, which *may not* be desirable for the application. If the 
> application works fine in the presence of congestion, then it does not 
> need congestion control.

That’s wrong. Such an application can harm everyone else. Even CBR applications 
should at least have a circuit breaker.
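To make the circuit-breaker point concrete, here is a minimal sketch of the idea in the spirit of RFC 8084: even a sender that refuses to adapt its rate should shut off under persistent heavy loss. The class name, thresholds, and interface below are all invented for illustration, not taken from any specification.

```python
# Toy transport circuit breaker: trips (halts the sender) only on
# *persistent* congestion, i.e. several consecutive lossy intervals.
# Thresholds here are illustrative placeholders.

class CircuitBreaker:
    def __init__(self, loss_threshold=0.1, trip_after=3):
        self.loss_threshold = loss_threshold  # loss fraction counted as "bad"
        self.trip_after = trip_after          # consecutive bad intervals before tripping
        self.bad_intervals = 0
        self.tripped = False

    def report_interval(self, packets_sent, packets_lost):
        """Call once per measurement interval with that interval's counters."""
        if packets_sent == 0:
            return self.tripped
        loss = packets_lost / packets_sent
        if loss > self.loss_threshold:
            self.bad_intervals += 1
        else:
            self.bad_intervals = 0  # a single clean interval resets the count
        if self.bad_intervals >= self.trip_after:
            self.tripped = True     # sender must stop or sharply reduce its rate
        return self.tripped


cb = CircuitBreaker()
for sent, lost in [(100, 2), (100, 20), (100, 25), (100, 30)]:
    tripped = cb.report_interval(sent, lost)
print(tripped)  # True: three consecutive lossy intervals tripped the breaker
```

The point is that this is a last-resort safety valve, not congestion control: it does nothing to find a good rate, it only stops a misbehaving flow from harming everyone else indefinitely.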


> Otherwise they need to do something to "control the congestion", because if 
> they don't and send at network interface speed, they will experience 
> congestion at a bottleneck somewhere along the path. And the congestion may 
> also be caused by other bottleneck-sharing flows which send too fast. The 
> means of controlling the congestion is adjusting the sending rate, in the 
> form of a send window or pacing rate. The name "congestion control" talks 
> about the goal of the algorithm, and "sending rate control" talks about the 
> means to achieve the goal.

But it’s not just “congestion” that this is about - an application which never 
uses feedback about the path will also never learn how fast it *could* have 
sent. Sending as fast as the path allows, thereby cutting transfer completion 
time (and with it latency and energy wastage!), is what good CC can give to an 
application, and this is completely hidden by the current name of this 
functionality.
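A toy back-of-the-envelope sketch of what a blind fixed-rate sender leaves on the table (all numbers here are invented for illustration):

```python
# Hypothetical scenario: a 100 Mbit/s bottleneck, a 50 Mbit transfer,
# and a "safe" fixed rate of 5 Mbit/s chosen without any path feedback.

bottleneck_mbps = 100   # capacity the path could actually sustain
transfer_mbit = 50      # size of the transfer

fixed_rate_mbps = 5     # conservative rate picked blind

t_fixed = transfer_mbit / fixed_rate_mbps      # completion time without feedback
t_feedback = transfer_mbit / bottleneck_mbps   # completion time if CC finds the capacity

print(t_fixed, t_feedback)  # 10.0 0.5
```

Twenty times longer to complete, with the link idle 95% of the time - and the name “congestion control” gives no hint that avoiding exactly this is part of the job.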

Indeed, CC on the Internet (as opposed to datacenters, e.g. with mechanisms 
such as HPCC!) today doesn’t do very much to give feedback to applications 
before they start exceeding the maximum capacity, but if we want to fix this 
some day, then changing the name might be a good start too. Having a name that 
refers to a mostly non-existent problem makes the whole topic stale.

Cheers,
Michael



> 
> Best,
> Hang Shi
> 
> -----Original Message-----
> From: tsv-area <[email protected]> On Behalf Of Michael Welzl
> Sent: Monday, July 25, 2022 5:30 AM
> To: Bless, Roland (TM) <[email protected]>
> Cc: Toerless Eckert <[email protected]>; Stuart Cheshire 
> <[email protected]>; [email protected]
> Subject: Re: At TSVAREA in Philadelphia: The Future of Congestion Control at 
> IETF (and a new name for it?)
> 
> 
> 
>> On Jul 24, 2022, at 2:50 PM, Bless, Roland (TM) <[email protected]> wrote:
>> 
>> Hi Michael,
>> 
>> see below.
>> 
>> On 24.07.22 at 12:12 Michael Welzl wrote:
>>>> On Jul 24, 2022, at 3:03 AM, Toerless Eckert <[email protected]> wrote:
>>>> 
>>>> Inline
>>>> 
>>>> On Sat, Jul 23, 2022 at 08:10:43PM -0400, Stuart Cheshire wrote:
>>>>> I feel that in retrospect the name “congestion control” was a poor 
>>>>> choice. Too often when I talk to people building their own home-grown 
>>>>> transport protocol on top of UDP, and I ask them what congestion control 
>>>>> algorithm they use, they smile smugly and say, “We don’t need congestion 
>>>>> control.” They explain that their protocol won’t be used on congested 
>>>>> networks.
>>> I agree SO strongly !!!!!
>>> The main problem of congestion control these days appears to be that 
>>> networks are mostly underutilized (see the thread on ICCRG I started by 
>>> pointing at our ComMag paper) - the issue is to increase the rate as 
>>> quickly as possible, without producing congestion.
>> 
>> Now, what is congestion in this context? Packet loss or too high queuing 
>> delay?
> 
> I’d say both, but -
> 
> 
>>> It should really be called “rate control”.
>>> It’s about a sending rate - whether that is indirectly achieved by 
>>> controlling a window or explicitly by changing a rate in bits per second 
>>> doesn’t really matter.
>> 
>> Yes, one would assume that, but technically it does matter.
> 
> - you seem to misunderstand. I wasn’t suggesting that “rate control is the 
> only way, window control is just the same thing”. Not at all !
> I’m saying that if you do it using a window, or an explicit send rate in 
> bits/second or packets per whatever time unit …  either way, you’re always in 
> fact controlling a rate, and hence “rate control” is a good term for it. 
> Window-based control is just tighter, because:
> 
> 
>> Typically, you want to control the sending rate _and_ the amount of 
>> inflight data. Window-based approaches have the advantage that they 
>> are kind of self-stabilizing, due to the implicit rate feedback from the 
>> growing effective RTT (induced by queueing delay).
>> Moreover, if your congestion window is estimated too large, you just 
>> get a constant amount of excess data in the bottleneck queue, whereas 
>> if your sending rate is too large, you get a growing amount of 
>> inflight data over time and thus increasing excess data in the bottleneck 
>> queue.
>> Even if a rate-based algorithm knew and picked the perfect rate, it 
>> could still result in instability at the bottleneck (cf. queueing theory 
>> at \rho=1). Controlling the sending rate is thus harder and more fragile.
>> Therefore, I'm a strong proponent of having a window-based approach 
>> that uses pacing in addition to avoid micro-bursts and cope with 
>> unsteady/distorted ACK feedback. The other way around would also work 
>> and that's how BBRv2 currently tries to approach the
>> problem: using a rate-based sender that also controls the amount of 
>> inflight data.
> 
> ….yes, yes, of course, all that!  But it’s not really what I meant though  :)
> 
> Cheers,
> Michael
> 
