Hi ECN-sane,

I reduced the CC list as I do not expect this to further the discussion much, 
but since I feel I have invested too much time reading the L4S RFCs and papers, 
I still want to air this here to get some feedback.


> On Mar 20, 2019, at 21:55, Greg White <g.wh...@cablelabs.com> wrote:
> 
> In normal conditions, L4S offers "Maximize Throughput" +  "Minimize Loss" + 
> "Minimize Latency" all at once.  It doesn't require an application to have to 
> make that false choice (hence the name "Low Latency Low Loss Scalable 
> throughput").  
> 
> If an application would prefer to "Minimize Cost", then I suppose it could 
> adjust its congestion control to be less aggressive (assuming it is 
> congestion controlled). Also, as you point out the LEPHB could be an option 
> as well.
> 
> What section 4.1 in the dualq draft is referring to is a case where the 
> system needs to protect against unresponsive, overloading flows in the low 
> latency queue.   In that case something has to give (you can't ensure low 
> latency & low loss to e.g. a 100 Mbps unresponsive flow arriving at a 50 Mbps 
> bottleneck).

        Which somewhat puts the claim of "ultra-low queueing latency" (see 
https://tools.ietf.org/html/draft-ietf-tsvwg-l4s-arch-03) into perspective. 
IMHO, ultra-low queueing latency is only going to happen in steady state if 
effectively pacing senders spread out their sending rates such that packets 
already arrive nicely spaced. I note that with that kind of traffic pattern 
other AQMs would also offer ultra-low queueing latency... I also note that 
"'Data Centre to the Home': Ultra-Low Latency for All" reports 20 ms of queue 
delay with a 7 ms base link delay @ 40 Mbps; a back-of-the-envelope calculation 
tells us that at that rate ((40 * 1000^2) / (1538 * 8)) * 0.02 = 65 packets 
will be queued on the classic side of the dualpi2 AQM; add 65 more on the L4S 
side and the combined queueing delay will be at 40 ms. Switching to a pacing 
and less aggressive TCP variant helps to smooth out the steady-state bursts, 
but will do zilch for transients caused by an increase in the number of active 
flows.
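The back-of-the-envelope numbers above can be checked with a few lines of 
Python (the 1538-byte on-the-wire frame size and the 20 ms per-queue delay are 
the same assumptions used in the paragraph above, not figures taken from the 
draft itself):

```python
# Sanity-check the queueing-delay arithmetic from the paragraph above.
# Assumptions: 40 Mbit/s link, 1538-byte on-the-wire Ethernet frames
# (1500-byte MTU plus framing), 20 ms of queue delay per queue.

LINK_RATE_BPS = 40 * 1000**2   # 40 Mbit/s
PACKET_BITS = 1538 * 8         # full-size Ethernet frame, in bits
QUEUE_DELAY_S = 0.02           # 20 ms observed per-queue delay

# Packet serialization rate of the link: ~3251 packets/s (~3.251 kHz).
packets_per_second = LINK_RATE_BPS / PACKET_BITS

# Packets needed to sustain 20 ms of standing queue in one queue: ~65.
packets_per_queue = packets_per_second * QUEUE_DELAY_S

# If both the classic and the L4S queues hold 20 ms worth of packets,
# the combined queueing delay doubles to 40 ms.
total_delay_ms = 2 * QUEUE_DELAY_S * 1000

print(round(packets_per_second))  # ~3251
print(round(packets_per_queue))   # ~65
print(total_delay_ms)             # 40.0
```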

I wonder how this is going to behave once new flows come in at a high rate (at 
a serialization rate of ~3.251 kpackets/s, adding flows at 100 Hz does not seem 
like that heavy a load to me). Actually, I don't wonder: the dual-queue AQM 
draft indicates that the queue will grow to 250 ms and tail dropping will 
start. If I were responsible at the IETF, I really would want to see some 
analysis of resistance against adversarial traffic patterns before going that 
route, especially in the light of the fuzzy classification via ECT(1).

Best Regards
        Sebastian


> 
> -Greg
> 
> 
> 
> 
> On 3/20/19, 2:05 PM, "Bloat on behalf of Jonathan Morton" 
> <bloat-boun...@lists.bufferbloat.net on behalf of chromati...@gmail.com> 
> wrote:
> 
>> On 20 Mar, 2019, at 9:39 pm, Gorry Fairhurst <go...@erg.abdn.ac.uk> wrote:
>> 
>> Concerning "Maximize Throughput", if you don't need scalability to very high 
>> rates, then is your requirement met by TCP-like semantics, as in TCP with 
>> SACK/loss or even better TCP with ABE/ECT(0)?
> 
>    My intention with "Maximise Throughput" is to support the bulk-transfer 
> applications that TCP is commonly used for today.  In Diffserv terminology, 
> you may consider it equivalent to "Best Effort".
> 
>    As far as I can see, L4S offers "Maximise Throughput" and "Minimise 
> Latency" services, but not the other two.
> 
>     - Jonathan Morton
> 
>    _______________________________________________
>    Bloat mailing list
>    Bloat@lists.bufferbloat.net
>    https://lists.bufferbloat.net/listinfo/bloat
> 
> 

