Jonathan Newton over at Vodafone Group made some interesting observations about what happens when two of the optimal congestion controllers interact through a shared FIFO queue:
> If we take two flows E and F; E is 90% of bandwidth and F 10%; the time
> for the congestion signal to reach the sender for each flow is dE and dF,
> where dE is 10ms and dF 100ms. We assume no prioritisation, so they must
> share the same buffer at X, and therefore share the same peak transient
> delay.
>
> We have an event at t0 where the bandwidth is halved.
>
> For time t0 to t0+dE (the first 10ms), the total rate transmitted by both
> sources is twice ((90%+10%)/50%) the output rate, so the component of the
> peak delay of this part is (2-1)*dE = 10ms.
>
> For time t0+dE to t0+dF (the next 90ms), the total rate transmitted by
> both sources is (45%+10%)/50% of the output rate, so the component of the
> peak delay of this part is (1.1-1)*(dF-dE) = 9ms.
>
> Making the peak transient delay 19ms.

This implies that moving some senders closer to the edge (e.g. CDNs) might help reduce lag spikes for everyone. It also implies that slow-responding flows can have a very large impact on the peak transient latency seen by rapidly responding flows.

If the bandwidth sharing ratio in the above example is 50/50, then the peak transient delay rises to 55ms, seen by both flows. For flow E that's a big increase from the 10ms we'd expect if flow E were alone. For C=3 the increase is even worse, with flow E going from 20ms to 100ms when sharing the link with flow F.

- Bjørn

On Tue, 2 Nov 2021 at 14:08, Dave Taht <dave.t...@gmail.com> wrote:

> I am very pre-coffee. Something that could build on this would involve
> FQ. More I cannot say, til more coffee.
>
> On Tue, Nov 2, 2021 at 3:56 AM Bjørn Ivar Teigen <bj...@domos.no> wrote:
> >
> > Hi everyone,
> >
> > I've recently published a paper on Arxiv which is relevant to the
> > Bufferbloat problem. I hope it will be helpful in convincing AQM doubters.
> > Discussions at the recent IAB workshop inspired me to write a detailed
> > argument for why end-to-end methods cannot avoid latency spikes. I couldn't
> > find this argument in the literature.
> >
> > Here is the Arxiv link: https://arxiv.org/abs/2111.00488
> >
> > A direct consequence is that we need AQMs at all points in the internet
> > where congestion is likely to happen, even for short periods, to mitigate
> > the impact of latency spikes. Here I am assuming we ultimately want an
> > Internet without lag-spikes, not just low latency on average.
> >
> > Hope you find this interesting!
> >
> > --
> > Bjørn Ivar Teigen
> > Head of Research
> > +47 47335952 | bj...@domos.no | www.domos.no
> > WiFi Slicing by Domos
> > _______________________________________________
> > Bloat mailing list
> > Bloat@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/bloat
>
> --
> I tried to build a better future, a few times:
> https://wayforward.archive.org/?site=https%3A%2F%2Fwww.icei.org
>
> Dave Täht CEO, TekLibre, LLC

--
Bjørn Ivar Teigen
Head of Research
+47 47335952 | bj...@domos.no | www.domos.no
WiFi Slicing by Domos
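As a footnote, the transient-delay arithmetic in Jonathan's example can be checked with a short script. This is my own sketch, not something from the thread: it assumes each flow keeps sending at its pre-event share until its own feedback delay elapses, then instantly drops its rate by the capacity-reduction factor C (C=2 means bandwidth halved), and that the peak queueing delay is the excess input integrated over the post-event capacity. The function name `peak_transient_delay` is made up for illustration.

```python
def peak_transient_delay(flows, C=2.0):
    """Peak transient queueing delay after capacity drops by a factor C.

    flows: list of (share, feedback_delay_ms) pairs; shares sum to 1.
    Assumption (mine): each flow transmits at its old share until its own
    feedback delay elapses, then instantly backs off to share / C.
    """
    flows = sorted(flows, key=lambda f: f[1])  # earliest responder first
    new_cap = 1.0 / C        # post-event capacity, in units of old capacity
    delay, t, responded = 0.0, 0.0, 0.0
    for share, d in flows:
        # Aggregate input: backed-off flows send share/C, the rest full share.
        in_rate = responded / C + (1.0 - responded)
        excess = in_rate / new_cap - 1.0   # overload ratio above new capacity
        if excess > 0:
            delay += excess * (d - t)      # queue growth during this interval
        t = d
        responded += share

    return delay

print(peak_transient_delay([(0.9, 10), (0.1, 100)]))        # ~19 ms
print(peak_transient_delay([(0.5, 10), (0.5, 100)]))        # ~55 ms
print(peak_transient_delay([(1.0, 10)], C=3))               # ~20 ms (E alone)
```

This reproduces the 19ms figure from Jonathan's 90/10 example and the 55ms figure for the 50/50 split discussed above.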