Re: [Bloat] [Rpm] [Make-wifi-fast] The most wonderful video ever about bufferbloat

2022-10-20 Thread Sebastian Moeller via Bloat
Hi Dave,


> On Oct 20, 2022, at 21:12, Dave Taht via Rpm  
> wrote:
> 
> On Thu, Oct 20, 2022 at 12:04 PM Bob McMahon via Make-wifi-fast
>  wrote:
>> 
>> Intel has a good analogous video on this with their CPU video here going 
>> over branches and failed predictions. And to Stuart's point, the longer 
>> pipelines make the forks worse in the amount of in-process bytes that need 
>> to be thrown away. Interactivity, in my opinion, suggests shrinking the 
>> pipeline because, with networks, there is no quick way to throw away stale 
>> data; rather, every forwarding device continues forwarding invalid data. 
>> That's bad for the network too, spending resources on something that's no 
>> longer valid. We in the test & measurement community never measure this.
> 
> One of my all time favorite demos was of Stuart's remote desktop
> scenario, where he moved the mouse and the window moved with it.

[SM] Fair enough. However, back in 2015 I was using NX's remote X11 
desktop solution, which even from Central Europe to California let me 
remote-control graphical applications far better than the first demo, with its 
multi-second delay between mouse movement and the resulting screen update. (This 
was over a 6/1 Mbps ADSL link, admittedly shaped with HTB+fq_codel, but since the 
session did not saturate the link I attribute the usability to NX's better 
design.) I will make an impolite suggestion here: the demonstrated screen-sharing 
program simply had not yet been optimized/designed for longer, slower paths... 
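
A minimal sketch of that kind of shaper, for anyone who wants to reproduce the
setup (assuming a 1 Mbps uplink on eth0, shaping egress to ~95% of it so the
queue forms under our control; ingress shaping via an ifb device is left out):

    # HTB root with a single default class just below the uplink rate
    tc qdisc replace dev eth0 root handle 1: htb default 10
    tc class add dev eth0 parent 1: classid 1:10 htb rate 950kbit ceil 950kbit
    # fq_codel as the leaf qdisc keeps per-flow queues short
    tc qdisc add dev eth0 parent 1:10 fq_codel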

Regards
Sebastian



Re: [Bloat] [Rpm] [Make-wifi-fast] The most wonderful video ever about bufferbloat

2022-10-20 Thread Dave Taht via Bloat
On Thu, Oct 20, 2022 at 11:32 AM Stuart Cheshire  wrote:
>
> On 20 Oct 2022, at 02:36, Sebastian Moeller  wrote:
>
> > Hi Stuart,
> >
> > [SM] That seems to be somewhat optimistic. We have been there before: short 
> > of mandating actually-working oracle schedulers on all end-points, 
> > intermediate hops will see queues, some more and some less transient. So we 
> > can strive to minimize queue build-up, sure, but we cannot avoid queues, and 
> > long queues, completely, so we need methods to deal with them gracefully.
> > Also, not many applications are actually helped all that much by letting 
> > information get stale in their own buffers as compared to an on-path queue. 
> > Think of an on-line reaction-time-gated game: the need is to distribute 
> > current world state to all participating clients ASAP.
>
> I’m afraid you are wrong about this. If an on-line game wants low delay, the 
> only answer is for it to avoid generating position updates faster than the 
> network can carry them. One packet giving the current game player position is 
> better than a backlog of ten previous stale ones waiting to go out. Sending 
> packets faster than the network can carry them does not get them to the 
> destination faster; it gets them there slower. The same applies to frames in 
> a screen sharing application. Sending the current state of the screen *now* 
> is better than having a backlog of ten previous stale frames sitting in 
> buffers somewhere on their way to the destination. Stale data is not 
> inevitable. Applications don’t need to have stale data if they avoid 
> generating stale data in the first place.

The core of what you describe is that transports and applications are
evolving towards being delay-aware, which is the primary outcome you get
from an FQ'd environment, be the FQs physical (VoQs, LAGs, multiple
channels or subcarriers in wireless technologies) or virtual (fq-codel,
cake, fq-pie), so that the only remaining source of congestion is
self-harm.

Everything from BBR, to Google's gcc for videoconferencing, to recent
work on Swift ( https://research.google/pubs/pub49448/ ), seems to be
pointing this way.
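
On Linux the per-socket opt-in looks roughly like this (a sketch, assuming
a kernel with the bbr module available; not code from any of the projects
named above):

    #include <string.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Select a delay-aware congestion controller for one socket. */
    static int use_bbr(int fd)
    {
        const char cc[] = "bbr";
        return setsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, cc, strlen(cc));
    }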

I'm also loving the work on reliable FQ detection for QUIC.


-- 
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-698135607352320-FXtz
Dave Täht CEO, TekLibre, LLC


Re: [Bloat] [Make-wifi-fast] [Rpm] The most wonderful video ever about bufferbloat

2022-10-20 Thread Sebastian Moeller via Bloat
Hi Bob,

I think I agree, and I also agree with the goal of keeping queues small to 
non-existent; all I am saying is that this is fine as a goal, but unrealistic 
as a reachable end-point. Queues in the network serve a purpose (actually 
multiple) and are not pure bloat. The trick is to keep the good properties 
while minimizing the bad. The way I put it is: 
over-sized and under-managed buffers/queues are bad, and the solution is not to 
get rid of queues but to size them better and, more importantly, manage them 
better. That will result in overall less queue delay, but critically not zero 
queue delay.

Regards
Sebastian




Re: [Bloat] [Make-wifi-fast] [Rpm] The most wonderful video ever about bufferbloat

2022-10-20 Thread Bob McMahon via Bloat
The demo is nice, but a way to measure this with full statistics would be
actionable by engineers. I did add support for TCP write times with
histograms: with TCP_NOTSENT_LOWAT set, these give a sense of the network
responsiveness, as the writes will await the select() indicating the
pipeline has drained. Nobody really uses this much.
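
A minimal sketch of that measurement (not iperf 2's actual code; it assumes
TCP_NOTSENT_LOWAT has already been set on fd): time how long select() makes
the writer wait before the pipeline has drained enough to accept more data:

    #include <sys/select.h>
    #include <time.h>
    #include <unistd.h>

    /* Seconds the writer waited for the unsent backlog to drop below
       TCP_NOTSENT_LOWAT; feed these samples into a histogram. */
    static double timed_write(int fd, const void *buf, size_t len)
    {
        struct timespec t0, t1;
        fd_set wfds;
        FD_ZERO(&wfds);
        FD_SET(fd, &wfds);
        clock_gettime(CLOCK_MONOTONIC, &t0);
        select(fd + 1, &wfds, NULL, NULL, NULL); /* writable: drained */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        write(fd, buf, len);
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }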

Also, there is a suggestion for the server to generate branches, so to speak,
by sending an event back to the client, e.g. move the video pointer. But
how does the test tool decide when to create the user events that need to
be sent back? How long does it wait between events, etc.?

Bob


Re: [Bloat] [Make-wifi-fast] [Rpm] The most wonderful video ever about bufferbloat

2022-10-20 Thread Dave Taht via Bloat
On Thu, Oct 20, 2022 at 12:04 PM Bob McMahon via Make-wifi-fast
 wrote:
>
> Intel has a good analogous video on this with their CPU video here going over 
> branches and failed predictions. And to Stuart's point, the longer pipelines 
> make the forks worse in the amount of in-process bytes that need to be thrown 
> away. Interactivity, in my opinion, suggests shrinking the pipeline because, 
> with networks, there is no quick way to throw away stale data; rather, every 
> forwarding device continues forwarding invalid data. That's bad for the 
> network too, spending resources on something that's no longer valid. We in 
> the test & measurement community never measure this.

One of my all time favorite demos was of Stuart's remote desktop
scenario, where he moved the mouse and the window moved with it.




-- 
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-698135607352320-FXtz
Dave Täht CEO, TekLibre, LLC


Re: [Bloat] [Make-wifi-fast] [Rpm] The most wonderful video ever about bufferbloat

2022-10-20 Thread Bob McMahon via Bloat
Intel has a good analogous video on this with their CPU video here going
over branches and failed predictions. And to Stuart's point, the longer
pipelines make the forks worse in the amount of in-process bytes that need
to be thrown away. Interactivity, in my opinion, suggests shrinking the
pipeline because, with networks, there is no quick way to throw away stale
data; rather, every forwarding device continues forwarding invalid data.
That's bad for the network too, spending resources on something that's no
longer valid. We in the test & measurement community never measure this.

There have been a few requests that iperf 2 measure the "bytes thrown away"
per fork (user moves a video pointer, etc.). I haven't come up with a good
test yet. I'm still trying to get basic awareness about existing latency,
OWD, and responsiveness metrics. I do think measuring the amount of
resources spent on stale data is sorta like food waste: few really pay
attention to it.

Bob

FYI, iperf 2 supports TCP_NOTSENT_LOWAT for those interested.

--tcp-write-prefetch n[kmKM]
Set TCP_NOTSENT_LOWAT on the socket and use event-based writes per select()
on the socket.
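
For example (hypothetical server name; -e enables iperf 2's enhanced output,
-i 1 prints per-second intervals):

    iperf -c server.example.com -e -i 1 --tcp-write-prefetch 16K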





Re: [Bloat] [Rpm] [Make-wifi-fast] The most wonderful video ever about bufferbloat

2022-10-20 Thread Stuart Cheshire via Bloat
On 20 Oct 2022, at 02:36, Sebastian Moeller  wrote:

> Hi Stuart,
> 
> [SM] That seems to be somewhat optimistic. We have been there before: short 
> of mandating actually-working oracle schedulers on all end-points, 
> intermediate hops will see queues, some more and some less transient. So we 
> can strive to minimize queue build-up, sure, but we cannot avoid queues, and 
> long queues, completely, so we need methods to deal with them gracefully.
> Also, not many applications are actually helped all that much by letting 
> information get stale in their own buffers as compared to an on-path queue. 
> Think of an on-line reaction-time-gated game: the need is to distribute current 
> world state to all participating clients ASAP.

I’m afraid you are wrong about this. If an on-line game wants low delay, the 
only answer is for it to avoid generating position updates faster than the 
network can carry them. One packet giving the current game player position is 
better than a backlog of ten previous stale ones waiting to go out. Sending 
packets faster than the network can carry them does not get them to the 
destination faster; it gets them there slower. The same applies to frames in a 
screen sharing application. Sending the current state of the screen *now* is 
better than having a backlog of ten previous stale frames sitting in buffers 
somewhere on their way to the destination. Stale data is not inevitable. 
Applications don’t need to have stale data if they avoid generating stale data 
in the first place.
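
A minimal sketch of that pattern (assuming TCP_NOTSENT_LOWAT is set
elsewhere, so select() reports the socket writable only once the previous
update has drained; get_latest_snapshot() is a hypothetical application
hook that always returns the newest state, silently discarding older,
never-sent snapshots):

    #include <stddef.h>
    #include <sys/select.h>
    #include <unistd.h>

    struct snapshot { char buf[512]; size_t len; };

    extern void get_latest_snapshot(struct snapshot *s); /* newest state only */

    static void send_loop(int fd)
    {
        struct snapshot s;
        for (;;) {
            fd_set wfds;
            FD_ZERO(&wfds);
            FD_SET(fd, &wfds);
            /* wait until the network can actually take another update */
            select(fd + 1, &wfds, NULL, NULL, NULL);
            /* fetch state *after* the wait, so it is as fresh as possible */
            get_latest_snapshot(&s);
            write(fd, s.buf, s.len);
        }
    }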

Please watch this video, which explains it better than I can in a written email:



Stuart Cheshire



Re: [Bloat] [Rpm] [Make-wifi-fast] Traffic analogies (was: Wonderful video)

2022-10-20 Thread Rich Brown via Bloat


> On Oct 19, 2022, at 7:36 PM, Stephen Hemminger via Rpm 
>  wrote:
> 
> Grocery store analogies also break down because packets are not "precious";
> it is okay to drop packets. A lot of AQM works by doing "drop early and often"
> instead of "drop late and collapse".

Another problem is that grocery store customers are individual flows in their 
own right - not correlated with each other. Why is my grocery cart any more (or 
less) important than all the others who're waiting?

I continue to cast about for intuitive analogies (and getting skunked each 
time). But I'm going to try again...

Imagine a company with a bunch of employees. (Or a sports venue, or a UPS depot 
- any location where a bunch of vehicles with similar interests all decide to 
travel at once.) At quitting time, everyone leaves the parking lot where a 
traffic cop controls entry onto a two-lane road. 

If there isn't any traffic on that road, the traffic cop keeps people coming 
out of the driveway "at the maximum rate".

If a car approaches on the road, what's the fair strategy for letting that 
single car pass? Wait 'til the parking lot empties? Make them wait 5 minutes? 
Make them wait one minute? It seems clear to me that it's fairest to stop 
traffic right away, let the car pass, then resume the driveway traffic.

This has the advantage of distinguishing between new flows (the single car) and 
bulk flows (treating vehicles in the driveway as a single flow). But it also 
feels like QoS prioritization or a simple two-queue model, neither of which 
leads to the proper intuition. 

Any "traffic" analogy also ignores people's very real (and correct) intuition 
that "cars have mass". They can't stop in an instant and need to maintain space 
between them. This also ignores the recently-stated reality (for routers, at 
least) that "The best queue is no queue at all..."

Is there any hope of tweaking this analogy? :-)

Thanks.

Rich


Re: [Bloat] [Rpm] [Make-wifi-fast] The most wonderful video ever about bufferbloat

2022-10-20 Thread Sebastian Moeller via Bloat
Hi Stuart,


> On Oct 19, 2022, at 22:44, Stuart Cheshire via Rpm 
>  wrote:
> 
> On Mon, Oct 17, 2022 at 5:02 PM Stuart Cheshire  wrote:
> 
>> Accuracy be damned. The analogy to common experience resonates more.
> 
> I feel it is not an especially profound insight to observe that, “people 
> don’t like waiting in line.” The conclusion, “therefore privileged people 
> should get to go to the front,” describes an airport first class checkin 
> counter, Disney Fastpass, and countless other analogies from everyday life, 
> all of which are the wrong solution for packets in a network.
> 
>> I think the person with the cheetos pulling out a gun and shooting everyone 
>> in front of him (AQM) would not go down well.
> 
> Which is why starting with a bad analogy (people waiting in a grocery store) 
> inevitably leads to bad conclusions.
> 
> If we want to struggle to make the grocery store analogy work, perhaps we 
> show people checking some grocery store app on their smartphone before they 
> leave home, and if they see that a long line is beginning to form they wait 
> until later, when the line is shorter. The challenge is not how to deal with 
> a long queue when it’s there, it is how to avoid a long queue in the first 
> place.

[SM] That seems to be somewhat optimistic. We have been there before: short of 
mandating actually-working oracle schedulers on all end-points, intermediate 
hops will see queues, some more and some less transient. So we can strive to 
minimize queue build-up, sure, but we cannot avoid queues, and long queues, 
completely, so we need methods to deal with them gracefully.
Also, not many applications are actually helped all that much by letting 
information get stale in their own buffers as compared to an on-path queue. 
Think of an on-line reaction-time-gated game: the need is to distribute current 
world state to all participating clients ASAP. That often means a bunch of 
packets that can not reasonably be held back by the server to pace them out, as 
world state IIUC needs to be transmitted completely for clients to be able to 
actually do the right thing. Such an application will continue to dump its 
world-state burst per client into the network, as that is the required mode of 
operation. I think that there are other applications with similar requirements 
which will make sure that traffic stays bursty, and that IMHO will cause 
transient queues to build up. (Probably short-duration ones, but still.)



> 
>> Actually that analogy is fairly close to fair queuing. The multiple checker 
>> analogy is one of the most common analogies in queue theory itself.
> 
> I disagree. You are describing the “FQ” part of FQ_CoDel. It’s the “CoDel” 
> part of FQ_CoDel that solves bufferbloat. FQ has been around for a long time, 
> and at best it partially masked the effects of bufferbloat. Having more 
> queues does not solve bufferbloat. Managing the queue(s) better solves 
> bufferbloat.

[SM] Yes and no. IMHO it is the FQ part that gets greedy traffic off 
the back of those flows that stay below their capacity share, as it (unless 
overloaded) will isolate the consequences of exceeding one's capacity share to 
the flow(s) doing so. The AQM part then helps greedy traffic not to congest 
itself unduly.
So for quite a lot of application classes (e.g. my world-state 
distribution example above) FQ (or any other type of competent scheduling) will 
already solve most of the problem; heck, if ubiquitous it would even allow 
greedy traffic to switch to delay-based CC methods that can help keep queues 
small even without competent AQM at the bottlenecks (not that I 
recommend/endorse that, I am all for competent AQM/scheduling at the 
bottlenecks*).



> 
>> I like the idea of a guru floating above a grocery cart with a better string 
>> of explanations, explaining
>> 
>>  - "no, grasshopper, the solution to bufferbloat is no line... at all".
> 
> That is the kind of thing I had in mind. Or a similar quote from The Matrix. 
> While everyone is debating ways to live with long queues, the guru asks, 
> “What if there were no queues?” That is the “mind blown” realization.

[SM] However the "no queues" state is generally not achievable, nor 
would it be desirable; queues have utility as "shock absorbers" and help 
keep a link busy***. I admit though that "no oversized queues" is far less 
snappy.


Regards
Sebastian


*) Which is why I am vehemently opposed to L4S: it offers neither competent 
scheduling nor competent AQM; in both regimes it is admittedly better than the 
current status quo of having neither, but it falls short of the state of the art 
in both so much that deploying L4S today seems indefensible on technical 
grounds. And lo and behold, one of L4S's biggest proponents does so mainly on 
ideological grounds (just read "Flow rate fairness: dismantling a religion" 
https://dl.acm.org/doi/10.1145/1232919.1232926 and then ask yourself whether 
you should trust such an

Re: [Bloat] A quick report from the WISPA conference

2022-10-20 Thread Sebastian Moeller via Bloat
Hi Sina,

On 20 October 2022 07:15:47 CEST, Sina Khanifar  wrote:
>Hi Sebastian,
>
>> 
>> [SM] Just an observation, using Safari I see large maximal delays (like a
>> small group of samples far out to the right of the bulk) for both down-
>> and upload that essentially disappear when I switch to firefox. Now I tend
>> to have a ton of tabs open in Safari while I only open firefox for
>> dedicated use-cases with a few tabs at most, so I do not intend to throw
>> shade on Safari here; my point is more browsers can and do affect the
>> reported latency numbers, of you want to be able to test this, maybe ask
>> users to use the OS browser (safari, edge, konqueror ;) ) as well as
>> firefox and chrome so you can directly compare across browsers?
>> 
>
>I believe this is because we use the WebTiming APIs to get more accurate 
>latency numbers, but the API isn't fully supported on Safari. As such, latency 
>measurements in Safari are much less accurate than in Firefox and Chrome.
>
>> 
>> traceroute/mtr albeit not sure how well this approach works from inside
>> the browser, can you e.g. control TTL and do you receive error messages
>> via ICMP?
>> 
>
>Unfortunately traceroutes via the browser don't really work :(. And I don't 
>believe we can control TTL or see ICMP error messages either, though I haven't 
>dug into this very deeply.
>
>> 
>> 
>> 
>> Over in the OpenWrt forum we often see that server performance with
>> iperf2/3 or netperf on a router is not all that representative for its
>> routing performance. What do you expect to deduce from upload/download to
>> the router? (I might misunderstand your point by a mile, if so please
>> elaborate)
>> 
>> 
>> 
>
>The goal would be to test the "local" latency, throughput, and bufferbloat 
>between the user's device and the router, and then compare this with the 
>latency, throughput, and bufferbloat when DL/ULing to a remote server.
>
>This would reveal whether the dominant source of increase in latency under 
>load is at the router's WAN interface or somewhere between the router and the 
>user (e.g. WiFi, ethernet, powerline, Moca devices, PtP connections, etc).
>
>Being able to test the user-to-router leg of the connection would be helpful 
>more broadly beyond just bufferbloat. I often want to diagnose whether my 
>connection issues or speed drops are happening due to an issue with my modem 
>(and more generally the WAN connection) or if it's an issue with my wifi 
>connection.
>
>I guess I don't quite understand this part though: "iperf2/3 or netperf on a 
>router is not all that representative for its routing performance." What 
>exactly do you mean here?


[SM] IIRC some router SoCs allow higher routing throughput than they can sink or 
source bulk traffic with iperf. This is especially pronounced in routers that 
use soft- and especially hardware acceleration and where the access speed is 
already beyond what the CPU can deliver on its own; iperf, not being accelerated, 
will tend to show throughput below the routing capacity, making interpretation 
of problems somewhat hard. IIUC similar issues arise for big-iron routers that 
pair a potent routing ASIC with a somewhat measly CPU: all tasks that get 
relegated to the CPU make the router appear much slower.
I guess you would need a second iperf server in your network and configure your 
router such that it has to route packets between server and client to get an 
idea what routing limits your router actually has. And even that is somewhat 
incomplete if, say, the ISP uses additional costly layers like PPPoE on the 
access link that the router needs to terminate.
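
A sketch of that two-host setup (hypothetical address; the iperf 2 server
sits on the far side of the router so traffic is routed rather than
terminated on the router itself):

    # on a host on the other side of the router
    iperf -s
    # on the client, through the router (-e = enhanced output)
    iperf -c 192.0.2.10 -e -i 1 -t 30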

Regards
Sebastian



>
>> 
>> Most recent discussion moved over to 
>> https://forum.openwrt.org/t/cake-w-adaptive-bandwidth/135379
>> 
>> 
>> 
>
>Thanks! I have a lot of catching up to do on that thread, and some of it is 
>definitely above my pay grade :).
>
>> 
>> I think this ideally would be solved at the 3GPP level
>> 
>> 
>
>Agreed. I wonder if there's anything we can do to encourage them to pay 
>attention to this.
>
>Best regards,
>
>Sina.
>
>On Tue, Oct 18, 2022 at 12:04 PM, Sebastian Moeller < moell...@gmx.de > wrote:
>
>> 
>> 
>> 
>> Hi Sina,
>> 
>> 
>> 
>> 
>> On 18 October 2022 19:17:16 CEST, Sina Khanifar via Bloat < bloat@ lists. 
>> bufferbloat.
>> net ( bloat@lists.bufferbloat.net ) > wrote:
>> 
>> 
>> 
>>> 
 
 
 I can't help but wonder tho... are you collecting any statistics, over
 time, as to how much better the problem is getting?
 
 
 
>>> 
>>> 
>>> 
>>> We are collecting anonymized data, but we haven't analyzed it yet. If we
>>> get a bit of time we'll look at that hopefully.
>>> 
>>> 
>>> 
>> 
>> 
>> 
>> [SM] Just an observation, using Safari I see large maximal delays (like a
>> small group of samples far out to the right of the bulk) for both down-
>> and upload that essentially disappear when I switch to firefox. Now I tend
>> to have a ton of tabs open in Safari while I only open fi