Re: [Bloat] A quick report from the WISPA conference

2022-10-19 Thread Sina Khanifar via Bloat
Hi Sebastian,

> 
> [SM] Just an observation, using Safari I see large maximal delays (like a
> small group of samples far out to the right of the bulk) for both down-
> and upload that essentially disappear when I switch to firefox. Now I tend
> to have a ton of tabs open in Safari while I only open firefox for
> dedicated use-cases with a few tabs at most, so I do not intend to throw
> shade on Safari here; my point is more that browsers can and do affect the
> reported latency numbers. If you want to be able to test this, maybe ask
> users to use the OS browser (safari, edge, konqueror ;) ) as well as
> firefox and chrome so you can directly compare across browsers?
> 

I believe this is because we use the WebTiming APIs to get more accurate 
latency numbers, but those APIs aren't fully supported in Safari. As a result, 
latency measurements in Safari are much less accurate than in Firefox and Chrome.
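For what it's worth, the shape of that measurement is easy to sketch outside the browser. This is a hypothetical stand-in, not our actual test code: in the browser we'd read the PerformanceResourceTiming phases, while this Python sketch just times the first response byte of a raw HTTP exchange against a throwaway local server:

```python
# Sketch (illustrative, not the real test code): approximate time-to-first-byte
# the way a browser derives it from Resource Timing entries
# (roughly responseStart - requestStart), here measured by hand over a socket.
import http.server
import socket
import threading
import time

def measure_ttfb_ms(host: str, port: int, path: str = "/") -> float:
    """Time from sending a GET to receiving the first response byte."""
    with socket.create_connection((host, port)) as s:
        req = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
        t0 = time.perf_counter()
        s.sendall(req.encode())
        s.recv(1)  # block until the first byte of the response arrives
        return (time.perf_counter() - t0) * 1000.0

# Demo against a throwaway local server, so the example is self-contained.
server = http.server.HTTPServer(("127.0.0.1", 0),
                                http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
ttfb = measure_ttfb_ms("127.0.0.1", server.server_port)
server.shutdown()
print(f"TTFB: {ttfb:.2f} ms")
```

The browser API additionally separates DNS, connect, and TLS phases, which is exactly the detail Safari doesn't fully expose.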

> 
> traceroute/mtr albeit not sure how well this approach works from inside
> the browser, can you e.g. control TTL and do you receive error messages
> via ICMP?
> 

Unfortunately traceroutes via the browser don't really work :(. And I don't 
believe we can control TTL or see ICMP error messages either, though I haven't 
dug into this very deeply.

> 
> 
> 
> Over in the OpenWrt forum we often see that server performance with
> iperf2/3 or netperf on a router is not all that representative for its
> routing performance. What do you expect to deduce from upload/download to
> the router? (I might misunderstand your point by a mile, if so please
> elaborate)
> 
> 
> 

The goal would be to test the "local" latency, throughput, and bufferbloat 
between the user's device and the router, and then compare this with the 
latency, throughput, and bufferbloat when DL/ULing to a remote server.

This would reveal whether the dominant source of latency increase under load 
is at the router's WAN interface or somewhere between the router and the user 
(e.g. WiFi, ethernet, powerline, MoCA devices, PtP connections, etc.).

Being able to test the user-to-router leg of the connection would be helpful 
more broadly beyond just bufferbloat. I often want to diagnose whether my 
connection issues or speed drops are happening due to an issue with my modem 
(and more generally the WAN connection) or if it's an issue with my wifi 
connection.

I guess I don't quite understand this part though: "iperf2/3 or netperf on a 
router is not all that representative for its routing performance." What 
exactly do you mean here?

> 
> Most recent discussion moved over to
> https://forum.openwrt.org/t/cake-w-adaptive-bandwidth/135379
> 
> 
> 

Thanks! I have a lot of catching up to do on that thread, and some of it is 
definitely above my pay grade :).

> 
> I think this ideally would be solved at the 3GPP level
> 
> 

Agreed. I wonder if there's anything we can do to encourage them to pay 
attention to this.

Best regards,

Sina.

On Tue, Oct 18, 2022 at 12:04 PM, Sebastian Moeller < moell...@gmx.de > wrote:

> 
> 
> 
> Hi Sina,
> 
> 
> 
> 
> On 18 October 2022 19:17:16 CEST, Sina Khanifar via Bloat
> <bloat@lists.bufferbloat.net> wrote:
> 
> 
> 
>> 
>>> 
>>> 
>>> I can't help but wonder tho... are you collecting any statistics, over
>>> time, as to how much better the problem is getting?
>>> 
>>> 
>>> 
>> 
>> 
>> 
>> We are collecting anonymized data, but we haven't analyzed it yet. If we
>> get a bit of time we'll look at that hopefully.
>> 
>> 
>> 
> 
> 
> 
> [SM] Just an observation, using Safari I see large maximal delays (like a
> small group of samples far out to the right of the bulk) for both down-
> and upload that essentially disappear when I switch to firefox. Now I tend
> to have a ton of tabs open in Safari while I only open firefox for
> dedicated use-cases with a few tabs at most, so I do not intend to throw
> shade on Safari here; my point is more that browsers can and do affect the
> reported latency numbers. If you want to be able to test this, maybe ask
> users to use the OS browser (safari, edge, konqueror ;) ) as well as
> firefox and chrome so you can directly compare across browsers?
> 
> 
> 
>> 
>>> 
>>> 
>>> And any chance they could do something similar explaining wifi?
>>> 
>>> 
>>> 
>> 
>> 
>> 
>> I'm actually not exactly sure what mitigations exist for WiFi at the
>> moment - is there something I can read?
>> 
>> 
>> 
>> 
>> On this note: when we were building our test one of the things we really
>> wished existed was a standardized way to test latency and throughput to
>> routers.
>> 
>> 
>> 
> 
> 
> 
> [SM] traceroute/mtr albeit not sure how well this approach works from
> inside the browser, can you e.g. control TTL and do you receive error
> messages via ICMP?
> 
> 
> 
> 
It would be super helpful if there was a standard in consumer routers that
allowed users to both ping and fetch 0 kB files from their routers, and also
run download/upload tests.
> 
> 
> 
> 
> [SM] I 

Re: [Bloat] [Make-wifi-fast] The most wonderful video ever about bufferbloat

2022-10-19 Thread Stephen Hemminger via Bloat
On Wed, 19 Oct 2022 14:33:28 -0700 (PDT)
David Lang via Bloat  wrote:

> On Wed, 19 Oct 2022, Stuart Cheshire via Bloat wrote:
> 
> > On Mon, Oct 17, 2022 at 5:02 PM Stuart Cheshire  wrote:
> >  
> >> Accuracy be damned. The analogy to common experience resonates more.  
> >
> > I feel it is not an especially profound insight to observe that, “people 
> > don’t like waiting in line.” The conclusion, “therefore privileged people 
> > should get to go to the front,” describes an airport first class checkin 
> > counter, Disney Fastpass, and countless other analogies from everyday life, 
> > all of which are the wrong solution for packets in a network.  
> 
> the 'privileged go first' is traditional QoS, and it can work to some extent, 
> but is a nightmare to maintain and gets the wrong result most of the time.

A lot of times when this is proposed, it has some business/political motivation.
It is like "priority boarding" for Global Services customers:
not solving a latency problem, but making stakeholders happy.

> AQM (fq_codel and cake) are more the 'cash only line' and '15 items or less' 
> line, they speed up the things that can be fast a LOT, while not 
> significantly 
> slowing down the people with full baskets (but in the process, it shortens 
> the 
> lines for those people with full baskets)
> 
> >> I think the person with the cheetos pulling out a gun and shooting 
> >> everyone in front of him (AQM) would not go down well.  
> >
> > Which is why starting with a bad analogy (people waiting in a grocery 
> > store) inevitably leads to bad conclusions.
> >
> > If we want to struggle to make the grocery store analogy work, perhaps we 
> > show 
> > people checking some grocery store app on their smartphone before they 
> > leave 
> > home, and if they see that a long line is beginning to form they wait until 
> > later, when the line is shorter. The challenge is not how to deal with a 
> > long 
> > queue when it’s there, it is how to avoid a long queue in the first place.  
> 
> only somewhat, you aren't going to have people deciding not to click on a 
> link 
> because the network is busy, and if you did try to go that direction, I would 
> fight you. the prioritization is happening at a much lower level, which is 
> hard 
> to put into an analogy
> 
> even with the 'slowing' of bulk traffic, no traffic is prevented, it's just 
> that 
> they aren't allowed to monopolize the links.
> 
> This is where the grocery store analogy is weak, the reality would be more 
> like 
> 'the cashier will only process 30 items before you have to step aside and let 
> someone else in', but since no store operates that way, it would be a bad 
> analogy.

Grocery store analogies also break down because packets are not "precious";
it is okay to drop packets. A lot of AQM works by doing "drop early and often"
instead of "drop late and collapse".

> 
> >> Actually that analogy is fairly close to fair queuing. The multiple 
> >> checker analogy is one of the most common analogies in queue theory 
> >> itself.  
> >
> > I disagree. You are describing the “FQ” part of FQ_CoDel. It’s the “CoDel” 
> > part of FQ_CoDel that solves bufferbloat. FQ has been around for a long 
> > time, 
> > and at best it partially masked the effects of bufferbloat. Having more 
> > queues 
> > does not solve bufferbloat. Managing the queue(s) better solves bufferbloat.
> >  
> >> I like the idea of a guru floating above a grocery cart with a better 
> >> string of explanations, explaining
> >>
> >>   - "no, grasshopper, the solution to bufferbloat is no line... at all".  
> >
> > That is the kind of thing I had in mind. Or a similar quote from The 
> > Matrix. 
> > While everyone is debating ways to live with long queues, the guru asks, 
> > “What 
> > if there were no queues?” That is the “mind blown” realization.  
> 
> In a world where there is no universal scheduler (and no universal knowledge to 
> base any scheduling decisions on), and where you are going to have malicious 
> actors trying to get more than their fair share, you can't rely on voluntary 
> actions to eliminate the lines.
> 
> There are data transportation apps that work by starting up a large number of 
> connections in parallel for the highest transfer speeds (shortening slow 
> start, 
> reducing the impact of lost packets as they only affect one connection, etc). 
> This isn't even malicious actors, but places like Hollywood studios sending 
> the raw movie footage around over dedicated leased lines and wanting to get 
> every bps of bandwidth that they are paying for used.
> 
> David Lang

___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


[Bloat] netflix's concurrency limits

2022-10-19 Thread Dave Taht via Bloat
uses a vegas or gradient-like controller:

https://github.com/Netflix/concurrency-limits
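A Vegas-style controller boils down to comparing the loaded RTT against a no-load baseline. A rough sketch of the idea (names and thresholds are mine, not Netflix's API; their actual VegasLimit/GradientLimit classes are more elaborate):

```python
def vegas_update(limit: int, rtt_no_load_ms: float, rtt_ms: float,
                 alpha: int = 3, beta: int = 6) -> int:
    # Estimate how many in-flight requests are queued rather than in service:
    # at the no-load RTT none are queued; as RTT inflates, more of the window
    # is sitting in a queue somewhere.
    queued = limit * (1.0 - rtt_no_load_ms / rtt_ms)
    if queued < alpha:
        return limit + 1   # little queuing observed: probe for more concurrency
    if queued > beta:
        return limit - 1   # queue building up: back off before latency explodes
    return limit           # in the sweet spot: hold steady

print(vegas_update(10, 10.0, 10.0))   # no RTT inflation -> grow the limit
print(vegas_update(100, 10.0, 20.0))  # RTT doubled -> shrink the limit
```

Same congestion-control instinct as TCP Vegas, just applied to a service's concurrency limit instead of a cwnd.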

-- 
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-698135607352320-FXtz
Dave Täht CEO, TekLibre, LLC
___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Bloat] [Make-wifi-fast] The most wonderful video ever about bufferbloat

2022-10-19 Thread Michael Richardson via Bloat

Stuart Cheshire via Bloat  wrote:
>> I think the person with the cheetos pulling out a gun and shooting
>> everyone in front of him (AQM) would not go down well.

> Which is why starting with a bad analogy (people waiting in a grocery
> store) inevitably leads to bad conclusions.

> If we want to struggle to make the grocery store analogy work, perhaps
> we show people checking some grocery store app on their smartphone
> before they leave home, and if they see that a long line is beginning
> to form they wait until later, when the line is shorter. The challenge
> is not how to deal with a long queue when it’s there, it is how to
> avoid a long queue in the first place.

Maybe if we regard the entire grocery store as the "pipe", then we would
realize that the trick to reducing checkout lines is to move the constraint
from exiting, to entering the store :-)

Then the different amounts of time you spend in the store because you have
different amounts of shopping to do, the txt messages from your spouse
reminding you to pick up X, and so on, somehow become an analogy to the
various "PowerBoost" cable and LTE/5G systems that provide inconsistent
bandwidth.

(There are various pushes to actually do this, as the experience from COVID
was that having fewer people in the store pleased many people.)


--
Michael Richardson , Sandelman Software Works
 -= IPv6 IoT consulting =-







Re: [Bloat] [Cake] [Rpm] [Make-wifi-fast] The most wonderful video ever about bufferbloat

2022-10-19 Thread David Lang via Bloat
Thanks, and how long does it take to transmit the wifi header (at 1Mb/s and at 
11Mb/s)? That's also airtime that's not available to transmit user data.


And then compare that to the time it takes to transmit a 1500 byte ethernet 
packet worth of data over a 160MHz wide channel
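Back-of-envelope (assuming the ~192-bit 802.11b long preamble + PLCP header and round PHY rates; contention gaps, SIFS, and ACKs ignored):

```python
PLCP_BITS = 192  # assumed: 802.11b long preamble + PLCP header

def airtime_us(bits: float, rate_mbps: float) -> float:
    # bits divided by Mb/s conveniently comes out in microseconds
    return bits / rate_mbps

hdr_1mb  = airtime_us(PLCP_BITS, 1.0)    # header sent at 1 Mb/s
hdr_11mb = airtime_us(PLCP_BITS, 11.0)   # header sent at 11 Mb/s
data_us  = airtime_us(1500 * 8, 1000.0)  # 1500 B at a ~1 Gb/s 160 MHz-class rate

print(f"header @1Mb/s: {hdr_1mb:.0f} us, @11Mb/s: {hdr_11mb:.1f} us, "
      f"1500 B payload @~1Gb/s: {data_us:.0f} us")
```

With those assumptions, even the 11 Mb/s header costs more airtime than an entire 1500-byte packet at modern rates, which is why aggregation pays.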


Going back to SM's question, there is per-transmission overhead that you want to 
amortize across multiple ethernet packets, not pay for each packet.


David Lang

On Wed, 19 Oct 2022, David P. Reed wrote:


4 microseconds!

On Wednesday, October 19, 2022 3:23pm, "David Lang via Cake" 
 said:




you have to listen and hear nothing for some timeframe before you transmit, that
listening time is defined in the standard. (isn't it??)

David Lang

On Wed, 19 Oct 2022, Bob McMahon wrote:

> I'm not sure where the gap in milliseconds is coming from. EDCA gaps are
> mostly driven by probabilities. If
> energy detect (ED) indicates the medium is available then the gap prior to
> transmit, assuming no others competing & winning at that moment in time, is
> driven by AIFS and the CWMIN - CWMAX back offs which are simple probability
> distributions. Things change a bit with 802.11ax and trigger frames but the
> gap is still determined by the backoff and should be less than milliseconds
> per that. Things like NAVs will impact the gap too but that happens when
> another is transmitting.
>
>
>
> Agreed that the PLCP preamble is at low MCS and the payload can be orders
> of magnitude greater (per different QAM encodings and other signal
> processing techniques.)
>
> Bob
>
> On Wed, Oct 19, 2022 at 12:09 AM David Lang  wrote:
>
>> On Tue, 18 Oct 2022, Sebastian Moeller wrote:
>>> Hi Bob,
>>>
>>>> Many network engineers typically, though incorrectly, perceive a
>>>> transmit unit as one ethernet packet. With WiFi it's one Mu transmission
>>>> or one Su transmission, with aggregation(s), which is a lot more than one
>>>> ethernet packet but it depends on things like MCS, spatial stream powers,
>>>> Mu peers, etc. and is variable. Some data center designs have optimized
>>>> the forwarding plane for flow completion times so their equivalent
>>>> transmit unit is a mouse flow.
>>>
>>> [SM] Is this driven more by the need to aggregate packets to amortize
>>> some cost over a larger payload or to reduce the scheduling overhead or
>>> to regularize things (as in fixed size DTUs used in DSL with G.INP
>>> retransmissions)?
>>
>> it's to amortize costs over a larger payload.
>>
>> the gap between transmissions is in ms, and the transmission header is
>> transmitted at a slow data rate (both for backwards compatibility with
>> older equipment that doesn't know about the higher data rate modulations)
>>
>> For a long time, the transmission header was transmitted at 1Mb (which is
>> still the default in most equipment), but there is now an option to no
>> longer support 802.11b equipment, which raises the header transmission
>> rate to 11Mb.
>>
>> These factors are so imbalanced compared to the top data rates available
>> that you need to transmit several MB of data to have actual data use 50%
>> of the airtime.
>>
>> David Lang
>>
>
>
___
Cake mailing list
c...@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cake




Re: [Bloat] [Make-wifi-fast] The most wonderful video ever about bufferbloat

2022-10-19 Thread David Lang via Bloat

On Wed, 19 Oct 2022, Stuart Cheshire via Bloat wrote:


On Mon, Oct 17, 2022 at 5:02 PM Stuart Cheshire  wrote:


Accuracy be damned. The analogy to common experience resonates more.


I feel it is not an especially profound insight to observe that, “people don’t 
like waiting in line.” The conclusion, “therefore privileged people should get 
to go to the front,” describes an airport first class checkin counter, Disney 
Fastpass, and countless other analogies from everyday life, all of which are 
the wrong solution for packets in a network.


the 'privileged go first' is traditional QoS, and it can work to some extent, 
but is a nightmare to maintain and gets the wrong result most of the time.


AQM (fq_codel and cake) are more the 'cash only line' and '15 items or less' 
line, they speed up the things that can be fast a LOT, while not significantly 
slowing down the people with full baskets (but in the process, it shortens the 
lines for those people with full baskets)



I think the person with the cheetos pulling out a gun and shooting everyone in 
front of him (AQM) would not go down well.


Which is why starting with a bad analogy (people waiting in a grocery store) 
inevitably leads to bad conclusions.

If we want to struggle to make the grocery store analogy work, perhaps we show 
people checking some grocery store app on their smartphone before they leave 
home, and if they see that a long line is beginning to form they wait until 
later, when the line is shorter. The challenge is not how to deal with a long 
queue when it’s there, it is how to avoid a long queue in the first place.


only somewhat, you aren't going to have people deciding not to click on a link 
because the network is busy, and if you did try to go that direction, I would 
fight you. the prioritization is happening at a much lower level, which is hard 
to put into an analogy


even with the 'slowing' of bulk traffic, no traffic is prevented, it's just that 
they aren't allowed to monopolize the links.


This is where the grocery store analogy is weak, the reality would be more like 
'the cashier will only process 30 items before you have to step aside and let 
someone else in', but since no store operates that way, it would be a bad 
analogy.



Actually that analogy is fairly close to fair queuing. The multiple checker 
analogy is one of the most common analogies in queue theory itself.


I disagree. You are describing the “FQ” part of FQ_CoDel. It’s the “CoDel” 
part of FQ_CoDel that solves bufferbloat. FQ has been around for a long time, 
and at best it partially masked the effects of bufferbloat. Having more queues 
does not solve bufferbloat. Managing the queue(s) better solves bufferbloat.



I like the idea of a guru floating above a grocery cart with a better string of 
explanations, explaining

  - "no, grasshopper, the solution to bufferbloat is no line... at all".


That is the kind of thing I had in mind. Or a similar quote from The Matrix. 
While everyone is debating ways to live with long queues, the guru asks, “What 
if there were no queues?” That is the “mind blown” realization.


In a world where there is no universal scheduler (and no universal knowledge to 
base any scheduling decisions on), and where you are going to have malicious 
actors trying to get more than their fair share, you can't rely on voluntary 
actions to eliminate the lines.


There are data transportation apps that work by starting up a large number of 
connections in parallel for the highest transfer speeds (shortening slow start, 
reducing the impact of lost packets as they only affect one connection, etc). 
This isn't even malicious actors, but places like Hollywood studios sending 
the raw movie footage around over dedicated leased lines and wanting to get 
every bps of bandwidth that they are paying for used.


David Lang


Re: [Bloat] [Cake] [Rpm] [Make-wifi-fast] The most wonderful video ever about bufferbloat

2022-10-19 Thread David P. Reed via Bloat

4 microseconds!
 
On Wednesday, October 19, 2022 3:23pm, "David Lang via Cake" 
 said:



> you have to listen and hear nothing for some timeframe before you transmit,
> that listening time is defined in the standard. (isn't it??)
> 
> David Lang
> 
> On Wed, 19 Oct 2022, Bob McMahon wrote:
> 
> > I'm not sure where the gap in milliseconds is coming from. EDCA gaps are
> > mostly driven by probabilities. If
> > energy detect (ED) indicates the medium is available then the gap prior to
> > transmit, assuming no others competing & winning at that moment in time, is
> > driven by AIFS and the CWMIN - CWMAX back offs which are simple probability
> > distributions. Things change a bit with 802.11ax and trigger frames but the
> > gap is still determined by the backoff and should be less than milliseconds
> > per that. Things like NAVs will impact the gap too but that happens when
> > another is transmitting.
> >
> >
> >
> > Agreed that the PLCP preamble is at low MCS and the payload can be orders
> > of magnitude greater (per different QAM encodings and other signal
> > processing techniques.)
> >
> > Bob
> >
> > On Wed, Oct 19, 2022 at 12:09 AM David Lang  wrote:
> >
> >> On Tue, 18 Oct 2022, Sebastian Moeller wrote:
> >>> Hi Bob,
> >>>
> >>>> Many network engineers typically, though incorrectly, perceive a
> >>>> transmit unit as one ethernet packet. With WiFi it's one Mu transmission
> >>>> or one Su transmission, with aggregation(s), which is a lot more than one
> >>>> ethernet packet but it depends on things like MCS, spatial stream powers,
> >>>> Mu peers, etc. and is variable. Some data center designs have optimized
> >>>> the forwarding plane for flow completion times so their equivalent
> >>>> transmit unit is a mouse flow.
> >>>
> >>> [SM] Is this driven more by the need to aggregate packets to amortize
> >>> some cost over a larger payload or to reduce the scheduling overhead or
> >>> to regularize things (as in fixed size DTUs used in DSL with G.INP
> >>> retransmissions)?
> >>
> >> it's to amortize costs over a larger payload.
> >>
> >> the gap between transmissions is in ms, and the transmission header is
> >> transmitted at a slow data rate (both for backwards compatibility with
> >> older equipment that doesn't know about the higher data rate modulations)
> >>
> >> For a long time, the transmission header was transmitted at 1Mb (which is
> >> still the default in most equipment), but there is now an option to no
> >> longer support 802.11b equipment, which raises the header transmission
> >> rate to 11Mb.
> >>
> >> These factors are so imbalanced compared to the top data rates available
> >> that you need to transmit several MB of data to have actual data use 50%
> >> of the airtime.
> >>
> >> David Lang
> >>
> >
> >


Re: [Bloat] [Make-wifi-fast] The most wonderful video ever about bufferbloat

2022-10-19 Thread Stuart Cheshire via Bloat
On Mon, Oct 17, 2022 at 5:02 PM Stuart Cheshire  wrote:

> Accuracy be damned. The analogy to common experience resonates more.

I feel it is not an especially profound insight to observe that, “people don’t 
like waiting in line.” The conclusion, “therefore privileged people should get 
to go to the front,” describes an airport first class checkin counter, Disney 
Fastpass, and countless other analogies from everyday life, all of which are 
the wrong solution for packets in a network.

> I think the person with the cheetos pulling out a gun and shooting everyone 
> in front of him (AQM) would not go down well.

Which is why starting with a bad analogy (people waiting in a grocery store) 
inevitably leads to bad conclusions.

If we want to struggle to make the grocery store analogy work, perhaps we show 
people checking some grocery store app on their smartphone before they leave 
home, and if they see that a long line is beginning to form they wait until 
later, when the line is shorter. The challenge is not how to deal with a long 
queue when it’s there, it is how to avoid a long queue in the first place.

> Actually that analogy is fairly close to fair queuing. The multiple checker 
> analogy is one of the most common analogies in queue theory itself.

I disagree. You are describing the “FQ” part of FQ_CoDel. It’s the “CoDel” part 
of FQ_CoDel that solves bufferbloat. FQ has been around for a long time, and at 
best it partially masked the effects of bufferbloat. Having more queues does 
not solve bufferbloat. Managing the queue(s) better solves bufferbloat.

> I like the idea of a guru floating above a grocery cart with a better string 
> of explanations, explaining
> 
>   - "no, grasshopper, the solution to bufferbloat is no line... at all".

That is the kind of thing I had in mind. Or a similar quote from The Matrix. 
While everyone is debating ways to live with long queues, the guru asks, “What 
if there were no queues?” That is the “mind blown” realization.

Stuart Cheshire



Re: [Bloat] [Rpm] [Make-wifi-fast] The most wonderful video ever about bufferbloat

2022-10-19 Thread David Lang via Bloat
you have to listen and hear nothing for some timeframe before you transmit, that 
listening time is defined in the standard. (isn't it??)


David Lang

On Wed, 19 Oct 2022, Bob McMahon wrote:


I'm not sure where the gap in milliseconds is coming from. EDCA gaps are
mostly driven by probabilities. If
energy detect (ED) indicates the medium is available then the gap prior to
transmit, assuming no others competing & winning at that moment in time, is
driven by AIFS and the CWMIN - CWMAX back offs which are simple probability
distributions. Things change a bit with 802.11ax and trigger frames but the
gap is still determined by the backoff and should be less than milliseconds
per that. Things like NAVs will impact the gap too but that happens when
another is transmitting.



Agreed that the PLCP preamble is at low MCS and the payload can be orders
of magnitude greater (per different QAM encodings and other signal
processing techniques.)

Bob

On Wed, Oct 19, 2022 at 12:09 AM David Lang  wrote:


On Tue, 18 Oct 2022, Sebastian Moeller wrote:

Hi Bob,


Many network engineers typically, though incorrectly, perceive a transmit
unit as one ethernet packet. With WiFi it's one Mu transmission or one Su
transmission, with aggregation(s), which is a lot more than one ethernet
packet but it depends on things like MCS, spatial stream powers, Mu peers,
etc. and is variable. Some data center designs have optimized the
forwarding plane for flow completion times so their equivalent transmit
unit is a mouse flow.


[SM] Is this driven more by the need to aggregate packets to amortize some
cost over a larger payload or to reduce the scheduling overhead or to
regularize things (as in fixed size DTUs used in DSL with G.INP
retransmissions)?

it's to amortize costs over a larger payload.

the gap between transmissions is in ms, and the transmission header is
transmitted at a slow data rate (both for backwards compatibility with
older equipment that doesn't know about the higher data rate modulations)

For a long time, the transmission header was transmitted at 1Mb (which is
still the default in most equipment), but there is now an option to no
longer support 802.11b equipment, which raises the header transmission
rate to 11Mb.

These factors are so imbalanced compared to the top data rates available
that you need to transmit several MB of data to have actual data use 50%
of the airtime.

David Lang







Re: [Bloat] [Rpm] [Make-wifi-fast] The most wonderful video ever about bufferbloat

2022-10-19 Thread David Lang via Bloat

On Tue, 18 Oct 2022, Sebastian Moeller wrote:

Hi Bob,


Many network engineers typically, though incorrectly, perceive a transmit
unit as one ethernet packet. With WiFi it's one Mu transmission or one Su
transmission, with aggregation(s), which is a lot more than one ethernet
packet but it depends on things like MCS, spatial stream powers, Mu peers,
etc. and is variable. Some data center designs have optimized the
forwarding plane for flow completion times so their equivalent transmit
unit is a mouse flow.


[SM] Is this driven more by the need to aggregate packets to amortize some cost 
over a larger payload or to reduce the scheduling overhead or to regularize 
things (as in fixed size DTUs used in DSL with G.INP retransmissions)?


it's to amortize costs over a larger payload.

the gap between transmissions is in ms, and the transmission header is 
transmitted at a slow data rate (both for backwards compatibility with older 
equipment that doesn't know about the higher data rate modulations)


For a long time, the transmission header was transmitted at 1Mb (which is still 
the default in most equipment), but there is now an option to no longer support 
802.11b equipment, which raises the header transmission rate to 11Mb.


These factors are so imbalanced compared to the top data rates available that 
you need to transmit several MB of data to have actual data use 50% of the 
airtime.
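To put rough numbers on that imbalance, treat all per-transmission cost (preamble, contention gap, etc.) as one fixed overhead; the 200 µs and 1200 Mb/s below are assumed round figures for illustration, not measurements:

```python
def airtime_efficiency(payload_bytes: int, phy_rate_mbps: float,
                       overhead_us: float) -> float:
    # fraction of the total airtime actually carrying user data
    payload_us = payload_bytes * 8 / phy_rate_mbps
    return payload_us / (payload_us + overhead_us)

# assumed: ~200 us fixed per-transmission overhead, 1200 Mb/s PHY rate
for size in (1_500, 30_000, 1_000_000):
    print(f"{size:>9} B -> {airtime_efficiency(size, 1200.0, 200.0):.1%} efficient")
```

Under those assumptions a lone 1500-byte packet spends under 5% of its airtime on data, and break-even at 50% only arrives around 30 kB per transmission; stretch the per-transmission gap toward a millisecond and the aggregate sizes needed climb accordingly.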


David Lang