Hi Sebastian,

> On 17 Nov 2022, at 10:50, Sebastian Moeller <moell...@gmx.de> wrote:
> 
> Hi T.
> 
> 
> so taking your proposal under consideration I changed the section that threw 
> you off course to read:
> 
> 
>       • Ethernet with Overhead: SQM can also account for the overhead imposed 
> by VDSL2 links: add 22 bytes of overhead (mpu 68). Cable modems (DOCSIS) set 
> both up- and downstream overhead to 18 bytes (6 bytes source MAC, 6 bytes 
> destination MAC, 2 bytes ether-type, 4 bytes FCS); to allow for a possible 
> 4-byte VLAN tag, it is recommended to set the overhead to 18 + 4 = 22 (mpu 
> 64). For FTTH the answer is less clear-cut, since different underlying 
> technologies have different per-packet overheads; however, underestimating 
> the per-packet overhead is considerably worse for responsiveness than 
> (gently) overestimating it, so for FTTH set the overhead to 44 (mpu 84) 
> unless more detailed information about the true overhead of the link is 
> available.
>       • None: All shaping below the physical gross rate of a link requires 
> correct per-packet overhead accounting to be precise, so None is only useful 
> when approximate shaping is sufficient, e.g. if you want to clamp a guest 
> network to at most ~50% of the available capacity or similar tasks. Even 
> then, configuring an approximately correct per-packet overhead is 
> recommended (overhead 44 (mpu 84) is a decent default to pick).
> 
> 
> I hope this is explicit enough.

Yes, this looks a lot better, thank you.

Although I must confess it still feels counter-intuitive that for ethernet 
(and FTTH) we suggest a higher overhead than for e.g. VDSL2/cable (which 
themselves run over an ethernet interface).
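
For concreteness, here is roughly how those recommendations would look as 
cake invocations via tc (just a sketch: the device name and shaped rates are 
placeholders, and on OpenWrt the same values would normally go into the SQM 
settings rather than raw tc):

  # VDSL2: 22 bytes of per-packet overhead, minimum packet unit 68
  tc qdisc replace dev eth0 root cake bandwidth 50mbit overhead 22 mpu 68

  # DOCSIS: 18 bytes of frame overhead + 4 for a possible VLAN tag
  tc qdisc replace dev eth0 root cake bandwidth 50mbit overhead 22 mpu 64

  # FTTH with unknown framing: err on the high side
  tc qdisc replace dev eth0 root cake bandwidth 50mbit overhead 44 mpu 84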

I would also like to see some info about ppp vs ethernet interfaces in there 
(matching your previous email); unless you beat me to it, I will add it.
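
On that point, my understanding (please correct me if I am wrong) is that 
cake behaves the same on both interface types once any overhead keyword is 
given, because it deducts whatever link-layer header the kernel reports 
before adding the configured overhead; only in its raw default mode do the 
two differ by the 14-byte MAC header. A sketch with placeholder device names 
and rates, reusing the safe FTTH values from above:

  # On a PPPoE interface the kernel hands cake packets without
  # ethernet framing...
  tc qdisc replace dev pppoe-wan root cake bandwidth 50mbit overhead 44 mpu 84

  # ...while on the underlying ethernet device the kernel's packet
  # length already includes the 14-byte MAC header; with an overhead
  # keyword set, cake should compensate, so the same values apply:
  tc qdisc replace dev eth0.2 root cake bandwidth 50mbit overhead 44 mpu 84
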
I also think the "details" page needs to be reformatted a bit: it's very 
dense, the relevant info is scattered all over the place, and it's not well 
organized. I'll try to get around to improving that.

Thanks