On Sat, Apr 01, 2006 at 10:39:26AM +0100, tony sarendal wrote:
> 
> In my case (also on crappy UK broadband)

You should try it in NZ, 128k upstream!

> 1454 is actually optimal.
> On the dsl part of the link my connection runs the Ethernet frames over ATM,
> so I get this nice pancake when crossing the pvc:
> 
> ATM/AAL5/Ethernet/PPPoE/PPP/IP

I doubt that there would be PPPoE in there, if your ISP resells std BT
ADSL. You may want to see: http://www.sinet.bt.com/386v2p0.pdf

> 
> Unless the IP packet is smaller than 38 bytes I have 34 bytes of overhead
> before splitting up into ATM cells.
> 
> If I were to use MTU 1458 that would make 1458+34 = 1492 bytes.
> 1492 bytes require 32 ATM cells, 32*53 = 1696 bytes.
> 1696/1458 = 16.3% overhead.
> 
> Now if I use MTU 1454 instead:
> 1454+34 = 1488
> 1488 bytes require exactly 31 ATM cells, 31*53 = 1643 bytes.
> 1643/1454 = 13.0% overhead.

Cool stuff.

> 
> Note that this is of course overhead on top of IP, not application.
> 
> On a side note I modified the traffic shaper in PF to understand the real
> overhead of my dsl link, so I can now set my shaper to 280kbps (ATM PVC
> 288kbps) and my QoS config works great no matter what the IP packet size is.
> Before, I could get packet loss on the pvc even with the shaper set to
> 160kbps, simply due to the awesome overhead at smaller packet sizes.
> 
> Example: a TCP ACK = 40 bytes of IP.
> That will require 2 ATM cells for me, 2*53 = 106 bytes.
> (106-40)/40 = a whopping 165% overhead on top of IP while crossing the ATM link.
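For anyone who wants to check those numbers for their own link: a short sketch of the cell arithmetic, assuming (as above) a flat 34 bytes of per-packet encapsulation before cell splitting and the usual 48-byte payload per 53-byte ATM cell. The 34-byte figure is specific to this ATM/AAL5/Ethernet/PPPoE/PPP stack; substitute your own.

```python
import math

ATM_CELL = 53      # bytes on the wire per ATM cell
ATM_PAYLOAD = 48   # payload bytes carried per cell
ENCAP = 34         # per-packet overhead before cell splitting (this link's stack)

def wire_bytes(ip_len):
    """Bytes actually sent on the ATM PVC for one IP packet of ip_len bytes."""
    cells = math.ceil((ip_len + ENCAP) / ATM_PAYLOAD)
    return cells * ATM_CELL

def overhead_pct(ip_len):
    """Overhead on top of IP, as a percentage of the IP packet size."""
    return 100.0 * (wire_bytes(ip_len) - ip_len) / ip_len

for size in (1458, 1454, 40):
    print(f"IP {size:4d} -> {wire_bytes(size):4d} on wire, "
          f"{overhead_pct(size):5.1f}% overhead")
```

Running it reproduces the figures quoted: 1458 costs 1696 bytes (16.3%), 1454 costs 1643 bytes (13.0%), and a 40-byte ACK costs 106 bytes (165%), which shows why 1454 is the sweet spot and why small packets hurt so much.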
> 
> I think I'm going to add GFP (EoSDH) and a few others here.
> 
> Enough ranting, time to feed the kids.

To the labrador?
