Re: [LARTC] HTB ATM MPU OVERHEAD (without any patching)

2005-04-12 Thread Andy Furniss
Andy Furniss wrote:
on leaves and burst/cburst 10b on bulk classes lets me never delay an 
interactive packet longer than the transmit time of a bulk-size packet.
I should really say "almost never" here, as HTB uses DRR, so if there are 
different-sized packets in a stream it can release more than one 
packet at a time.

Andy.


Re: [LARTC] HTB ATM MPU OVERHEAD (without any patching)

2005-04-12 Thread Andy Furniss
Chris Bennett wrote:
Good recommendation.  I read Jesper's thesis (well, okay, not ALL of 
it... but the juicy bits) and it looks like the difference between the 
overhead value that I expected to work (24) and the overhead value that 
actually worked (50) can be explained by the fact that I neglected to 
include the overhead incurred by bridged mode over ATM (RFC 2684/1483).

I would say "now I can sleep peacefully", but I just woke up a couple of 
hours ago... so I'll go for a run instead ;)
LOL - but unless you add your overhead plus enough to cover padding, you 
may still be taken down if the packet size is just right.

If you know your overhead now, it will be far less wasteful to patch - 
use Jesper's, and patch HTB as well as iproute2.

I have patches which are far uglier and lazier, but do the same thing.
Andy.


Re: [LARTC] HTB ATM MPU OVERHEAD (without any patching)

2005-04-12 Thread Chris Bennett
Good recommendation.  I read Jesper's thesis (well, okay, not ALL of it... 
but the juicy bits) and it looks like the difference between the overhead 
value that I expected to work (24) and the overhead value that actually 
worked (50) can be explained by the fact that I neglected to include the 
overhead incurred by bridged mode over ATM (RFC 2684/1483).
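
For reference, here is roughly where those extra bytes could be coming from in 
bridged mode.  These are typical textbook values, not figures measured on my 
line, so Jesper's table is the authoritative source for any given setup:

# Assumed per-packet encapsulation on top of the IP length that HTB sees
# (illustrative values for bridged RFC 2684 LLC over AAL5, no FCS preserved):
#   14 bytes  ethernet MAC header
#   10 bytes  RFC 2684 bridged LLC/SNAP header (incl. 2 pad bytes)
#    8 bytes  AAL5 trailer
echo $(( 14 + 10 + 8 ))   # 32 bytes fixed, before up to 47 bytes of cell padding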

I would say "now I can sleep peacefully", but I just woke up a couple of 
hours ago... so I'll go for a run instead ;)

- Original Message - 
From: "Andy Furniss" <[EMAIL PROTECTED]>
To: "Chris Bennett" <[EMAIL PROTECTED]>
Cc: 
Sent: Tuesday, April 12, 2005 12:29 PM
Subject: Re: [LARTC] HTB ATM MPU OVERHEAD (without any patching)

HTB uses IP packet length, to which you then need to add your fixed 
overhead - for PPPoE that may include the ethernet header and more; have a 
look at Jesper's table in his thesis.

http://www.adsl-optimizer.dk/


Re: [LARTC] HTB ATM MPU OVERHEAD (without any patching)

2005-04-12 Thread Chris Bennett
Thanks!  Very prescient of you, since my latest test results prove exactly 
what you said about needing a higher overhead value!  :)

- Original Message - 
From: "Andy Furniss" <[EMAIL PROTECTED]>
To: "Chris Bennett" <[EMAIL PROTECTED]>
Cc: 
Sent: Tuesday, April 12, 2005 12:29 PM
Subject: Re: [LARTC] HTB ATM MPU OVERHEAD (without any patching)

To be safe you need an overhead a lot bigger than 24.


Re: [LARTC] HTB ATM MPU OVERHEAD (without any patching)

2005-04-12 Thread Andy Furniss
Chris Bennett wrote:
I know there is that handy patch available to very efficiently use ATM 
bandwidth, but I was wondering what the best values to use with a 
non-patched iproute2 would be.  Anyone here care to check my logic in 
coming up with these numbers and perhaps suggest better values?

My transmit speed is 768kbps per ADSL line (I have two).
Is that your sync speed, or the speed advertised by your ISP? They may be 
different, and if you patch / allow for overhead properly you can approach 
sync speed. Most modems tell you this.

Also, bps means bytes/sec to TC - I assume you mean kbit. And it depends on 
the TC version: with old TC, k or K = 1024; with newer TC, k = 1000 and I 
think K is still 1024.
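
For example, assuming k = 1000, 695kbit = 695,000 bit/s = 86,875 bytes/s, so 
these two lines should ask for the same rate (adjust if your TC uses 1024):

tc class change dev eth0 parent 1: classid 1:1 htb rate 695kbit
tc class change dev eth0 parent 1: classid 1:1 htb rate 86875bps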

# create leaf classes
# ACKs, ICMP, VOIP
tc class add dev eth0 parent 1:1 classid 1:20 htb \
 rate 50kbit \
OK - as long as you can be sure VoIP + ACKs + ICMP never exceed 50kbit; 
otherwise you will queue if the other classes are full.


Here's the logic I used to come up with these numbers::
The maximum real bandwidth, assuming no waste of data, is 768 * 48 / 53 
= 695.  This accounts for the fact that ATM packets are 53 bytes, with 
5 bytes being overhead.  So that's the overall rate that I'm working with.
Depending on the sync speed mentioned above, that may be OK for the ATM data rate...
I then set the MPU to 96 (2 * 48) since the minimum ethernet packet (64 
bytes) uses two ATM packets (each having 48 bytes of data).  I use 48 
instead of 53 here because we already accounted for the ATM overhead in 
the previous calculation.

For the overhead I use 24, since nearly every ethernet packet
HTB uses IP packet length, to which you then need to add your fixed 
overhead - for PPPoE that may include the ethernet header and more; have a 
look at Jesper's table in his thesis.

http://www.adsl-optimizer.dk/
On top of that, you need to account for the fact that, depending on the 
original packet length, up to 47 bytes of padding gets added to make up a 
complete cell.

To be safe you need an overhead a lot bigger than 24.
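
To make the padding effect concrete, here's a rough sh sketch.  The 32 byte 
fixed overhead is only an assumed example (eth header + bridged RFC 2684 LLC 
+ AAL5 trailer) - take the real figure for your encapsulation from Jesper's 
table:

# Sketch: ATM payload bytes actually consumed by one IP packet, assuming an
# example fixed per-packet encapsulation overhead of 32 bytes.
FIXED_OVERHEAD=32
atm_bytes() {                          # atm_bytes <ip_packet_length>
    total=$(( $1 + FIXED_OVERHEAD ))
    cells=$(( (total + 47) / 48 ))     # round up to whole 48-byte cell payloads
    echo $(( cells * 48 ))             # padding fills out the last cell
}
# e.g. a 100 byte IP packet: 100 + 32 = 132 -> 3 cells -> 144 bytes of ATM
# payload, while htb with mpu 96 / overhead 24 only accounts for about 124.
atm_bytes 100
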
I run some game servers so I was able to do a real world test.  With the 
game servers maxed out, I started an FTP upload... the latency for the 
players went up slightly, but not too much.. maybe just 30ms or so.  So 
from that perspective this seems to be working well.
You could tweak HTB and your rules to help this.
In net/sched/sch_htb.c there is a hysteresis define; it defaults to 1, 
which makes HTB dequeue packets in pairs even if you specify quantum = 
MTU on leaves. Setting it to 0 fixes this - and specifying quantum = MTU 
on leaves and burst/cburst 10b on bulk classes lets me never delay an 
interactive packet longer than the transmit time of a bulk-size packet.
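
For example, after changing that define to 0 and rebuilding, the tweak looks 
something like this (using the 1:20 and 1:24 classes from your script just as 
an illustration; quantum 1500 assumes a 1500 byte MTU, and keep whatever 
mpu/overhead options you already use on these lines):

tc class change dev eth0 parent 1:1 classid 1:20 htb \
 rate 50kbit ceil 695kbit prio 0 quantum 1500
tc class change dev eth0 parent 1:1 classid 1:24 htb \
 rate 15kbit ceil 695kbit prio 4 burst 10b cburst 10b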

Andy.


Re: [LARTC] HTB ATM MPU OVERHEAD (without any patching)

2005-04-12 Thread Chris Bennett
I was able to do some further testing today with a full crew of players on 
my game servers.  I cleaned up my script a bit to make it easier to modify 
the MPU and OVERHEAD, and also added both settings to the root class for 
completeness' sake (not sure that matters at all).  I'll include the final 
script at the end for posterity.

The final result (in my case) that worked best:
I kept the rate at 695 (which is my provisioned upload rate of 768 * 48 / 53, 
to account for ATM overhead).
I left the MPU at 96 (since it takes two ATM packets (2 * 48) to carry the 
minimum ethernet packet).
For overhead, I went with 50.  This is higher than the calculated 
"optimum" of 24 (half of an ATM packet).

I thought about why the overhead value of 50 works best, and I really can't 
say for sure.  For a while I was thinking about the possibility that I had 
the calculation wrong and that an overhead of 48 might make more sense 
(since a full packet is still necessary to carry any non-packet-aligned 
leftover data), but 48 + x still means that we accounted for x more bytes 
than necessary (with x being between 0-47), which means that overall it 
should average out to an overhead of 24 for all packets.  So I just don't 
know.  Apparently, for whatever reason, there appears to be more overhead 
than expected, though.

This is working beautifully in any case.  A typical glance at my players has 
them showing pings from 70 up to 300 or so (people with higher pings tend to 
leave since it becomes unplayable).  For this series of tests  I latched 
onto a few players with particularly low pings, and then started an FTP 
transfer to my shell account at my ISP.  With these settings there was a 
slight initial bump of just a few milliseconds, but after a minute the pings 
actually returned to where they had been in the beginning.  So yes, I am 
saying that there was ZERO latency increase for the players.  I'm amazed.

While the overhead of 50 is higher than expected, it does appear to be very 
close to optimum.  Even dropping it to 40 causes noticeable differences if I 
let the FTP transfer run for a long time.  In essence the overhead value is 
serving here as the tuning between large and small packets, with higher 
overheads giving more weight to the smaller packets.  I don't want to give 
too little weight to the small packets because then there is not enough 
bandwidth reserved for them.  Too much weight causes too much bandwidth to 
be reserved, forcing the big packets to needlessly slow down.  In my case 50 
works just perfectly, though for other people different values may be found 
to be appropriate.

I'm sure that an IPROUTE2 patched with the Wildgoose ATM patch works even 
better, but for an unpatched IPROUTE2 I think this is working pretty darn 
well.

This is the final cleaned up partial script I ended up with:
CEIL=695
MPU=96
OVERHEAD=50
# create HTB root qdisc
tc qdisc add dev eth0 root handle 1: htb default 22
# create classes
tc class add dev eth0 parent 1: classid 1:1 htb \
 rate ${CEIL}kbit \
 mpu ${MPU} \
 overhead ${OVERHEAD}
# create leaf classes
# ACKs, ICMP, VOIP
tc class add dev eth0 parent 1:1 classid 1:20 htb \
 rate 50kbit \
 ceil ${CEIL}kbit \
 prio 0 \
 mpu ${MPU} \
 overhead ${OVERHEAD}
# GAMING
tc class add dev eth0 parent 1:1 classid 1:21 htb \
 rate 600kbit \
 ceil ${CEIL}kbit \
 prio 1 \
 mpu ${MPU} \
 overhead ${OVERHEAD}
# NORMAL
tc class add dev eth0 parent 1:1 classid 1:22 htb \
 rate 15kbit \
 ceil ${CEIL}kbit \
 prio 2 \
 mpu ${MPU} \
 overhead ${OVERHEAD}
# LOW PRIORITY (WWW SERVER, FTP SERVER)
tc class add dev eth0 parent 1:1 classid 1:23 htb \
 rate 15kbit \
 ceil ${CEIL}kbit \
 prio 3 \
 mpu ${MPU} \
 overhead ${OVERHEAD}
# P2P (BITTORRENT, FREENET)
tc class add dev eth0 parent 1:1 classid 1:24 htb \
 rate 15kbit \
 ceil ${CEIL}kbit \
 prio 4 \
 mpu ${MPU} \
 overhead ${OVERHEAD}
# attach qdiscs to leaf classes - using SFQ for fairness
tc qdisc add dev eth0 parent 1:20 handle 20: sfq perturb 10
tc qdisc add dev eth0 parent 1:21 handle 21: sfq perturb 10
tc qdisc add dev eth0 parent 1:22 handle 22: sfq perturb 10
tc qdisc add dev eth0 parent 1:23 handle 23: sfq perturb 10
tc qdisc add dev eth0 parent 1:24 handle 24: sfq perturb 10
# create filters to determine to which queue each packet goes
tc filter add dev eth0 parent 1:0 prio 0 protocol ip handle 20 fw flowid 1:20
tc filter add dev eth0 parent 1:0 prio 0 protocol ip handle 21 fw flowid 1:21
tc filter add dev eth0 parent 1:0 prio 0 protocol ip handle 22 fw flowid 1:22
tc filter add dev eth0 parent 1:0 prio 0 protocol ip handle 23 fw flowid 1:23
tc filter add dev eth0 parent 1:0 prio 0 protocol ip handle 24 fw flowid 1:24
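
For anyone wanting to check where traffic actually lands, the standard tc 
statistics output is enough (no patching needed) - I just watch the per-class 
byte counters and drops/overlimits while a test transfer runs:

tc -s class show dev eth0
tc -s qdisc show dev eth0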

- Original Message - 
From: "Jason Boxman" <[EMAIL PROTECTED]>
To: 
Sent: Monday, April 11, 2005 7:14 PM
Subject: Re: [LARTC] HTB ATM MPU OVERHEAD (without any patching)

Those are always the thoughts I had.  I never successfully played around 
with
overhead 

Re: [LARTC] HTB ATM MPU OVERHEAD (without any patching)

2005-04-11 Thread Chris Bennett
I'm running some tests as you suggested.  I'll have to wait til there are 
more players on my servers again before I can get some more accurate 
results, but preliminary tests show that 1) the overhead setting is quite 
significant and 2) my overhead value of 24 is a bit too low.

With overhead set at 24, the latency for the players is initially not 
greatly affected by adding an FTP transfer to the mix, but when I let the 
FTP run for a long time eventually the latency starts climbing.  This would 
seem to indicate that something in the calculation is not quite right, and a 
buffer somewhere is slowly filling up.

This is just preliminary... hopefully I'll have something more solid 
tomorrow.

- Original Message - 
From: "Jason Boxman" <[EMAIL PROTECTED]>
To: 
Sent: Monday, April 11, 2005 7:14 PM
Subject: Re: [LARTC] HTB ATM MPU OVERHEAD (without any patching)


Those are always the thoughts I had.  I never successfully played around 
with
overhead or MPU though.  Did you compare results with and without using
overhead and mpu settings?


Re: [LARTC] HTB ATM MPU OVERHEAD (without any patching)

2005-04-11 Thread Jason Boxman
On Monday 11 April 2005 20:01, Chris Bennett wrote:

> The maximum real bandwidth, assuming no waste of data, is 768 * 48 / 53 =
> 695.  This accounts for the fact that ATM packets are 53 bytes, with 5 bytes
> being overhead.  So that's the overall rate that I'm working with.

Check.

> I then set the MPU to 96 (2 * 48) since the minimum ethernet packet (64
> bytes) uses two ATM packets (each having 48 bytes of data).  I use 48
> instead of 53 here because we already accounted for the ATM overhead in the
> previous calculation.

Makes sense.

> For the overhead I use 24, since nearly every ethernet packet is going to
> end up splitting an ATM packet at some point.  Just going with the law of
> averages (instead of a real-world statistical analysis), I'm going with
> each ethernet packet wasting (on average) half of an ATM packet.  Again
> using 48 as the ATM packet size (since we accounted for ATM overhead
> already), 48 / 2 = 24.

Indeed.

> A theoretical comparison of sending 10,000 bytes via various packet sizes would
> therefore look like this:
>

> Which of course indicates just how much bandwidth small packets waste...

Yeah.

> So, is this logic crap or what?  Should this at least be close to optimum
> or did I forget something important or make an erroneous assumption?

Those are always the thoughts I had.  I never successfully played around with 
overhead or MPU though.  Did you compare results with and without using 
overhead and mpu settings?


> One thing I wish I could do would be to query the DSL modem to see exactly
> how much bandwidth usage it is reporting, but unfortunately my ISP now uses
> these crappy new ADSL modems that don't support SNMP :( :(  My old SDSL
> router did, and I miss that feature a lot, but not enough to buy new ADSL
> modems myself.

Yeah, my modem provides no useful information either.

-- 

Jason Boxman
Perl Programmer / *NIX Systems Administrator
Shimberg Center for Affordable Housing | University of Florida
http://edseek.com/ - Linux and FOSS stuff



[LARTC] HTB ATM MPU OVERHEAD (without any patching)

2005-04-11 Thread Chris Bennett
I know there is that handy patch available to very efficiently use ATM 
bandwidth, but I was wondering what the best values to use with a 
non-patched iproute2 would be.  Anyone here care to check my logic in coming 
up with these numbers and perhaps suggest better values?

My transmit speed is 768kbps per ADSL line (I have two).  This is the HTB 
shaping I do on the interface (logic used for this follows):

# create HTB root qdisc
tc qdisc add dev eth0 root handle 1: htb default 22
# create classes
tc class add dev eth0 parent 1: classid 1:1 htb \
 rate 695kbit
# create leaf classes
# ACKs, ICMP, VOIP
tc class add dev eth0 parent 1:1 classid 1:20 htb \
 rate 50kbit \
 ceil 695kbit \
 prio 0 \
 mpu 96 \
 overhead 24
# GAMING
tc class add dev eth0 parent 1:1 classid 1:21 htb \
 rate 600kbit \
 ceil 695kbit \
 prio 1 \
 mpu 96 \
 overhead 24
# NORMAL
tc class add dev eth0 parent 1:1 classid 1:22 htb \
 rate 15kbit \
 ceil 695kbit \
 prio 2 \
 mpu 96 \
 overhead 24
# LOW PRIORITY (WWW SERVER, FTP SERVER)
tc class add dev eth0 parent 1:1 classid 1:23 htb \
 rate 15kbit \
 ceil 695kbit \
 prio 3 \
 mpu 96 \
 overhead 24
# P2P
tc class add dev eth0 parent 1:1 classid 1:24 htb \
 rate 15kbit \
 ceil 695kbit \
 prio 4 \
 mpu 96 \
 overhead 24
Here's the logic I used to come up with these numbers::
The maximum real bandwidth, assuming no waste of data, is 768 * 48 / 53 = 
695.  This accounts for the fact that ATM packets are 53 bytes, with 5 bytes 
being overhead.  So that's the overall rate that I'm working with.

I then set the MPU to 96 (2 * 48) since the minimum ethernet packet (64 
bytes) uses two ATM packets (each having 48 bytes of data).  I use 48 
instead of 53 here because we already accounted for the ATM overhead in the 
previous calculation.

For the overhead I use 24, since nearly every ethernet packet is going to end 
up splitting an ATM packet at some point.  Just going with the law of averages 
(instead of a real-world statistical analysis), I'm going with each ethernet 
packet wasting (on average) half of an ATM packet.  Again using 48 as the 
ATM packet size (since we accounted for ATM overhead already), 48 / 2 = 24.

A theoretical comparison of sending 10,000 bytes via various packet sizes would 
therefore look like this:

1000 packets of 10 bytes each = 1000 (packets) * 96 (mpu) + 1000 * 24 
(overhead) = 120,000 bytes
100 packets of 100 bytes each = 100 (packets) * 100 (bytes) + 100 * 24 
(overhead) = 12,400 bytes
10 packets of 1000 bytes each = 10 (packets) * 1000 (bytes) + 10 * 24 
(overhead) = 10,240 bytes

Which of course indicates just how much bandwidth small packets waste...
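
(If anyone wants to play with the arithmetic, this little sh sketch reproduces 
the figures above.  It assumes HTB charges max(packet length, mpu) + overhead 
per packet, which is the model I used for the comparison - the exact order in 
which tc applies mpu and overhead may differ by version.)

MPU=96
OVERHEAD=24
charge() {                             # charge <packet_length_in_bytes>
    len=$1
    [ "$len" -lt "$MPU" ] && len=$MPU
    echo $(( len + OVERHEAD ))
}
echo "1000 x 10B   -> $(( $(charge 10)   * 1000 )) bytes"   # 120,000
echo "100  x 100B  -> $(( $(charge 100)  * 100  )) bytes"   # 12,400
echo "10   x 1000B -> $(( $(charge 1000) * 10   )) bytes"   # 10,240
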
So, is this logic crap or what?  Should this at least be close to optimum or 
did I forget something important or make an erroneous assumption?

I run some game servers so I was able to do a real world test.  With the 
game servers maxed out, I started an FTP upload... the latency for the 
players went up slightly, but not too much.. maybe just 30ms or so.  So from 
that perspective this seems to be working well.

One thing I wish I could do would be to query the DSL modem to see exactly 
how much bandwidth usage it is reporting, but unfortunately my ISP now uses 
these crappy new ADSL modems that don't support SNMP :( :(  My old SDSL 
router did, and I miss that feature a lot, but not enough to buy new ADSL 
modems myself.

Chris 
