Re: [LARTC] tc shown rate larger than ceil (was Weird rate in HTB)

2007-08-02 Thread Stonie Cooper
Andy - I spoke too soon; there were evidently just a few giants left  
in the queue when I did the change.  After an hour, all giants  
ceased.  Thanks again - it works much better now.


Stone

On Aug 1, 2007, at 4:26 PM, Andy Furniss wrote:


Stonie Cooper wrote:

Andy - Thanks!
I took the ethtool path, and turned off tcp segmentation on that  
nic; I was unsure how to set the HTB MTU - I use shorewall on a  
Gentoo system.


I think you would need to find each line with a rate in the script  
and add mtu X - I don't know what X would be, though.
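A hypothetical sketch of what adding mtu X to one such line could look like, using the class numbers from the tc output quoted further down this thread - the device name and the mtu value here are guesses, not tested values:

```shell
# Hypothetical: classid and rates taken from the tc output in this thread;
# dev eth0 and mtu 16383 are assumptions - X would need to cover the
# largest chunk the nic's tcp segmentation offload emits.
tc class change dev eth0 parent 1:1 classid 1:14 htb \
    rate 256000bit ceil 282000bit mtu 16383
```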


It has definitely made a difference.  The highest rate I see is  
282312bits on a line with a ceil of 282000bits - and that is with  
24pps, which, according to your previous email . . . would be  
right in line with what it should be.


If you are shaping for a wan then I would turn off tso rather than  
use mtu. It should be better for latency as a big chunk of data is  
going to take more time to be transmitted and hurt interactive  
traffic. I can't imagine it being very good to effectively drop  
multiple segments at once either - but then shorewall may not setup  
with queues short enough to get drops.


Giants have fallen off, but are not completely eliminated.  Is  
this something I should be concerned with (having giants)?


As long as there aren't many I wouldn't care - I don't know why you  
still see them, though. I haven't got any nics that do tso so can't  
test.


  Would setting the
HTB MTU be more elegant?  I have avoided creating my own tc  
script, and have been using shorewall's internals . . . but if  
utilizing the HTB MTU setting is better, I will dive in and try  
to write a script that does the same thing as shorewall.


You would also lose some accuracy in the rate lookup tables if you
used mtu - the step becomes 16, 32, 64 etc. bytes depending on how big
an mtu you needed to use.
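For what it's worth, the step sizes Andy mentions can be sketched with shell arithmetic, assuming the usual 256-slot rate table (a sketch, not the exact kernel code):

```shell
# The rate table has 256 entries, so the step (cell) size grows with the
# mtu it has to cover: the default mtu of 2047 gives 8-byte steps, and
# each doubling of the mtu doubles the step - and the per-packet error.
for mtu in 2047 4095 8191 16383; do
  echo "mtu $mtu -> step $(( (mtu + 1) / 256 )) bytes"
done
```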


If you have a slow dsl link and really care about latency or not  
having to back off from its rate, then there are other ways to
tweak things. They involve patching and recompiling kernel/tc.


Andy.


___
LARTC mailing list
LARTC@mailman.ds9a.nl
http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc


Re: [LARTC] tc shown rate larger than ceil (was Weird rate in HTB)

2007-08-01 Thread Andy Furniss

Stonie Cooper wrote:
An earlier exchange about someone seeing the rate larger than the 
ceiling is posted below.  Andy explained the reason for the above 
ceiling rate in Daniel's output . . . but I just saw an example that 
doesn't fit.


  tc output 
class htb 1:14 parent 1:1 leaf 14: prio 1 quantum 3072 rate 256000bit 
ceil 282000bit burst 1820b/8 mpu 0b overhead 0b cburst 1851b/8 mpu 0b 
overhead 0b level 0

Sent 2639448 bytes 1128 pkt (dropped 0, overlimits 0 requeues 0)
rate 325360bit 17pps backlog 0b 25p requeues 0
lended: 981 borrowed: 122 giants: 1155


It's because you have giants - possibly because your nic does tcp 
segmentation offload so locally generated traffic goes through htb as 
big chunks before it gets segmented down to the nic's mtu.


You can check/turn that off with ethtool -k or you could use htb's mtu 
parameter for each rate (I'm not sure if you need to do ceils as well)
which makes the granularity of the rate table bigger so it can handle 
the larger mtu.
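For anyone following along, the ethtool commands would look something like this - the interface name eth0 is an assumption:

```shell
# Show offload settings (look for "tcp segmentation offload"):
ethtool -k eth0
# Turn tso off so htb sees packets at the nic's real mtu:
ethtool -K eth0 tso off
```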


Andy.


Re: [LARTC] tc shown rate larger than ceil (was Weird rate in HTB)

2007-08-01 Thread Stonie Cooper

Andy - Thanks!

I took the ethtool path, and turned off tcp segmentation on that nic;  
I was unsure how to set the HTB MTU - I use shorewall on a Gentoo  
system.


It has definitely made a difference.  The highest rate I see is  
282312bits on a line with a ceil of 282000bits - and that is with  
24pps, which, according to your previous email . . . would be right  
in line with what it should be.


Giants have fallen off, but are not completely eliminated.  Is this  
something I should be concerned with (having giants)?  Would setting  
the HTB MTU be more elegant?  I have avoided creating my own tc  
script, and have been using shorewall's internals . . . but if  
utilizing the HTB MTU setting is better, I will dive in and try to  
write a script that does the same thing as shorewall.


Stone

On Aug 1, 2007, at 1:43 PM, Andy Furniss wrote:


Stonie Cooper wrote:
An earlier exchange about someone seeing the rate larger than the  
ceiling is posted below.  Andy explained the reason for the above  
ceiling rate in Daniel's output . . . but I just saw an example  
that doesn't fit.

  tc output 
class htb 1:14 parent 1:1 leaf 14: prio 1 quantum 3072 rate  
256000bit ceil 282000bit burst 1820b/8 mpu 0b overhead 0b cburst  
1851b/8 mpu 0b overhead 0b level 0

Sent 2639448 bytes 1128 pkt (dropped 0, overlimits 0 requeues 0)
rate 325360bit 17pps backlog 0b 25p requeues 0
lended: 981 borrowed: 122 giants: 1155


It's because you have giants - possibly because your nic does tcp  
segmentation offload so locally generated traffic goes through htb  
as big chunks before it gets segmented down to the nic's mtu.


You can check/turn that off with ethtool -k or you could use htb's  
mtu parameter for each rate (I'm not sure if you need to do ceils  
as well) which makes the granularity of the rate table bigger so it
can handle the larger mtu.


Andy.




Re: [LARTC] tc shown rate larger than ceil (was Weird rate in HTB)

2007-08-01 Thread Andy Furniss

Stonie Cooper wrote:

Andy - Thanks!

I took the ethtool path, and turned off tcp segmentation on that nic; I 
was unsure how to set the HTB MTU - I use shorewall on a Gentoo system.


I think you would need to find each line with a rate in the script and 
add mtu X - I don't know what X would be, though.




It has definitely made a difference.  The highest rate I see is 
282312bits on a line with a ceil of 282000bits - and that is with 24pps, 
which, according to your previous email . . . would be right in line 
with what it should be.


If you are shaping for a wan then I would turn off tso rather than use 
mtu. It should be better for latency as a big chunk of data is going to 
take more time to be transmitted and hurt interactive traffic. I can't 
imagine it being very good to effectively drop multiple segments at once 
either - but then shorewall may not be set up with queues short enough to
get drops.




Giants have fallen off, but are not completely eliminated.  Is this 
something I should be concerned with (having giants)?


As long as there aren't many I wouldn't care - I don't know why you 
still see them, though. I haven't got any nics that do tso so can't test.


  Would setting the
HTB MTU be more elegant?  I have avoided creating my own tc script, and 
have been using shorewall's internals . . . but if utilizing the HTB MTU 
setting is better, I will dive in and try to write a script that does 
the same thing as shorewall.


You would also lose some accuracy in the rate lookup tables if you used
mtu - the step becomes 16, 32, 64 etc. bytes depending on how big an mtu
you needed to use.


If you have a slow dsl link and really care about latency or not having 
to back off from its rate, then there are other ways to tweak things.
They involve patching and recompiling kernel/tc.


Andy.


[LARTC] tc shown rate larger than ceil (was Weird rate in HTB)

2007-07-31 Thread Stonie Cooper
An earlier exchange about someone seeing the rate larger than the  
ceiling is posted below.  Andy explained the reason for the above  
ceiling rate in Daniel's output . . . but I just saw an example that  
doesn't fit.


 tc output 
class htb 1:14 parent 1:1 leaf 14: prio 1 quantum 3072 rate 256000bit  
ceil 282000bit burst 1820b/8 mpu 0b overhead 0b cburst 1851b/8 mpu 0b  
overhead 0b level 0

Sent 2639448 bytes 1128 pkt (dropped 0, overlimits 0 requeues 0)
rate 325360bit 17pps backlog 0b 25p requeues 0
lended: 981 borrowed: 122 giants: 1155
tokens: -48548 ctokens: -56127
 tc output 

I see a 43360 difference, where the rate is more than the  
ceiling . . . of which only 952bits are accounted for in the  
following exchange.  I have actually seen the rate be as much as  
80408bits off . . . at only 19pps.
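The numbers can be checked with a quick bit of shell arithmetic - the 56-bits-per-packet figure comes from the rate-table explanation in the quoted exchange:

```shell
# Shown rate minus ceil, versus the worst-case rate-table error at
# 17 pps with 8-byte (56-bit) steps, as explained in the quoted exchange.
echo $(( 325360 - 282000 ))   # 43360 bit difference observed
echo $(( 56 * 17 ))           # 952 bits accounted for by table error
```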


Is there something else I am missing?

Stone


Daniel Harold L. wrote on Tuesday 03 July 2007 22:50:
 Dear all,

 First, sorry for my bad English ..

 Tonight one of my clients is the victim of a UDP attack from the
internet. It's tons of UDP packets from the internet with destination
port 80. But when I look at the class of that victim client, the actual
class rate is higher than the configured class rate.

 Below is my screen capture. You can see class 1:913, which has an
actual rate of 105136bit while configured with a ceil of 96000bit. Also
its parent class (1:91) has an actual rate of 107680bit while configured
with a ceil of 96000bit.

 Is this normal? Or have I missed something in my script? Some time ago
I found this situation but forgot to capture the screen, and the traffic
was UDP too (maybe from a torrent-like client).

Yes it is normal!

The rate tables that tc uses normally have 8-byte steps, so an error of
up to 56 bits per packet is possible - and you have 300 pps.
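Spelled out as arithmetic (a sketch: 8-byte steps mean each packet can be under-charged by up to 7 bytes):

```shell
step_bytes=8
err_bits_per_pkt=$(( (step_bytes - 1) * 8 ))   # up to 56 bits per packet
pps=300
echo $(( err_bits_per_pkt * pps ))             # worst-case excess in bit/s
```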

There was a small patch submitted for tc to make the error fall on the
underrate rather than overrate side, but I think it got lost in the
middle of the long ATM overhead patch thread on netdev.

Andy.