Re: [LARTC] RV: LATENCY PROBLEMS

2004-05-14 Thread Andreas Klauer
On Friday 14 May 2004 23:59, Jason Boxman wrote:
> > Anyway - even if I weren't using IPP2P, P2P traffic wouldn't really
> > matter since I put all traffic into user classes. So the only person
> > who's suffering would be the P2P user. And I don't really care about
> > that. ;-)
>
> Interesting.  How did you accomplish that?

It's nothing special. Each user gets his/her own HTB class. No matter what 
kind of traffic the user generates (interactive, P2P, ...), it shouldn't 
affect the other users. This works well in small networks. It wouldn't 
work well if I had to create 500 classes for an ADSL line, because the 
guaranteed rates per user would be too low.

> So right now each user is getting a prio disc?

It looks like this:
http://www.metamorpher.de/files/fairnat_ipp2p.png
(warning, big image: ~5000x2000 pixels)
Blue = qdisc, Green = class; Blue arrow = filter.

Please note that 1:3 is a class for local LAN traffic. 
The others are user classes.

Every user gets a prio qdisc with 4 bands (the 4th band is for P2P; everything 
else gets classified by the TOS priomap). The prio leaves get SFQ on top.
I don't know if that actually makes sense, but it works well for me.
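To see the shape of such a setup, here is a minimal tc sketch of one user 
class with the 4-band prio/SFQ layout described above. All handles, rates, 
and the fwmark are illustrative assumptions, not values from the actual 
Fair NAT script:

```shell
DEV=eth0

# HTB root with one parent class; each user then gets a child class,
# so one user's P2P traffic can only starve that same user
tc qdisc add dev $DEV root handle 1: htb
tc class add dev $DEV parent 1: classid 1:1 htb rate 128kbit
tc class add dev $DEV parent 1:1 classid 1:10 htb rate 32kbit ceil 128kbit

# Per-user prio qdisc with 4 bands: bands 1-3 are filled from the TOS
# priomap, band 4 is reserved for P2P (nothing maps to it by default)
tc qdisc add dev $DEV parent 1:10 handle 10: prio bands 4 \
    priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1

# SFQ on each prio leaf so flows within a band share fairly
tc qdisc add dev $DEV parent 10:1 handle 101: sfq perturb 10
tc qdisc add dev $DEV parent 10:2 handle 102: sfq perturb 10
tc qdisc add dev $DEV parent 10:3 handle 103: sfq perturb 10
tc qdisc add dev $DEV parent 10:4 handle 104: sfq perturb 10

# Packets carrying an assumed P2P fwmark (set elsewhere, e.g. by ipp2p)
# are steered into the 4th band
tc filter add dev $DEV parent 10: protocol ip handle 4 fw classid 10:4
```

A filter on 1: would still be needed to direct each user's IP into their 
own class; that part is omitted here.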

Andreas
___
LARTC mailing list / [EMAIL PROTECTED]
http://mailman.ds9a.nl/mailman/listinfo/lartc HOWTO: http://lartc.org/


Re: [LARTC] RV: LATENCY PROBLEMS

2004-05-14 Thread Jason Boxman
On Friday 14 May 2004 16:57, Andreas Klauer wrote:

> > rebecca:~# cat /etc/l7-protocols/edonkey.pat
> > ...
> > ^[\xe3\xc5\xe5\xd4].?.?.?.? \
> > ([\x01\x02\x05\x14\x15\x16\x18\x19\x1a\x1b\x1c\x20\x21\x32\x33\x34 \
> > \x35\x46\x38\x40\x41\x42\x43\x46\x47\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f \
> > \x50\x51\x52\x53\x54\x55\x56\x57\x58\x5b\x5c\x60\x81\x82\x90\x91\x93 \
> > \x96\x97\x98\x99\x9a\x9b\x9c\x9e\xa0\xa1\xa2\xa3\xa4]| \
> > \x59?[ -~]|\x96$)
> > ...
> > #ipp2p essentially uses "\xe3\x47", which doesn't seem at all right
> > to me. ...
>
> It seems that you have done quite a bit of research here.
> You should definitely tell the IPP2P author about this.

Research, yeah.  But I didn't write the above filter. :)  I just found that it 
works a lot better than the one IPP2P presently uses.

> Anyway - even if I weren't using IPP2P, P2P traffic wouldn't really matter
> since I put all traffic into user classes. So the only person who's
> suffering would be the P2P user. And I don't really care about that. ;-)

Interesting.  How did you accomplish that?  I'm used to using HTB where you 
apportion fixed amounts of bandwidth per class.  Does each user have a fixed 
rate?  Sounds like a cool configuration.

> I'm not specially limiting P2P traffic either. I just put it into a fourth
> prio band of a prio qdisc, which works okay. Probably I should try
> creating two HTB classes per user instead. But this would mean having 3
> HTB classes per user (one user class, one default child, one p2p child)
> which is a bit much.

So right now each user is getting a prio disc?

> Andreas



Re: [LARTC] RV: LATENCY PROBLEMS

2004-05-14 Thread Andreas Klauer
On Friday 14 May 2004 21:06, Jason Boxman wrote:
> Precisely.  I have mine limited to 8kbit on a 256kbit connection.  The
> HTB manual mentions some interesting results[1] when you mess with prio,
> though.

Yes, prio is very interesting... ;-)

> rebecca:~# cat /etc/l7-protocols/edonkey.pat
> ...
> ^[\xe3\xc5\xe5\xd4].?.?.?.? \
> ([\x01\x02\x05\x14\x15\x16\x18\x19\x1a\x1b\x1c\x20\x21\x32\x33\x34 \
> \x35\x46\x38\x40\x41\x42\x43\x46\x47\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f \
> \x50\x51\x52\x53\x54\x55\x56\x57\x58\x5b\x5c\x60\x81\x82\x90\x91\x93 \
> \x96\x97\x98\x99\x9a\x9b\x9c\x9e\xa0\xa1\xa2\xa3\xa4]| \
> \x59?[ -~]|\x96$)
> ...
> #ipp2p essentially uses "\xe3\x47", which doesn't seem at all right
> to me. ...

It seems that you have done quite a bit of research here.
You should definitely tell the IPP2P author about this.

Anyway - even if I weren't using IPP2P, P2P traffic wouldn't really matter 
since I put all traffic into user classes. So the only person who's 
suffering would be the P2P user. And I don't really care about that. ;-)

I'm not specially limiting P2P traffic either. I just put it into a fourth 
prio band of a prio qdisc, which works okay. Probably I should try 
creating two HTB classes per user instead. But this would mean having 3 
HTB classes per user (one user class, one default child, one p2p child) 
which is a bit much.

Andreas


Re: [LARTC] RV: LATENCY PROBLEMS

2004-05-14 Thread Jason Boxman
On Friday 14 May 2004 13:26, Andreas Klauer wrote:
> On Friday 14 May 2004 17:18, GoMi wrote:
> > I am network administrator for my university dorm. We are about 300
> > users, and we have 2 ADSL connections doing load balancing with 300kbits
> > upstream and 2Mbit downstream.
>
> That's not much bandwidth if there are a lot of p2p users.

Indeed.

> > and with ipp2p I mark p2p traffic allocating it under the
> > non-interactive queue.
>
> That's good - but you didn't write anything about your setup.
> Make sure that p2p traffic is in a class that gets restricted
> aggressively. If you're using HTB, the P2P class should have
> to borrow everything and have lower ceil and prio than other classes.

Precisely.  I have mine limited to 8kbit on a 256kbit connection.  The HTB 
manual mentions some interesting results[1] when you mess with prio, though.

(I just lowered the ceil for my p2p bulk class to 16kbit less than my max, 
192kbit, and I'm experiencing a significant decrease in lag.  Thanks 
Andreas!)

[1] http://luxik.cdi.cz/~devik/qos/htb/manual/userg.htm#prio
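The ceil tweak described in the parenthetical above can be applied to a 
running setup with tc class change; the device, class IDs, and parent here 
are hypothetical:

```shell
# Lower the p2p/bulk class ceil to 192kbit (16kbit under the shaped maximum)
# while keeping a small guaranteed rate and the worst HTB prio
tc class change dev eth0 parent 1:1 classid 1:12 htb \
    rate 8kbit ceil 192kbit prio 7
```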

When using IPP2P[2] with 2.4.24 I found that, at least with the edonkey2000 
network, it frequently missed enough packets to kill interactivity on my 
link.  Packets that should not have been were finding their way into the class 
for TCP ACKs and ICMP and into the default class, indicating packets were 
being missed.  I was using CONNMARK, as suggested, to maintain state on which 
connections were edonkey2000.

I upgraded to 2.6.6 last night and patched my kernel and iptables with 
L7-Filter[3].  I'm using the edonkey2000 layer 7 match and it's proving to be 
extremely effective, even with the Overnet protocol for edonkey2000 enabled, 
which is when I started having problems with IPP2P[2].

A quick snippet of the pattern match[4] for edonkey2000 from L7 indicates just 
how nasty the match is, and why IPP2P[2] might be missing some packets:

rebecca:~# cat /etc/l7-protocols/edonkey.pat
...
^[\xe3\xc5\xe5\xd4].?.?.?.? \
([\x01\x02\x05\x14\x15\x16\x18\x19\x1a\x1b\x1c\x20\x21\x32\x33\x34 \
\x35\x46\x38\x40\x41\x42\x43\x46\x47\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f \
\x50\x51\x52\x53\x54\x55\x56\x57\x58\x5b\x5c\x60\x81\x82\x90\x91\x93 \
\x96\x97\x98\x99\x9a\x9b\x9c\x9e\xa0\xa1\xa2\xa3\xa4]| \
\x59?[ -~]|\x96$)
...
#ipp2p essentially uses "\xe3\x47", which doesn't seem at all right to me.
...
EOF

[2] http://rnvs.informatik.uni-leipzig.de/ipp2p/index_en.html
[3] http://l7-filter.sourceforge.net/
[4] http://sourceforge.net/project/showfiles.php?\
  group_id=80085&package_id=81756
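A sketch of how the layer 7 match can be combined with CONNMARK so a 
connection only has to be pattern-matched once; the mark value and chain 
are assumptions, the option names are those documented for the l7-filter 
and CONNMARK extensions:

```shell
# Recover any mark previously saved on this connection
iptables -t mangle -A PREROUTING -j CONNMARK --restore-mark
# Pattern-match only connections that are still unmarked
iptables -t mangle -A PREROUTING -m mark --mark 0 \
    -m layer7 --l7proto edonkey -j MARK --set-mark 4
# Save the verdict so later packets match on the mark alone
iptables -t mangle -A PREROUTING -m mark --mark 4 -j CONNMARK --save-mark
```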


> Andreas



RE: [LARTC] RV: LATENCY PROBLEMS

2004-05-14 Thread GoMi
Hi there Roy, first of all, thanks :)

I am talking about http, smtp, pop3 interactive traffic for example.

As I said, my DSL has 300kbit upstream and 2Mbit downstream, and I have an
HTB rate of 150kbit and an ingress policy running at 1500kbit for downstream
control. That way I can make sure there are no queues at my ISP or my
DSL modem, and the queues are managed at the Linux box, right?

I also have the iplimit module, and each user is limited to 15 parallel
connections, so they cannot hog all the bandwidth.
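As a rough sketch, that kind of setup might look like the following; the 
device names, burst size, and chain are assumptions:

```shell
# Upstream: shape at 150kbit (half the 300kbit uplink) so the DSL modem
# never builds a queue
tc qdisc add dev ppp0 root handle 1: htb default 10
tc class add dev ppp0 parent 1: classid 1:10 htb rate 150kbit

# Downstream: police at 1500kbit (under the 2Mbit line rate) so queuing
# happens on the Linux box instead of at the ISP
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
    police rate 1500kbit burst 10k drop flowid :1

# Cap each host at 15 parallel TCP connections with the old iplimit match
iptables -A FORWARD -p tcp --syn -m iplimit --iplimit-above 15 -j DROP
```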

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
behalf of Roy
Sent: Friday, 14 May 2004 18:20
To: GoMi; [EMAIL PROTECTED]
Subject: Re: [LARTC] RV: LATENCY PROBLEMS

You did not say what kind of interactive traffic you have, or whether your
DSL is capable of holding it. 800 connections should not be a problem for
such a CPU, if you are sure you did not saturate your DSL in both
directions. And how about HTTP traffic?

Then here is another possibility: even if TCP can be shaped on forward,
there is still one problem. If your link is almost full, new TCP
connections are not controllable at the start. All you can do for now is to
limit your max rate to 80% of the link speed. This problem usually happens
when someone is using BitTorrent; that software creates 20 or more
connections for each file, and each TCP connection initiation takes about
3kb of data which can't be shaped.

At first, try to limit your link speed to 50% of capacity, then check if it
helps to reduce latency.

I am working on an IMQ driver with TCP prediction, which should fix some
problems.

- Original Message - 
From: "GoMi" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, May 14, 2004 6:18 PM
Subject: [LARTC] RV: LATENCY PROBLEMS


> Hello there,
>I'm having lots of problems with my setup here. Let me explain:
>
> I am network administrator for my university dorm. We are about 300 users,
> and we have 2 ADSL connections doing load balancing with 300kbits upstream
> and 2Mbit downstream.
>
> The load balancing is working great; we are doing connection tracking so I
> can mark and hence prioritize interactive traffic and ACKs on the upstream,
> and with ipp2p I mark p2p traffic, allocating it under the non-interactive
> queue.
>
> The problem comes when there are more than 70 users, plus or minus, when
> interactive traffic stops working at all, or has a very, VERY high latency.
>
> I have a setup based on HTB and WRR.
>
> I have only two possible explanations: either the CONNMARK module
> introduces a very high latency when the number of entries in
> /proc/net/ip_conntrack rises above, let's say, 800, or maybe the Ethernet
> cards I have (cheap ones) are getting saturated with that amount of traffic.
>
> My server is an AMD Duron 800MHz with 768Mb of RAM.
>
> Anyone knowing the marvellous solution? :)
>
> Thank you guys..
>
>
> ___
> LARTC mailing list / [EMAIL PROTECTED]
> http://mailman.ds9a.nl/mailman/listinfo/lartc HOWTO: http://lartc.org/
>



Re: [LARTC] RV: LATENCY PROBLEMS

2004-05-14 Thread Andreas Klauer
On Friday 14 May 2004 17:18, GoMi wrote:
> I am network administrator for my university dorm. We are about 300
> users, and we have 2 ADSL connections doing load balancing with 300kbits
> upstream and 2Mbit downstream.

That's not much bandwidth if there are a lot of p2p users.

> and with ipp2p I mark p2p traffic allocating it under the
> non-interactive queue.

That's good - but you didn't write anything about your setup.
Make sure that p2p traffic is in a class that gets restricted
aggressively. If you're using HTB, the P2P class should have
to borrow everything and have lower ceil and prio than other classes.
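A hypothetical class along those lines (device, IDs, rates, and the fwmark 
are examples): a near-zero guaranteed rate so it must borrow everything, a 
low ceil, and the worst prio so it is served last:

```shell
tc class add dev eth0 parent 1:1 classid 1:20 htb \
    rate 1kbit ceil 64kbit prio 7
tc qdisc add dev eth0 parent 1:20 handle 20: sfq perturb 10
# Steer packets carrying the assumed ipp2p fwmark into the restricted class
tc filter add dev eth0 parent 1: protocol ip handle 4 fw classid 1:20
```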

> I have only two possible explanations: either the CONNMARK module
> introduces a very high latency when the number of entries in
> /proc/net/ip_conntrack rises above, let's say, 800, or maybe the Ethernet
> cards I have (cheap ones) are getting saturated with that amount of
> traffic.
>
> My server is an AMD Duron 800MHz with 768Mb of RAM.

I doubt that it's CONNMARK's fault. I have only a 233MHz machine doing the
routing, and the CPU still has loads of idle time. That little network
traffic isn't so bad, and 800 connections are few for 70 users.

Andreas


Re: [LARTC] RV: LATENCY PROBLEMS

2004-05-14 Thread Roy
You did not say what kind of interactive traffic you have, or whether your
DSL is capable of holding it. 800 connections should not be a problem for
such a CPU, if you are sure you did not saturate your DSL in both
directions. And how about HTTP traffic?

Then here is another possibility: even if TCP can be shaped on forward,
there is still one problem. If your link is almost full, new TCP
connections are not controllable at the start. All you can do for now is to
limit your max rate to 80% of the link speed. This problem usually happens
when someone is using BitTorrent; that software creates 20 or more
connections for each file, and each TCP connection initiation takes about
3kb of data which can't be shaped.

At first, try to limit your link speed to 50% of capacity, then check if it
helps to reduce latency.

I am working on an IMQ driver with TCP prediction, which should fix some
problems.

- Original Message - 
From: "GoMi" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, May 14, 2004 6:18 PM
Subject: [LARTC] RV: LATENCY PROBLEMS


> Hello there,
>I'm having lots of problems with my setup here. Let me explain:
>
> I am network administrator for my university dorm. We are about 300 users,
> and we have 2 ADSL connections doing load balancing with 300kbits upstream
> and 2Mbit downstream.
>
> The load balancing is working great; we are doing connection tracking so I
> can mark and hence prioritize interactive traffic and ACKs on the upstream,
> and with ipp2p I mark p2p traffic, allocating it under the non-interactive
> queue.
>
> The problem comes when there are more than 70 users, plus or minus, when
> interactive traffic stops working at all, or has a very, VERY high latency.
>
> I have a setup based on HTB and WRR.
>
> I have only two possible explanations: either the CONNMARK module
> introduces a very high latency when the number of entries in
> /proc/net/ip_conntrack rises above, let's say, 800, or maybe the Ethernet
> cards I have (cheap ones) are getting saturated with that amount of traffic.
>
> My server is an AMD Duron 800MHz with 768Mb of RAM.
>
> Anyone knowing the marvellous solution? :)
>
> Thank you guys..
>
>
> ___
> LARTC mailing list / [EMAIL PROTECTED]
> http://mailman.ds9a.nl/mailman/listinfo/lartc HOWTO: http://lartc.org/
>



[LARTC] RV: LATENCY PROBLEMS

2004-05-14 Thread GoMi
Hello there, 
   I'm having lots of problems with my setup here. Let me explain:

I am network administrator for my university dorm. We are about 300 users,
and we have 2 ADSL connections doing load balancing with 300kbits upstream
and 2Mbit downstream. 

The load balancing is working great; we are doing connection tracking so I
can mark and hence prioritize interactive traffic and ACKs on the upstream,
and with ipp2p I mark p2p traffic, allocating it under the non-interactive
queue.
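One common way to pick out the ACKs for the interactive queue is to match 
small, bare ACK segments in the mangle table; the mark value here is an 
arbitrary example:

```shell
# Mark pure ACKs no longer than 64 bytes so they jump the bulk queues
iptables -t mangle -A POSTROUTING -p tcp \
    --tcp-flags SYN,RST,ACK ACK -m length --length :64 \
    -j MARK --set-mark 1
```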

The problem comes when there are more than 70 users, plus or minus, when
interactive traffic stops working at all, or has a very, VERY high latency.

I have a setup based on HTB and WRR. 

I have only two possible explanations: either the CONNMARK module
introduces a very high latency when the number of entries in
/proc/net/ip_conntrack rises above, let's say, 800, or maybe the Ethernet
cards I have (cheap ones) are getting saturated with that amount of traffic.

My server is an AMD Duron 800MHz with 768Mb of RAM.

Anyone knowing the marvellous solution? :)

Thank you guys.. 




Re: [LARTC] HTB MPU

2004-05-14 Thread Andy Furniss
Jason Boxman wrote:

> On Thursday 13 May 2004 13:28, Andreas Klauer wrote:
>> On Thursday 13 May 2004 16:38, Andreas Klauer wrote:
>>> On Thursday 13 May 2004 15:54, Andy Furniss wrote:
>>>> I've just noticed that there is a patch on devik's site which does mpu
>>>> and overhead.
>>> I'll give it a try. Thanks for the hint.
>> Well, patching was a little difficult... it didn't like the debian patch
>> and I didn't succeed in joining the two patches together because of the
>> weird inject stuff. But anyway.  It seems to work, and it looks useful, so
>> I added it to the "Hacks" section of my Fair NAT script together with a
>> patched binary.
>
> Nifty.
>
> But how do you determine what your minimum packet unit (MPU) is?  How about
> overhead for a PPPoE connection?

If you can get a cell count from your modem you can work it out with 
ping.  I don't know what your PPPoE overhead is.

> With shaping I can max my upstream and still maintain ~ 120ms ping times, but
> I'd like to get it down to around ~ 70ms.

Your upstream worst case depends on your bitrate and your MTU.  If it's 
128k you add about 90ms; at 256k, about 45ms for 1500-byte packets.  What's 
yours?
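Those figures are just the serialization delay of one full-size packet, 
which is easy to check with shell arithmetic (integer division, so the 
fractions are dropped):

```shell
# delay_ms = packet_bytes * 8 * 1000 / rate_in_bits_per_second
echo $(( 1500 * 8 * 1000 / 128000 ))   # 128kbit uplink: 93 ms
echo $(( 1500 * 8 * 1000 / 256000 ))   # 256kbit uplink: 46 ms
```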

Andy.



[LARTC] Re: Multipath Connection problem on RH-8.0

2004-05-14 Thread Muhammad Reza
After some upgrading (RH-9.0 with 2.4.26 and Julian's patch, plus 
iproute2-ss020116) and re-configuration of the network, and having learned 
from "2.4 Keeping them Alive", I can now establish load-balanced 
connections to the Internet, thanks - though it took a while, because I 
have to do some NAT for my local network.

This is my ip route output:
[EMAIL PROTECTED] root]# ip route list table all
default via 172.16.0.1 dev eth0  table 201  proto static  src 172.16.0.232
prohibit default  table 201  proto static  metric 1
default via 10.10.10.1 dev eth1  table 202  proto static  src 10.10.10.2
prohibit default  table 202  proto static  metric 1
default  table 222  proto static
  nexthop via 10.10.10.1  dev eth1 weight 1
  nexthop via 172.16.0.1  dev eth0 weight 1
10.10.10.0/30 dev eth1  proto kernel  scope link  src 10.10.10.2
172.16.0.0/24 dev eth0  proto kernel  scope link  src 172.16.0.232
192.168.0.0/24 dev eth2  proto kernel  scope link  src 192.168.0.1
127.0.0.0/8 dev lo  scope link
broadcast 10.10.10.3 dev eth1  table local  proto kernel  scope link  
src 10.10.10.2
local 10.10.10.2 dev eth1  table local  proto kernel  scope host  src 
10.10.10.2
broadcast 192.168.0.255 dev eth2  table local  proto kernel  scope link  
src 192.168.0.1
broadcast 127.255.255.255 dev lo  table local  proto kernel  scope link  
src 127.0.0.1
broadcast 10.10.10.0 dev eth1  table local  proto kernel  scope link  
src 10.10.10.2
broadcast 172.16.0.0 dev eth0  table local  proto kernel  scope link  
src 172.16.0.232
local 192.168.0.1 dev eth2  table local  proto kernel  scope host  src 
192.168.0.1
broadcast 192.168.0.0 dev eth2  table local  proto kernel  scope link  
src 192.168.0.1
local 172.16.0.232 dev eth0  table local  proto kernel  scope host  src 
172.16.0.232
broadcast 172.16.0.255 dev eth0  table local  proto kernel  scope link  
src 172.16.0.232
broadcast 127.0.0.0 dev lo  table local  proto kernel  scope link  src 
127.0.0.1
local 127.0.0.1 dev lo  table local  proto kernel  scope host  src 
127.0.0.1
local 127.0.0.0/8 dev lo  table local  proto kernel  scope host  src 
127.0.0.1

and this is the script that I use (thanks to why):

#include <stdio.h>
#include <stdlib.h>

int main()
{
system ("ip link set eth0 up");
system ("ip addr flush dev eth0");
system ("ip addr add 172.16.0.232/24 brd 172.16.0.255 dev eth0");
system ("ip rule add prio 50 table main");
system ("ip route del default table main 2>/dev/null");
system ("ip link set eth1 up");
system ("ip addr flush dev eth1");
system ("ip addr add 10.10.10.2/30 brd 10.10.10.3 dev eth1");
system ("ip link set eth2 up");
system ("ip addr flush dev eth2");
system ("ip addr add 192.168.0.1/24 brd 192.168.0.255 dev eth2");
system ("ip rule add prio 201 from 172.16.0.232/24 table 201");
system ("ip route add default via 172.16.0.1 dev eth0 src 172.16.0.232 
proto static table 201");
system ("ip route append prohibit default table 202 metric 1 proto 
static");

system ("ip rule add prio 202 from 10.10.10.2/30 table 202");
system ("ip route add default via 10.10.10.1 dev eth1 src 10.10.10.2 
proto static table 202");
system ("ip route append prohibit default table 201 metric 1 proto 
static");

system ("ip rule add prio 222 table 222");
system ("ip route add default table 222 proto static nexthop via 
10.10.10.1 dev eth1 nexthop via 172.16.0.1 dev eth0");

printf("-[ Net Interfaces are Activated ]-\n");
}
talk to u later...

thanks and best regards
reza
Julian Anastasov wrote:

> Hello,
>
> To all: do you have some working script(s) that we can
> recommend for setups with 2 or 3 uplinks in multipath route? Then we
> can link them to the web page as reference.
>
> On Thu, 13 May 2004, Muhammad Reza wrote:
>
>> now i downgrade to rh-7.2 (2.4.20 w/ julian patch) and iproute version
>> iproute2-ss010824, but still can't do multipath routing.
>
> Then can you explain what you learned from "2.4 Keeping them alive"
> and what you have to keep the state for each GW from the multipath route
> valid?
>
>> this is my trace with ip route get:
>> [EMAIL PROTECTED] root]# ip route get 202.138.253.17
>> 202.138.253.17 via 172.16.0.1 dev eth0 src 172.16.0.232
>> cache mtu 1500 advmss 1460
>> [EMAIL PROTECTED] root]# ip route get 202.138.253.17 from 192.168.0.2
>> 202.138.253.17 from 192.168.0.2 via 192.168.0.1 dev eth1
>> cache mtu 1500 advmss 1460
>> [EMAIL PROTECTED] root]# ip route get 202.138.253.17 from 172.16.0.232
>> 202.138.253.17 from 172.16.0.232 via 172.16.0.1 dev eth0
>> cache mtu 1500 advmss 1460
>> [EMAIL PROTECTED] root]# ip route list table main
>> 192.168.0.0/30 dev eth1 proto kernel scope link src 192.168.0.2
>
> This is strange:
>
>> 172.16.0.0/24 dev eth0 scope link
>> 10.10.10.0/24 dev eth2 scope link
>
> It means your settings are not created from the script.
> Also, the script does not bring dev eth0 up, there is a missing
> "up".
>
>> 127.0.0.0/8 dev lo scope link
>> [EMAIL PROTECTED] root]# ip route list table MRA
>> default via 172.16.0.1 dev eth0 proto static src 172.16.0.232
>> prohibit default proto static metric 1
>
> What do you have in table AD

Re: [LARTC] HTB MPU

2004-05-14 Thread Andy Furniss
Ed Wildgoose wrote:
> Andy Furniss wrote:
>
>> Hi.
>>
>> I wrote in a reply to a mail on here recently that you can't set mpu
>> (minimum packet unit) on HTB as you can on CBQ.
>>
>> I've just noticed that there is a patch on devik's site which does mpu
>> and overhead.
>>
>> http://luxik.cdi.cz/~devik/qos/htb/
>>
>> For dsl users mpu is, for practical purposes, going to be 106 -
>> overhead is still variable though, depending on packet size.
>>
>> Having these should let you push upstream bandwidth rates a bit closer
>> to the limit.
>
> What about changing that patch a little (bear in mind I don't understand
> how it works though).
>
> It appears that you could change the patch in tc/core, in the function
> tc_calc_rtable, from:
>
>  + if (overhead)
>  + sz += overhead;
>
> to something like:
>
>  + if (overhead)
>  + sz += (((sz-1)/mpu)+1) * overhead;
>
> Where that little calculation is trying to turn the mpu into a packet
> size, work out how many packets would be required for the size (sz) of
> data, and apply the overhead per packet.  You would then set mpu to be
> the atm packet size, i.e. 54.
>
> To be honest though, this packing of the params into a single var seems
> unnecessary.  The function tc_calc_rtab is only obviously used in the
> tc code, and it could easily be changed to have a prototype with an
> extra param.  I would have to have a flick through the rest of the code,
> but it might be quite easy to add per-packet overhead to the cbq code in
> the same way, and also whatever m_police is?
>
> Can someone with a working setup try this out and see if it helps?
The patch author has mailed with a similar suggestion, so there may be 
something new soon. People will need to work out their PPP overhead 
first. I know mine for pppoa/vc-mux in the UK - it's 10 (the RFC says 9 
or 10) - so it's lucky my modem gives a cell count so I can tell easily.
I don't know how many variants there are, or what figures they use.

Andy.



Re: [LARTC] HTB MPU

2004-05-14 Thread Ed Wildgoose
Andy Furniss wrote:

Hi.

I wrote in a reply to a mail on here recently that you can't set mpu 
(minimum packet unit) on HTB as you can on CBQ.

I've just noticed that there is a patch on devik's site which does mpu 
and overhead.

http://luxik.cdi.cz/~devik/qos/htb/

For dsl users mpu is, for practical purposes going to be 106 - 
overhead is still variable though, depending on packet size.

Having these should let you push upstream bandwidth rates a bit closer 
to the limit.


What about changing that patch a little (bear in mind I don't understand 
how it works though).

It appears that you could change the patch in tc/core, in the function 
tc_calc_rtable, from:

 + if (overhead)
 + sz += overhead;
to something like:

 + if (overhead)
 + sz += (((sz-1)/mpu)+1) * overhead;
Where that little calculation is trying to turn the mpu into a packet 
size, work out how many packets would be required for the size (sz) of 
data, and apply the overhead per packet.  You would then set mpu to be 
the atm packet size, i.e. 54.

To be honest though, this packing of the params into a single var seems 
unnecessary.  The function tc_calc_rtab is only obviously used in the 
tc code, and it could easily be changed to have a prototype with an 
extra param.  I would have to have a flick through the rest of the code, 
but it might be quite easy to add per-packet overhead to the cbq code in 
the same way, and also whatever m_police is?

Can someone with a working setup try this out and see if it helps?
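To see what the proposed expression charges, here is the same integer 
arithmetic in shell, with mpu set to the 54 mentioned above and a 
hypothetical overhead of 10 bytes: one overhead is added per mpu-sized 
chunk of sz.

```shell
mpu=54; overhead=10
for sz in 54 100 1500; do
  # ceil(sz/mpu) chunks, each charged one per-packet overhead
  echo "$sz -> $(( sz + (((sz - 1) / mpu) + 1) * overhead ))"
done
```

So a 1500-byte packet, counted as 28 chunks, is charged 280 bytes of 
overhead and billed as 1780 bytes on the wire.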

Ed W
