Re: [LARTC] HTB and bursts

2007-05-11 Thread Andy Furniss

Pablo Fernandes Yahoo wrote:


I would like the customer to have a stable 150kbit for downloads. But at
the beginning of a connection, I would like to give a 200kbit burst.


Depends what you mean - burst is an amount of data, not a bitrate. If you
want them (using your setup) to have 25k of data at unlimited rate, then
burst 25k cburst 25k should do it.

I think that if your class has a ceil different from its rate, then giving
a burst but not a cburst will give them burst bytes capped at the ceil rate.

I haven't tested the exact behavior or read all recent posts yet.
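
For example, a minimal untested sketch against Pablo's classid 1:5 class
(the 25k figures come from the paragraph above; device, rates and the sfq
leaf are taken from his setup):

tc class add dev eth0 parent 1:1 classid 1:5 htb rate 150kbit ceil 150kbit burst 25k cburst 25k
tc qdisc add dev eth0 parent 1:5 handle 5: sfq perturb 10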

Andy.



Re: [LARTC] limit bandwidth per host question

2007-05-11 Thread Andy Furniss

Andy Furniss wrote:

tc class add dev eth2 parent 1:0 classid 1:2 htb rate 255kbit burst 255kbit



Burst is a good idea


Actually you need to specify both burst and cburst for it to work, and I
suppose the law doesn't stop you being more generous than 255kbit - I
just tried 100k (= 100k bytes) and browsing isn't too bad.


Browsing and downloading together with just fifo is horrible, though. I
tried HTB with the prio qdisc and it was disappointing WRT latency. HTB
class prio was far better. In both cases I also had sfq on the leaf of
the tcp class, which makes browsing while downloading nicer and, for tcp
games, didn't hurt latency too much.


I was only testing with one user, though I scripted two; I'll get round
to playing with curl-loader one day. There's bound to be a mistake
somewhere, but I paste below what I did. class/flowids are hex and you
have 0- after : (minor) to play with - you'll need a more sensible
numbering system than I chose.


Policing was also not too bad.
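
For reference, an untested ingress-policing sketch using the same
255kbit/100k figures (the per-host source match is assumed, not from
the scripts below):

tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip prio 1 u32 match ip src 192.168.0.3/32 police rate 255kbit burst 100k drop flowid :1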

Andy.

cat htb-255-eth0-prio-htb

set -x
IP=/sbin/ip
TC=/sbin/tc

$TC qdisc del dev eth0 root &>/dev/null

if [ "$1" = "stop" ]
then
echo "stopping"
exit
fi

$TC qdisc add dev eth0 root handle 1: htb

$TC class add dev eth0 parent 1: classid 1:1 htb rate 255kbit burst 100k cburst 100k
$TC class add dev eth0 parent 1:1 classid 1:11 htb prio 0 rate 200kbit ceil 255kbit burst 10k cburst 10k

$TC qdisc add dev eth0 parent 1:11 bfifo limit 50k
$TC class add dev eth0 parent 1:1 classid 1:12 htb prio 1 rate 55kbit ceil 255kbit burst 90k cburst 90k

$TC qdisc add dev eth0 parent 1:12 sfq limit 30

$TC filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dst 192.168.0.3 flowid 1:1
$TC filter add dev eth0 parent 1:1 protocol ip prio 1 u32 match ip protocol 6 0xff flowid 1:12
$TC filter add dev eth0 parent 1:1 protocol ip prio 2 u32 match u32 0 0 flowid 1:11


$TC class add dev eth0 parent 1: classid 1:2 htb rate 255kbit burst 100k cburst 100k
$TC class add dev eth0 parent 1:2 classid 1:21 htb prio 0 rate 200kbit ceil 255kbit burst 10k cburst 10k

$TC qdisc add dev eth0 parent 1:21 bfifo limit 50k
$TC class add dev eth0 parent 1:2 classid 1:22 htb prio 1 rate 55kbit ceil 255kbit burst 90k cburst 90k

$TC qdisc add dev eth0 parent 1:22 sfq limit 30

$TC filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dst 192.168.0.99 flowid 1:2
$TC filter add dev eth0 parent 1:2 protocol ip prio 1 u32 match ip protocol 6 0xff flowid 1:22
$TC filter add dev eth0 parent 1:2 protocol ip prio 2 u32 match u32 0 0 flowid 1:21



cat htb-255-eth0-prio


set -x
IP=/sbin/ip
TC=/sbin/tc

$TC qdisc del dev eth0 root &>/dev/null

if [ "$1" = "stop" ]
then
echo "stopping"
exit
fi

$TC qdisc add dev eth0 root handle 1: htb

$TC class add dev eth0 parent 1: classid 1:1 htb rate 255kbit burst 100k cburst 100k
$TC qdisc add dev eth0 parent 1:1 handle 2: prio bands 2 priomap 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1

$TC qdisc add dev eth0 parent 2:1 bfifo limit 50k
$TC qdisc add dev eth0 parent 2:2 sfq limit 30

$TC filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dst 192.168.0.3 flowid 1:1
$TC filter add dev eth0 parent 2: protocol ip prio 1 u32 match ip protocol 6 0xff flowid 2:2
$TC filter add dev eth0 parent 2: protocol ip prio 2 u32 match u32 0 0 flowid 2:1


$TC class add dev eth0 parent 1: classid 1:2 htb rate 255kbit burst 100k cburst 100k
$TC qdisc add dev eth0 parent 1:2 handle 3: prio bands 2 priomap 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1

$TC qdisc add dev eth0 parent 3:1 bfifo limit 50k
$TC qdisc add dev eth0 parent 3:2 sfq limit 30

$TC filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dst 192.168.0.99 flowid 1:2
$TC filter add dev eth0 parent 3: protocol ip prio 1 u32 match ip protocol 6 0xff flowid 3:2
$TC filter add dev eth0 parent 3: protocol ip prio 2 u32 match u32 0 0 flowid 3:1







Re: [LARTC] PRIO and TBF is much better than HTB??

2007-05-11 Thread Andy Furniss

Simo wrote:


#define UPLOAD 1000kbps


I've never used tcng/tcsim, if that's what this is - kbps means kilobytes
per second to "normal" tc.




  $low = class{ tbf (rate 300kbps, burst 1510B, mtu 1510B, limit 3000B); }


limit 3000B is not even enough for two packets (1500 MTU = 1514 bytes to tc
on ethernet) and would hurt performance on a real WAN.



every 0.0008s send TCP_PCK($tcp_dport=22) 0 x 60

/* 800kbit/s  */


Testing with a constant packet stream is not very representative of real TCP.



The delay with the combination of PRIO and TBF is much better than with HTB.
(Is it possible that packets may be dropped by the combination of PRIO and
TBF, and that's why the latency is so good???)


Yes - unless you add leaf qdiscs with a limit, HTB uses the qlen of the NIC,
which defaults to 1000 packets.
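
A minimal untested sketch, assuming an existing HTB leaf class 1:12 (the
classid, device and size are made up for illustration):

tc qdisc add dev eth0 parent 1:12 bfifo limit 20k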

Andy.


[LARTC] HTB and bursts

2007-05-11 Thread Pablo Fernandes Yahoo
Hey,

 

I saw a related question from another user on this list, but I still don't
understand how to do it or which values to use.

 

I have HTB "rules" at an ISP, and I control each customer this way:

 

Flush and 1:0 class

tc qdisc del dev eth0 root

tc qdisc add dev eth0 root handle 1:0 htb

tc class add dev eth0 parent 1:0 classid 1:1 htb rate 100mbit

tc qdisc del dev eth1 root

tc qdisc add dev eth1 root handle 1:0 htb

tc class add dev eth1 parent 1:0 classid 1:1 htb rate 100mbit

 

Upload and Download: user1

tc class add dev eth0 parent 1:1 classid 1:5 htb rate 150kbit ceil 150kbit

tc qdisc add dev eth0 parent 1:5 handle 5: sfq perturb 10

tc class add dev eth1 parent 1:1 classid 1:5 htb rate 50kbit ceil 50kbit

tc qdisc add dev eth1 parent 1:5 handle 5: sfq perturb 10

iptables -t mangle -A POSTROUTING --dest x.x.x.x -o eth0 -j CLASSIFY --set-class 1:5

iptables -t mangle -A FORWARD --src x.x.x.x -o eth1 -j CLASSIFY --set-class 1:5

 

Upload and Download: user2

tc class add dev eth0 parent 1:1 classid 1:8 htb rate 150kbit ceil 150kbit

tc qdisc add dev eth0 parent 1:8 handle 8: sfq perturb 10

tc class add dev eth1 parent 1:1 classid 1:8 htb rate 50kbit ceil 50kbit

tc qdisc add dev eth1 parent 1:8 handle 8: sfq perturb 10

iptables -t mangle -A POSTROUTING --dest y.y.y.y -o eth0 -j CLASSIFY --set-class 1:8

iptables -t mangle -A FORWARD --src y.y.y.y -o eth1 -j CLASSIFY --set-class 1:8

 

(...)

 

I would like the customer to have a stable 150kbit for downloads. But at
the beginning of a connection, I would like to give a 200kbit burst. This
would help navigation between web sites, downloading the gifs and text
during the burst and then dropping back to just 150kbit (thinking of a big
download, for example). Is it possible? We have more products (100kbit,
150kbit, 200kbit, 300kbit, 450kbit, 600kbit, 1mbit, 1.5mbit, 2mbit).

 

Thanks for any help in advance.

 

 

Pablo Fernandes

 



RE: [LARTC] PRIO and TBF is much better than HTB??

2007-05-11 Thread Flechsenhaar, Jon J
Just to comment: yes, you will get better latency with prio and tbf. However, 
they were created for different end goals. HTB can create a class structure 
that breaks your link bandwidth up into different classes. The prio setting 
in HTB determines which class gets served first if there is additional 
bandwidth; however, all classes will get their guaranteed rates. This fits 
well into DiffServ.
 
Prio is just priority. A higher prio class will starve out a lower prio class. 
There are no guaranteed rates and no class structure, only qdiscs.
 
TBF is purely a rate limiter. Use it to slow down an interface. Again, no 
class structure.
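 
A minimal sketch of that (device and values assumed):
 
tc qdisc add dev eth0 root tbf rate 1000kbit burst 10k latency 50ms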
 
http://opalsoft.net/qos/DS.htm
 
The above link is a must if you're working with QoS on Linux.
Jon Flechsenhaar 
Boeing WNW Team 
Network Services 
(714)-762-1231 
202-E7 
 



From: Salim S I [mailto:[EMAIL PROTECTED] 
Sent: Thursday, May 10, 2007 11:26 PM
To: lartc@mailman.ds9a.nl
Subject: RE: [LARTC] PRIO and TBF is much better than HTB??


HTB's priority and PRIO qdisc are very different.
 
PRIO qdisc will definitely give better latency for your high priority traffic, 
since the qdisc is designed for the purpose of 'priority'. In theory it will 
even starve the low priority traffic, if high prio traffic is waiting to go out.
 
HTB's priority is different; it only gives relative priority. A high prio class 
in a level is dequeued first during the round-robin/WRR cycle, but lower 
priority classes will also be fairly serviced, unlike with the PRIO qdisc.
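 
A minimal PRIO sketch (device and port match assumed) - band 1:1 is always
emptied before band 1:2, which is what gives the strict behavior described
above:
 
tc qdisc add dev eth0 root handle 1: prio bands 2 priomap 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 80 0xffff flowid 1:1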
 
 
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Simo
Sent: Thursday, May 10, 2007 11:13 PM
To: lartc@mailman.ds9a.nl
Subject: [LARTC] PRIO and TBF is much better than HTB??
 
Hello mailing list,
I stand before a mystery and cannot explain it :-). I want to do shaping and 
prioritization, and I have done the following configurations and simulations. 
I can't explain why the combination of PRIO and TBF is much better than HTB 
(with the prio parameter) alone or in combination with SFQ.
Here are my example configurations: 2 traffic classes, http (80 = 0x50) and 
ssh (22 = 0x16), and in my example I want to prioritize the http traffic.
HTB: the results of the simulation are here: 
HTB cumulative: http://simo.mix4web.de/up/htb_cumul.jpg
HTB delay: http://simo.mix4web.de/up/htb_delay.jpg
HTB with prio parameter cumulative: 
http://simo.mix4web.de/up/htb_cumul_prio_paramter.jpg
HTB with prio parameter delay: 
http://simo.mix4web.de/up/htb_delay_prio_parameter.jpg
 
#define UPLOAD 1000kbps
dev eth0 1000 {
    egress {
        class ( <$high> ) if tcp_dport == 80;
        class ( <$low> )  if tcp_dport == 22;
        htb () {
            class ( rate UPLOAD, ceil UPLOAD ) {
                /* with the prio parameter: $high = class ( rate 700kbps, ceil UPLOAD, prio 0 ); */
                $high = class ( rate 700kbps, ceil UPLOAD );
                /* with the prio parameter: $low = class ( rate 300kbps, ceil UPLOAD, prio 0 ); */
                $low = class ( rate 300kbps, ceil UPLOAD, prio 1 );
            }
        }
    }
}
 
/* 1Mbit 0.0008 = 100*8/10^6  */
every 0.0008s send TCP_PCK($tcp_dport=22) 0 x 60
/* 800kbit/s  */
every 0.001s send TCP_PCK($tcp_dport=80) 0 x 60
time 2s
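 
[For comparison, a rough plain-tc sketch of the HTB-with-prio variant -
untested; the device is assumed, and kbit is used on the assumption that
kilobits were intended (in tc, kbps means kilobytes/s):
 
tc qdisc add dev eth0 root handle 1: htb
tc class add dev eth0 parent 1: classid 1:1 htb rate 1000kbit ceil 1000kbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 700kbit ceil 1000kbit prio 0
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 300kbit ceil 1000kbit prio 1
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 80 0xffff flowid 1:10
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 22 0xffff flowid 1:20]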
 
 
 
 
PRIO and TBF:
PRIO and TBF cumulative: http://simo.mix4web.de/up/prio_tbf_cumul.jpg
PRIO and TBF delay: http://simo.mix4web.de/up/prio_tbf_delay.jpg
 
#define UPLOAD 1000kbps
 
dev eth0 1000 {
    egress {
        class ( <$high> ) if tcp_dport == 80;
        class ( <$low> )  if tcp_dport == 22;
        prio {
            $high = class { tbf ( rate 700kbps, burst 1510B, mtu 1510B, limit 3000B ); }
            $low  = class { tbf ( rate 300kbps, burst 1510B, mtu 1510B, limit 3000B ); }
        }
    }
}
 
/* 1Mbit 0.0008 = 100*8/10^6  */
every 0.0008s send TCP_PCK($tcp_dport=22) 0 x 60
/* 800kbit/s  */
every 0.001s send TCP_PCK($tcp_dport=80) 0 x 60
time 2s
 
 
 
The delay with the combination of PRIO and TBF is much better than with HTB. 
(Is it possible that packets may be dropped by the combination of PRIO and TBF, 
and that's why the latency is so good???)
 
Do you have any idea???
 
thanks
simo
 
-
In a world without walls who needs gates and windows?
 


RE: [LARTC] PRIO and TBF is much better than HTB??

2007-05-11 Thread Simo
Hi, 

Thanks a lot for your explanations. :) I was looking for an advantage of HTB
over the combination PRIO+TBF, because that combination seemed better to me.
But I had forgotten ;) that with HTB the unused tokens can be distributed
fairly to the other classes, so the unused bandwidth is shared fairly among
them, and that is not the case with the combination PRIO+TBF. That's why I
would prefer to use HTB.
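
[A minimal borrowing sketch of that point - device and numbers assumed;
each class is guaranteed its rate but may borrow up to ceil while the
other is idle:

tc qdisc add dev eth0 root handle 1: htb
tc class add dev eth0 parent 1: classid 1:1 htb rate 1000kbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 700kbit ceil 1000kbit
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 300kbit ceil 1000kbit]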

 

Simo



[LARTC] Debian - Vlan - Route problem?

2007-05-11 Thread Jeremy SALMON

Hi,

I'm completely lost with the VLAN and route configuration on my Debian box.

This is my architecture:

eth1.401  - Voice VLAN    - 10.150.11.90 / 255.255.255.240
eth1.2338 - Public IP     - 84.16.x.x / 255.255.255.128
eth2      - Local Network - 192.168.1.1 / 255.255.255.0

All three interfaces terminate on the BOX.

In this box I use:
   - NAT to allow the eth2 clients to connect to the Internet from 84.16.x.x
   - Asterisk. Phones are in the eth2 network; the SIP provider is on eth1.401

No default gateway is set on the network cards.

A simple script creates the routes and sets up NAT and other things...

===== SCRIPT =====

# Activate IP Forward
echo 1 > /proc/sys/net/ipv4/ip_forward

# Init Iptables
iptables -F
iptables -t nat -F

# NAT
iptables -t nat -A POSTROUTING -o eth0.2338 -s 192.168.1.0/24 -d ! 10.0.0.0/8 -j SNAT --to 84.16.x.x


# Add route for Internet Traffic
route add default gw 84.16.x.x
# Add route for my SIP provider. Route all traffic to 10.0.0.0
route add -net 10.0.0.0 netmask 255.0.0.0 gw 10.150.11.1

===== END OF SCRIPT =====

I have a sip phone 192.168.1.200 gateway 192.168.1.1
I have my notebook 192.168.1.100 gateway 192.168.1.1

When I only ping an external IP (for example 212.217.0.1) from my laptop,
everything is OK; eth1.2338 is in use.
When I only make a call through the SIP provider 10.x.x.x, everything is OK;
eth1.401 is in use.

So it seems the routes are working.

But when, for example, I make a call and during this call I ping
212.217.0.1, the ping loses 95% of its packets. And immediately after I
hang up the phone, the ping starts working again.

In IPTRAF I see all the ICMP packets sent through eth1.2338, and all the
UDP phone traffic sent through eth1.401.

But it seems the ping doesn't receive the responses, or the responses
arrive on eth1.401.

When I ping 212.217.0.1 and make a call during the ping, all the incoming
UDP traffic is lost...

Can someone help me with this configuration? I'm completely lost.

Thanks in advance,
Jeremy



RE: [LARTC] PRIO and TBF is much better than HTB??

2007-05-11 Thread Salim S I
That is why I said 'in theory'. Using the PRIO qdisc, I have never been able
to achieve starvation of low priority traffic. I have tested with the same
rates for both high and low prio traffic, and did not see the high priority
traffic really dominating. Maybe a high rate of high prio traffic combined
with a low rate of low prio traffic would achieve this; I don't know.
The cumulative effect you see is more likely due to errant behavior, not the
intended behavior of the PRIO qdisc. I may be wrong here; I am speaking only
from my experience. You have to decide whether to depend on this
unintentional, but very common, behavior or not. Another thing is, the PRIO
qdisc documentation lists a known bug: a high rate of low priority traffic
will starve high priority traffic. So if all goes according to the known
documentation, 'some' of your traffic will starve under 'some'
condition. :-)
 
But yes, TBF+PRIO is the preferred solution for latency-sensitive
applications like voice/video. In such cases, one doesn't care whether the
non-realtime traffic is starved or not. The PRIO algorithm is designed
to 'empty' the high priority queue first; HTB only ensures that the high
priority queue is 'serviced' first.
HTB uses a fair queuing algorithm. It is not really suited to prioritizing
traffic, i.e. to giving absolute priority. Still, you may take a look at the
wondershaper script, which prioritizes some traffic using HTB.
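 
[A minimal untested TBF-under-PRIO sketch of that idea - device, rates,
buffer values and the port match are all assumed:
 
tc qdisc add dev eth0 root handle 1: prio bands 2 priomap 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
tc qdisc add dev eth0 parent 1:1 handle 10: tbf rate 700kbit burst 10k latency 50ms
tc qdisc add dev eth0 parent 1:2 handle 20: tbf rate 300kbit burst 10k latency 50ms
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 80 0xffff flowid 1:1]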
 
-Original Message-
From: Simo [mailto:[EMAIL PROTECTED] 
Sent: Friday, May 11, 2007 4:37 PM
To: 'Salim S I'; lartc@mailman.ds9a.nl
Subject: RE: [LARTC] PRIO and TBF is much better than HTB??
 
Hi,
Thanks for your answer.
You are right concerning the PRIO qdisc, but what I did not understand
is that the combination (PRIO+TBF) gave shaping nearly exactly the same
as HTB, only with better latency. One sees this in the comparison of the
two following illustrations from my simulation: 
HTB with prio parameter cumulative:
http://simo.mix4web.de/up/htb_cumul_prio_paramter.jpg
PRIO and TBF cumulative: http://simo.mix4web.de/up/prio_tbf_cumul.jpg

>
> theory it will even starve the low priority traffic, if high prio traffic is waiting to go out.
>

In the first illustration you can see that the low priority traffic has
also been served (nearly exactly the same as with HTB), because of the
use of PRIO in combination with TBF.
But the latency is much better, if you compare the following
illustrations:
HTB with prio parameter delay:
http://simo.mix4web.de/up/htb_delay_prio_parameter.jpg
PRIO and TBF delay: http://simo.mix4web.de/up/prio_tbf_delay.jpg

I think the overhead of the HTB algorithm is larger and the scheduler
keeps the packets a little longer in the queues.

Simo
 


RE: [LARTC] PRIO and TBF is much better than HTB??

2007-05-11 Thread Simo
Hi,

Thanks for your answer.

You are right concerning the PRIO qdisc, but what I did not understand is
that the combination (PRIO+TBF) gave shaping nearly exactly the same as
with HTB, only with better latency. One sees this in the comparison of the
two following illustrations from my simulation: 
HTB with prio parameter cumulative:
http://simo.mix4web.de/up/htb_cumul_prio_paramter.jpg
PRIO and TBF cumulative: http://simo.mix4web.de/up/prio_tbf_cumul.jpg

>
> theory it will even starve the low priority traffic, if high prio traffic is waiting to go out.
>


In the first illustration you can see that the low priority traffic has also
been served (nearly exactly the same as with HTB), because of the use of
PRIO in combination with TBF.

But the latency is much better, if you compare the following illustrations:

HTB with prio parameter delay:
http://simo.mix4web.de/up/htb_delay_prio_parameter.jpg
PRIO and TBF delay: http://simo.mix4web.de/up/prio_tbf_delay.jpg

I think the overhead of the HTB algorithm is larger and the scheduler keeps
the packets a little longer in the queues.

Simo



 
