Re: [LARTC] HTB+SFQ

2007-04-27 Thread Alejandro Ramos Encinosa
On Thursday 26 April 2007 19:34, terraja-based wrote:
 Hi folks,
Hi!

Hola!

 I have a problem using HTB and SFQ.
 The first script below, showing a simple configuration, works fine!
 But the second example does not work, because I added more code to
 classify the traffic by protocol (HTTP and FTP in this case).
 Can somebody tell me the errors?
 Thanks in advance.

 NOTICE: the IMQ device is associated with eth1, my external interface.

 Working script:

 
 #!/bin/sh

 ifconfig imq0 up
 tc qdisc add dev imq0 handle 1: root htb default 1
 tc class add dev imq0 parent 1: classid 1:1 htb rate 500kbit ceil 2000kbit
 tc qdisc add dev imq0 parent 1:1 handle 2 sfq

 iptables -t mangle -A PREROUTING -i eth1 -j IMQ --todev 0
 tc filter add dev imq0 parent 1: prio 0 protocol ip handle 2 fw flowid 1:1
 
...could you tell me why you filter by mark 2? Are you trying to match the
packets that iptables was not able to classify?

 Script that does NOT work:

 
 #!/bin/sh

 ifconfig imq0 up
 tc qdisc add dev imq0 handle 1: root htb default 1
 tc class add dev imq0 parent 1: classid 1:1 htb rate 500kbit ceil 2000kbit

 tc class add dev imq0 parent 1:1 classid 1:10 htb rate 100kbit ceil 2000kbit
 tc class add dev imq0 parent 1:1 classid 1:20 htb rate 100kbit ceil 2000kbit

 tc qdisc add dev imq0 parent 1:10 handle 2 sfq
 tc qdisc add dev imq0 parent 1:20 handle 3 sfq

 iptables -t mangle -A PREROUTING -i eth1 -j IMQ --todev 0
 tc filter add dev imq0 parent 1:1 prio 0 protocol ip handle 2 fw flowid 1:10
 tc filter add dev imq0 parent 1:1 prio 1 protocol ip handle 3 fw flowid 1:20
Hmm, do you really want these filters as children of 1:1 (a child of the root)
instead of 1: (the root)? If you attach these filters to 1:1, the traffic will
not flow through the tc tree: you need to redirect the packets arriving at the
root into one of its children.

 Later, with the second script, I was going to add the iptables MARKs at
 the end, but I didn't, because not even when I show the qdiscs
 (tc qdisc show) do I see the traffic classified. That is, I was going to
 send HTTP traffic to class 1:10 and FTP to class 1:20, which is done
 precisely with iptables, but I repeat: I didn't do it because I don't see
 the traffic broken down beforehand in the qdisc when I generate traffic
 using both protocols.
 That's the issue: I can't classify the traffic in order to mark it later.
 That's the crux of the matter, as the old ladies used to say...
 Any ideas?
For some reason, when you redirect packets by 'default' to a child class,
the redirected packets seem to go directly to the attached qdisc, so filters
with the default class as parent will not work. I recommend something like
the rules below:

--8<-----8<--
tc qdisc add dev imq0 handle 1: root htb default 30
tc class add dev imq0 parent 1: classid 1:1 htb rate 500kbit ceil 2000kbit

tc class add dev imq0 parent 1:1 classid 1:10 htb rate 100kbit ceil 2000kbit 
tc class add dev imq0 parent 1:1 classid 1:20 htb rate 100kbit ceil 2000kbit 
tc class add dev imq0 parent 1:1 classid 1:30 htb rate 300kbit ceil 2000kbit

tc qdisc add dev imq0 parent 1:10 handle 2 sfq
tc qdisc add dev imq0 parent 1:20 handle 3 sfq

iptables -t mangle -A PREROUTING -i eth1 -j IMQ --todev 0
tc filter add dev imq0 parent 1: prio 0 protocol ip handle 2 fw flowid 1:10
tc filter add dev imq0 parent 1: prio 1 protocol ip handle 3 fw flowid 1:20
--8<-----8<--
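The fw filters above match marks 2 and 3, so the iptables MARK rules the
original poster planned to add later would look something like this. This is
a hedged sketch, not from the thread: the port numbers, and matching on the
remote source port for inbound traffic, are my assumptions.

```shell
# Assumption: HTTP responses arrive from remote port 80, FTP control
# traffic from remote port 21. Insert the MARK rules at the top so the
# mark is set before the packet is redirected to imq0.
iptables -t mangle -I PREROUTING -i eth1 -p tcp --sport 80 -j MARK --set-mark 2
iptables -t mangle -I PREROUTING -i eth1 -p tcp --sport 21 -j MARK --set-mark 3
```

With these in place, mark 2 traffic should land in class 1:10 and mark 3
traffic in 1:20, while everything else falls into the 1:30 default.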

 Needless to say, iptables, iproute and the kernel are correctly patched
 and up to date; otherwise the modules wouldn't even load, or it would
 give an error.

PS: by the way, I guess you need to change your 1:1 htb class parameters to
match the real bandwidth limitations of the device (eth1 in this case).
-- 
Alejandro Ramos Encinosa [EMAIL PROTECTED]
Fac. Matemática Computación

[LARTC] HTB+SFQ

2007-04-26 Thread terraja-based

Hi folks,

I have a problem using HTB and SFQ.
The first script below, showing a simple configuration, works fine!
But the second example does not work, because I added more code to
classify the traffic by protocol (HTTP and FTP in this case).
Can somebody tell me the errors?
Thanks in advance.

NOTICE: the IMQ device is associated with eth1, my external interface.

Working script:


#!/bin/sh

ifconfig imq0 up
tc qdisc add dev imq0 handle 1: root htb default 1
tc class add dev imq0 parent 1: classid 1:1 htb rate 500kbit ceil 2000kbit
tc qdisc add dev imq0 parent 1:1 handle 2 sfq

iptables -t mangle -A PREROUTING -i eth1 -j IMQ --todev 0
tc filter add dev imq0 parent 1: prio 0 protocol ip handle 2 fw flowid 1:1



Script that does NOT work:


#!/bin/sh

ifconfig imq0 up
tc qdisc add dev imq0 handle 1: root htb default 1
tc class add dev imq0 parent 1: classid 1:1 htb rate 500kbit ceil 2000kbit

tc class add dev imq0 parent 1:1 classid 1:10 htb rate 100kbit ceil 2000kbit
tc class add dev imq0 parent 1:1 classid 1:20 htb rate 100kbit ceil 2000kbit

tc qdisc add dev imq0 parent 1:10 handle 2 sfq
tc qdisc add dev imq0 parent 1:20 handle 3 sfq

iptables -t mangle -A PREROUTING -i eth1 -j IMQ --todev 0
tc filter add dev imq0 parent 1:1 prio 0 protocol ip handle 2 fw flowid 1:10
tc filter add dev imq0 parent 1:1 prio 1 protocol ip handle 3 fw flowid 1:20



Later, with the second script, I was going to add the iptables MARKs at the
end, but I didn't, because not even when I show the qdiscs (tc qdisc show)
do I see the traffic classified. That is, I was going to send HTTP traffic
to class 1:10 and FTP to class 1:20, which is done precisely with iptables,
but I repeat: I didn't do it because I don't see the traffic broken down
beforehand in the qdisc when I generate traffic using both protocols.
That's the issue: I can't classify the traffic in order to mark it later.
That's the crux of the matter, as the old ladies used to say...
Any ideas?


Needless to say, iptables, iproute and the kernel are correctly patched
and up to date; otherwise the modules wouldn't even load, or it would
give an error.


--
terraja-based
___
LARTC mailing list
LARTC@mailman.ds9a.nl
http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc


Re: [LARTC] HTB/SFQ dequeueing in pairs

2004-01-27 Thread Andy Furniss
I got this reply from don and would rather answer on-list so more people
have a chance to correct any of my misconceptions :-)

[this message off list - feel free to forward it, but leave out my address]

  I wanted to see where in a slot the packets got dropped when the queue
  was full. (e)sfq drops from the longest slot to make space for an
  incoming packet, so it's not tail drop as such, but the results show me
  it does drop from the tail of the slot - which, if you are trying to
  shape inbound, is a PITA as TCP slow start grows exponentially and
What's PITA ?
Pain in the arse.

  overflows into my ISP/telco's buffer, causing a latency bump. I think it
  would be a lot nicer if it head-dropped, to make the sender go into
  congestion control quicker.
The fact that the queue grows means that the packets are delayed, and
that's supposed to influence the speed of TCP.
Yes, but as I understand it, during slow start the sender's cwnd doubles
per RTT and doesn't stop until it has sent enough to fill my advertised
window (which Linux grows to 32k quite quickly) or a packet is lost and
three duplicate ACKs are received, at which point it goes into congestion
control and shrinks its cwnd.
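The slow-start growth described above is easy to put numbers on. A small
sketch (mine, not from the thread, with an assumed 1460-byte MSS) of how few
round trips it takes the doubling cwnd to fill a 32k advertised window:

```python
MSS = 1460               # assumed segment size in bytes
ADV_WINDOW = 32 * 1024   # the 32k advertised window mentioned above

cwnd = MSS               # slow start begins at one segment
rtts = 0
while cwnd < ADV_WINDOW:
    cwnd *= 2            # cwnd doubles once per round-trip time
    rtts += 1

print(rtts)              # -> 5: five RTTs to fill the window
```

So a single transfer can go from one packet per RTT to saturating the link in
a handful of round trips, which is why the ISP-side buffer fills so fast.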

Head drop seems absurd, since most of the packets behind the dropped
packet will be wasted - the TCP on the other side will only keep a few
packets past the one that's missing.
I think the opposite is the case: the fact that the packet is tail-dropped
means I don't start sending dups for the time it takes to get to the head
of the queue. The sender meanwhile is transmitting a lot of packets,
most of which I drop after they have already used up some of my bandwidth.
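Andy's point can be illustrated with a toy model (mine, not from the thread):
count how many already-queued packets the receiver must still consume before
it sees the hole left by a drop, under tail drop versus head drop.

```python
def packets_before_gap_seen(queue, drop_index):
    """Drop queue[drop_index]; count packets delivered in order before
    the receiver notices a missing sequence number."""
    dropped = queue[drop_index]
    delivered = [seq for seq in queue if seq != dropped]
    for i, seq in enumerate(delivered):
        if seq > dropped:        # first packet past the hole
            return i
    return len(delivered)        # gap only visible after the queue drains

queue = list(range(10))          # 10 queued packets, seq numbers 0..9
print(packets_before_gap_seen(queue, 9))   # tail drop -> 9: whole queue first
print(packets_before_gap_seen(queue, 0))   # head drop -> 0: gap seen at once
```

With head drop the duplicate ACKs start a full queue-drain earlier, which is
exactly the faster congestion-control reaction argued for above.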

  I noticed that the packets were being released in pairs, which probably
  doesn't help either.
I don't see that it should hurt.
The sender during slow start is increasing exponentially per ACK
received; it would be nicer to space them out.

How big are the packets?  Are there other packets in other buckets or
in other queues?  Also how are the packets being generated?
I'd expect that for something like FTP, where you generate a steady stream
of large packets, they would be released one at a time, since your
quantum is approximately the size of one large packet.
On the other hand if you generate two small packets at a time then
maybe the queue is not the bottleneck.
It could also be something in the device driver.
You can probably solve this problem by adding printk's to tell you
when various things happen.
This was a test - the packets are big and there is no other traffic. I
am in the early days of experimenting. In real use I would be using
something based on Alexander Clouter's jdg-script with his RED settings -
but even if I throttle to 65% down, with my low bandwidth, running a
BitTorrent - or just browsing JPEG-heavy sites - will hurt my latency
too much to play Half-Life. Though most users may be quite happy with
the results. Whatever queue I use for downstream has to live behind a
FIFO whose bandwidth isn't much more than what I would like to shape to,
so it may not behave as the textbook says. If I had 2M down, I would not
have a problem - what is a 300ms bump would only be 50ms, and I could
live with that.

Andy.



[LARTC] HTB/SFQ dequeueing in pairs

2004-01-26 Thread Andy Furniss
I set up a little test to see what the behaviour of (e)sfq was - because
I couldn't work it out from the source :-).

I wanted to see where in a slot the packets got dropped when the queue
was full. (e)sfq drops from the longest slot to make space for an
incoming packet, so it's not tail drop as such, but the results show me
it does drop from the tail of the slot - which, if you are trying to
shape inbound, is a PITA as TCP slow start grows exponentially and
overflows into my ISP/telco's buffer, causing a latency bump. I think it
would be a lot nicer if it head-dropped, to make the sender go into
congestion control quicker.
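A rough model of the behaviour described above (my reading of it, not the
kernel source): when the queue is full, (e)sfq finds the longest slot and
drops from its tail to make room for the incoming packet.

```python
from collections import deque

class ToySFQ:
    def __init__(self, limit):
        self.limit = limit
        self.slots = {}          # flow id -> deque of packets
        self.count = 0

    def enqueue(self, flow, pkt):
        if self.count >= self.limit:
            # find the longest slot and tail-drop from it
            longest = max(self.slots, key=lambda f: len(self.slots[f]))
            self.slots[longest].pop()
            self.count -= 1
        self.slots.setdefault(flow, deque()).append(pkt)
        self.count += 1

q = ToySFQ(limit=3)
for pkt in range(4):
    q.enqueue("bulk", pkt)       # 4th enqueue forces a drop
print(list(q.slots["bulk"]))     # -> [0, 1, 3]: packet 2 was tail-dropped
```

The packet nearest the head survives, so the sender's gap notification is
delayed by a full slot's worth of queue, matching the complaint above.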

However this is not the reason for this post. I tested by capturing with 
tcpdump before and after the queue.
I noticed that the packets were being released in pairs, which probably 
doesn't help either.
I assume it is htb that calls esfq to dequeue a packet - but I don't know.

For the test my DWIFLIMIT bandwidth was set at 51kbit/s, which is 10% of
my bandwidth.
My MTU is set at 1478, as it's slightly more efficient for ADSL using
pppoa/vcmux in the UK.

I used -

$TC class add dev $DWIF parent 1:2 classid 1:21 htb rate $[$DWIFLIMIT/2]kbit \
    ceil ${DWIFLIMIT}kbit burst 0b cburst 0b mtu 1478 quantum 1478 prio 1

$TC qdisc add dev $DWIF parent 1:21 handle 21: esfq perturb 0 hash classic limit 10

This is part of 'tc -s -d class show dev imq1':

class htb 1:21 parent 1:2 leaf 21: prio 1 quantum 1478 rate 25Kbit ceil 51Kbit burst 1507b/8 mpu 0b cburst 1540b/8 mpu 0b level 0

Is there anything obvious here that would cause the packets to dequeue
in pairs?

TIA

Andy.



[LARTC] htb, sfq question

2003-06-26 Thread Ruslan Spivak
Hello.

I have a small question: there is one leaf class (htb) with an sfq qdisc
attached to it. Using iptables, I mark packets with different source IPs
with mark 1 to send them into the above class. Can you tell me: will they
fairly divide that class's rate?
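The usual answer is "yes, approximately": sfq hashes each conversation into a
bucket and serves busy buckets round-robin, regardless of the fw mark. A toy
sketch (mine, not from the thread; the hash is a stand-in, since real sfq
hashes src/dst/protocol with a periodically perturbed hash):

```python
from collections import deque
from itertools import cycle

def toy_hash(src_ip):
    # stand-in for sfq's real (perturbed) flow hash
    return sum(int(octet) for octet in src_ip.split(".")) % 16

# three sources, all carrying the same iptables mark 1
buckets = {}
for src in ("10.0.0.1", "10.0.0.2", "10.0.0.3"):
    buckets[toy_hash(src)] = deque(f"{src}-pkt{i}" for i in range(2))

order = []
for b in cycle(list(buckets)):           # round-robin over the buckets
    if buckets[b]:
        order.append(buckets[b].popleft())
    if not any(buckets.values()):
        break

print(order)   # packets alternate between the three sources
```

Each source drains at the same pace, so N active sources sharing one htb
class each see roughly rate/N, subject to occasional hash collisions putting
two flows in one bucket.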

Thanks in advance.

Best regards,
Ruslan