RE: [LARTC] HTB Rate and Prio (continued)
Actually, doing some more tests with all traffic classified, I can reach 1700 kbit as rate/ceil; at this rate I must set the prio values to get good results. Doing more tests — I didn't know HTB was so sensitive to the max rate/ceil... I'll post later on.
___
LARTC mailing list
LARTC@mailman.ds9a.nl
http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc
RE: [LARTC] HTB Rate and Prio (continued)
> It depends on what rate you are really synced at and what extra
> overheads/encapsulation your SDSL line has.
>
> It may be a bit different for SDSL - I only know ADSL, but as an
> example, for me, an empty ACK which HTB will see as 40 bytes (ignoring
> timestamps/SACKs) will actually use 2 ATM cells = 106 bytes on my line;
> a 1500 byte IP packet will use 32 cells = 1696 bytes.
>
> Which means that if I tweaked by experimenting with my rate using bulk
> traffic and arrived at some figure which seemed OK, things could still go
> overlimits when the traffic consists of a lot of small packets.
>
> There are patches to work things out per packet and a thesis giving
> various overheads at
>
> http://www.adsl-optimizer.dk/
>
> but for now just back off - 1800 is probably OK for netperf tests.
>
> I am talking about egress here - if you are going to shape ingress
> as well you need to back off even more, as you are trying to shape from
> the wrong end of the bottleneck (which can't be done perfectly anyway)
> to build up queues to shape with.
>
> Andy.

Thanks for all these tips. Before I try everything you said, I was going to test my line at 500 kbit rate/ceil, just to take one variable out of the equation, and to my surprise the shaping works well at 500 kbit. I did more tests and it works well up to somewhere between 1000 and 1200 kbit: at 1000 it works, at 1200 the shaping is gone and the problem is back... What I don't understand here is that I can go up to 1800 or higher with iptraf and some netperf tests... So I'm really lost now; what can I do? I can't shape only 1000 and lose 800 kbit. I really need some advice on what can be done here, I'm going mad :) Thanks, Gael.
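Andy's cell arithmetic above can be checked with a small helper. This is only a sketch, assuming PPPoA/VC-mux encapsulation (2 bytes of PPP header plus the 8-byte AAL5 trailer per packet); other encapsulations add different fixed per-packet overheads, so the `overhead=10` figure is an assumption, not a measured value for any particular line.

```shell
# Bytes actually consumed on an ATM link by one IP packet.
# Assumes PPPoA/VC-mux: 2 bytes PPP header + 8 bytes AAL5 trailer (overhead=10).
# The payload is padded up to a whole number of 48-byte cells; each cell
# occupies 53 bytes on the wire.
atm_wire_bytes() {
    local ip_len=$1 overhead=10
    local cells=$(( (ip_len + overhead + 47) / 48 ))
    echo $(( cells * 53 ))
}

atm_wire_bytes 40     # empty ACK: 2 cells = 106 bytes
atm_wire_bytes 1500   # full-size packet: 32 cells = 1696 bytes
```

Under these assumptions the helper reproduces the figures quoted above: 106 bytes for a 40-byte ACK and 1696 bytes for a 1500-byte packet, which is why shaping tuned on bulk traffic can still go overlimits on small-packet traffic.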
RE: [LARTC] HTB Rate and Prio (continued)
Just tested with 1800 kbit as the rate/ceil of the line, with all the rates adjusted to match the total rate, but I have the same result: bandwidth seems to be shared just as if there were no QoS in place... I'll do a full round of tests today; it must be hidden somewhere :)

> -Original message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> On behalf of Gael Mauleon
> Sent: Wednesday, 13 July 2005 12:26
> To: lartc@mailman.ds9a.nl
> Subject: RE: [LARTC] HTB Rate and Prio (continued)
>
> OK, I tested the shaping on the SDSL line with netperf and
> a host outside.
>
> Same script as before; I classify the packets into qdiscs based on the
> source address in netfilter, and here are the results, with sfq.
> I'm positive the right traffic is going to the right class.
>
> TCP STREAM TEST to 81.57.243.113 (NORMAL)
> Recv   Send    Send
> Socket Socket  Message  Elapsed
> Size   Size    Size     Time     Throughput
> bytes  bytes   bytes    secs.    10^3bits/sec
>
> 87380   8192   8192     60.00    282.90
>
> TCP STREAM TEST to 81.57.243.113 (LOWPRIO)
> Recv   Send    Send
> Socket Socket  Message  Elapsed
> Size   Size    Size     Time     Throughput
> bytes  bytes   bytes    secs.    10^3bits/sec
>
> 87380   8192   8192     61.00    390.00
>
> In fact the rates are just shared; sometimes it's the LOWPRIO that has more,
> sometimes it's the NORMAL traffic...
>
> As a note, there was other traffic on the line at the same moment that I
> didn't classify, but I was expecting the ratio to be kept (LOWPRIO is prio 5,
> 50 kbit; NORMAL is prio 1, 200 kbit).
>
> Well, I need to investigate more; I don't understand why it doesn't work on
> the SDSL line...
>
> My installation is quite simple:
>
> Cisco Router <-> Linux Box (QOS) <-> LAN
>
> And I was testing egress traffic from LAN to internet, but it's the same
> for ingress.
>
> Can this happen if the rates I set are bigger than the actual line?
> And again, what is a good rate/ceil for a line sold as 2 Mbit SDSL?
>
> Gaël.
RE: [LARTC] HTB Rate and Prio (continued)
OK, I tested the shaping on the SDSL line with netperf and a host outside. Same script as before; I classify the packets into qdiscs based on the source address in netfilter, and here are the results, with sfq. I'm positive the right traffic is going to the right class.

TCP STREAM TEST to 81.57.243.113 (NORMAL)
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^3bits/sec

87380   8192   8192     60.00    282.90

TCP STREAM TEST to 81.57.243.113 (LOWPRIO)
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^3bits/sec

87380   8192   8192     61.00    390.00

In fact the rates are just shared; sometimes it's the LOWPRIO that has more, sometimes it's the NORMAL traffic...

As a note, there was other traffic on the line at the same moment that I didn't classify, but I was expecting the ratio to be kept (LOWPRIO is prio 5, 50 kbit; NORMAL is prio 1, 200 kbit).

Well, I need to investigate more; I don't understand why it doesn't work on the SDSL line...

My installation is quite simple:

Cisco Router <-> Linux Box (QOS) <-> LAN

And I was testing egress traffic from LAN to internet, but it's the same for ingress.

Can this happen if the rates I set are bigger than the actual line? And again, what is a good rate/ceil for a line sold as 2 Mbit SDSL?

Gaël.
RE: [LARTC] HTB Rate and Prio (continued)
> I had a go with what you posted there over LAN and with 2 TCP streams it
> behaves as expected (see below for exact test).
>
> Can you reproduce the failure shaping over a LAN rather than your
> internet connection?
>
> If your upstream bandwidth is sold as 2 meg then ceil 2000kbit is likely
> to be too high.
>
> You could also try specifying quantum 1500 on all the leaves, as you get
> it auto-calculated from rates otherwise (you can see them with tc -s -d
> class ls ...). It didn't affect my test though.
>
> If you are looking at HTB's rate counters, remember that they use a really
> long average (about 100 sec), so they can mislead.
>
> I tested below with two netperf TCP send tests to IPs I added to another
> PC on my LAN.
>
> # /usr/local/netperf/netperf -H 192.168.0.60 -f k -l 60 &
> # /usr/local/netperf/netperf -f k -H 192.168.0.102 -l 60 &
>
> which gave -
>
> Recv   Send    Send
> Socket Socket  Message  Elapsed
> Size   Size    Size     Time     Throughput
> bytes  bytes   bytes    secs.    10^3bits/sec
>
> 43689  16384   16384    60.09    1884.66
>
> Recv   Send    Send
> Socket Socket  Message  Elapsed
> Size   Size    Size     Time     Throughput
> bytes  bytes   bytes    secs.    10^3bits/sec
>
> 43689  16384   16384    60.22      51.58

I did the exact same test and it's working (10 kbit for the low prio was the only difference)!!
That's with pfifo ->

TCP STREAM TEST to 10.0.1.228
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^3bits/sec

87380   8192   8192     63.00      35.37

TCP STREAM TEST to 10.0.1.227
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^3bits/sec

87380   8192   8192     60.00    1897.27

That's with sfq ->

TCP STREAM TEST to 10.0.1.227
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^3bits/sec

87380   8192   8192     60.00    1918.02

TCP STREAM TEST to 10.0.1.228
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^3bits/sec

87380   8192   8192     60.00      28.40

10.0.1.228 was the low-prio IP. So everything worked fine for the test...!!! After that I tested with my original script; I just changed the netfilter rules to classify packets with the same rules I used for the netperf test, and I had the same good results... So it must be something with my 2M line or with the type of traffic I shape. I don't quite understand the concept of setting the rate of the line lower than its true value; can you explain this to me, and is the excess bandwidth lost? What is a good value for a 2M (SDSL) line? I'll try tomorrow to get a host outside with netperf so I can test the line itself. Again, thanks for your time and help, Andy.
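The question of why the configured rate must sit below the line's nominal speed comes down to rough arithmetic: the shaper must stay below what the line actually carries, and on an ATM-based line only 48 of every 53 bytes in a cell are payload. A sketch of that estimate (the 48/53 factor ignores per-packet cell padding and encapsulation headers, so the real usable rate is somewhat lower still):

```shell
# Rough payload ceiling for a nominal 2000kbit ATM line:
# only 48 of each 53-byte cell is payload.
line_kbit=2000
echo $(( line_kbit * 48 / 53 ))   # ~1811 kbit
```

This is why a figure like 1800 kbit is a sensible starting ceil for a line sold as 2 Mbit; the "lost" bandwidth is not wasted by HTB, it is overhead the line was always spending on cell headers and padding.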
[LARTC] HTB Rate and Prio (continued)
Hi again. I keep posting about my problem with HTB ->
http://mailman.ds9a.nl/pipermail/lartc/2005q3/016611.html
With a bit of searching I recently found the exact same problem I have, in the 2004 archives, with some graphs that explain it far better than I did ->
http://mailman.ds9a.nl/pipermail/lartc/2004q4/014519.html and http://mailman.ds9a.nl/pipermail/lartc/2004q4/014568.html
Unluckily there was no solution, or at least I didn't find it in the archives, so if anyone has a clue... I upgraded my box to the 2.6.12.2 kernel with the latest iproute2, but nothing changed; I still have my shaping problem. Thanks. Gael.
RE: [LARTC] HTB Rate and Prio
Thanks for the answer. I just tried that with a 10 kbit rate and no priority, but it doesn't seem to change anything in the behavior of the QoS... Well, to make it clear, here is the full info (sorry for the flood) ->

class htb 1:101 parent 1:10 leaf 101: prio 0 rate 25bit ceil 2000Kbit burst 1912b cburst 4Kb
 Sent 266737 bytes 372 pkts (dropped 0, overlimits 0)
 rate 48440bit 5pps
 lended: 265 borrowed: 107 giants: 0
 tokens: 87000 ctokens: 23695

class htb 1:202 parent 1:20 leaf 202: prio 0 rate 25bit ceil 2000Kbit burst 1912b cburst 4Kb
 Sent 39266 bytes 325 pkts (dropped 0, overlimits 0)
 rate 3800bit 3pps
 lended: 325 borrowed: 0 giants: 0
 tokens: 87522 ctokens: 23789

class htb 1:1 root rate 2000Kbit ceil 2000Kbit burst 4Kb cburst 4Kb
 Sent 43212627 bytes 42882 pkts (dropped 0, overlimits 0)
 rate 1765Kbit 212pps
 lended: 28370 borrowed: 0 giants: 0
 tokens: -17118 ctokens: -17118

class htb 1:10 parent 1:1 rate 75bit ceil 2000Kbit burst 2536b cburst 4Kb
 Sent 37117090 bytes 28128 pkts (dropped 0, overlimits 0)
 rate 1551Kbit 143pps
 lended: 7561 borrowed: 14583 giants: 0
 tokens: -71625 ctokens: -17118

class htb 1:203 parent 1:20 leaf 203: prio 0 rate 25bit ceil 2000Kbit burst 1912b cburst 4Kb
 Sent 0 bytes 0 pkts (dropped 0, overlimits 0)
 lended: 0 borrowed: 0 giants: 0
 tokens: 89625 ctokens: 24023

class htb 1:103 parent 1:10 leaf 103: prio 0 rate 25bit ceil 2000Kbit burst 1912b cburst 4Kb
 Sent 36716334 bytes 26167 pkts (dropped 0, overlimits 0)
 rate 1505Kbit 133pps backlog 4p
 lended: 4126 borrowed: 22037 giants: 0
 tokens: -114281 ctokens: -17118

class htb 1:20 parent 1:1 rate 75bit ceil 2000Kbit burst 2536b cburst 4Kb
 Sent 39266 bytes 325 pkts (dropped 0, overlimits 0)
 rate 3800bit 3pps
 lended: 0 borrowed: 0 giants: 0
 tokens: 39016 ctokens: 23789

class htb 1:102 parent 1:10 leaf 102: prio 0 rate 25bit ceil 2000Kbit burst 1912b cburst 4Kb
 Sent 138883 bytes 1593 pkts (dropped 0, overlimits 0)
 rate 3176bit 4pps
 lended: 1593 borrowed: 0 giants: 0
 tokens: 87750 ctokens: 23789

class htb 1:201 parent 1:20 leaf 201: prio 0 rate 25bit ceil 2000Kbit burst 1912b cburst 4Kb
 Sent 0 bytes 0 pkts (dropped 0, overlimits 0)
 lended: 0 borrowed: 0 giants: 0
 tokens: 89625 ctokens: 24023

class htb 1:40 parent 1:1 rate 40bit ceil 2000Kbit burst 2099b cburst 4Kb
 Sent 0 bytes 0 pkts (dropped 0, overlimits 0)
 lended: 0 borrowed: 0 giants: 0
 tokens: 61523 ctokens: 24023

class htb 1:50 parent 1:1 leaf 50: prio 0 rate 9bit ceil 2000Kbit burst 1711b cburst 4Kb
 Sent 3516 bytes 55 pkts (dropped 0, overlimits 0)
 rate 144bit
 lended: 55 borrowed: 0 giants: 0
 tokens: 214583 ctokens: 23648

class htb 1:402 parent 1:40 leaf 402: prio 0 rate 20bit ceil 2000Kbit burst 1849b cburst 4Kb
 Sent 0 bytes 0 pkts (dropped 0, overlimits 0)
 lended: 0 borrowed: 0 giants: 0
 tokens: 108398 ctokens: 24023

class htb 1:60 parent 1:1 leaf 60: prio 0 rate 1bit ceil 2000Kbit burst 1611b cburst 4Kb
 Sent 6052835 bytes 14376 pkts (dropped 0, overlimits 0)
 rate 210184bit 65pps backlog 2p
 lended: 587 borrowed: 13787 giants: 0
 tokens: -2173243 ctokens: 8339

class htb 1:403 parent 1:40 leaf 403: prio 0 rate 20bit ceil 2000Kbit burst 1849b cburst 4Kb
 Sent 0 bytes 0 pkts (dropped 0, overlimits 0)
 lended: 0 borrowed: 0 giants: 0
 tokens: 108398 ctokens: 24023

##

qdisc htb 1: dev imq1 r2q 10 default 1000 direct_packets_stat 11
 Sent 46576898 bytes 49981 pkts (dropped 0, overlimits 33216)
qdisc sfq 101: dev imq1 parent 1:101 limit 128p quantum 1500b perturb 10sec
 Sent 295953 bytes 742 pkts (dropped 0, overlimits 0)
qdisc sfq 102: dev imq1 parent 1:102 limit 128p quantum 1500b perturb 10sec
 Sent 163583 bytes 1984 pkts (dropped 0, overlimits 0)
qdisc sfq 103: dev imq1 parent 1:103 limit 128p quantum 1500b perturb 10sec
 Sent 38352553 bytes 27341 pkts (dropped 0, overlimits 0)
qdisc sfq 201: dev imq1 parent 1:201 limit 128p quantum 1500b perturb 10sec
 Sent 0 bytes 0 pkts (dropped 0, overlimits 0)
qdisc sfq 202: dev imq1 parent 1:202 limit 128p quantum 1500b perturb 10sec
 Sent 51981 bytes 415 pkts (dropped 0, overlimits 0)
qdisc sfq 203: dev imq1 parent 1:203 limit 128p quantum 1500b perturb 10sec
 Sent 47507 bytes 904 pkts (dropped 0, overlimits 0)
qdisc sfq 403: dev imq1 parent 1:403 limit 128p quantum 1500b perturb 10sec
 Sent 0 bytes 0 pkts (dropped 0, overlimits 0)
qdisc sfq 402: dev imq1 parent 1:402 limit 128p quantum 1500b perturb 10sec
 Sent 0 bytes 0 pkts (dropped 0, overlimits 0)
qdisc sfq 50: dev imq1 parent 1:50 limit 128p quantum 1500b perturb 10sec
 Sent 4089 bytes 64 pkts (dropped 0, overlimits 0)
qdisc sfq 60: dev imq1 parent 1:60 limit 128p quantum 1500b perturb 10sec
 Sent 7659072 bytes 18520 pkts (dropped 0, overlimits 0)

##

I changed the pfifo queues to sfq ones, as I seem to get better results with them, and cleared the priorities, but nothing changed... So the interesting things here are the 1:60
[LARTC] HTB Rate and Prio
Hi, I wanted to implement some QoS on my Linux box with HTB, but after some time spent on configuration and tests, I still don't manage to get correct results. Here are the details:

-ROOT 2000 kbits
  -HIGHPRIO SUBCLASS 50 kbits prio 0
  -SUBCLASS1 750 kbits prio 1
    -SERVICE1 250 kbits prio 1
    -SERVICE2 250 kbits prio 1
    -SERVICE3 250 kbits prio 1
  -SUBCLASS2 750 kbits prio 1
    -SERVICE1 250 kbits prio 1
    -SERVICE2 250 kbits prio 1
    -SERVICE3 250 kbits prio 1
  -SUBCLASS3 400 kbits prio 1
    -SERVICE1 200 kbits prio 1
    -SERVICE2 200 kbits prio 1
  -LOWPRIO SUBCLASS 50 kbits prio 5

Here are the details of the implementation; I only wrote out SUBCLASS1 because they all follow the same template.

tc qdisc add dev $QOSIN root handle 1:0 htb default 1000
tc class add dev $QOSIN parent 1:0 classid 1:1 htb rate 2000kbit

### SUBCLASS1
tc class add dev $QOSIN parent 1:1 classid 1:10 htb rate 750kbit ceil 2000kbit prio 1
tc class add dev $QOSIN parent 1:10 classid 1:101 htb rate 250kbit ceil 2000kbit prio 1
tc qdisc add dev $QOSIN parent 1:101 handle 101: pfifo limit 10
tc class add dev $QOSIN parent 1:10 classid 1:102 htb rate 250kbit ceil 2000kbit prio 1
tc qdisc add dev $QOSIN parent 1:102 handle 102: pfifo limit 10
tc class add dev $QOSIN parent 1:10 classid 1:103 htb rate 250kbit ceil 2000kbit prio 1
tc qdisc add dev $QOSIN parent 1:103 handle 103: pfifo limit 10
tc filter add dev $QOSIN parent 1:0 protocol ip handle $OUTPROD$MAIL fw flowid 1:101
tc filter add dev $QOSIN parent 1:0 protocol ip handle $OUTPROD$HTTP fw flowid 1:102
tc filter add dev $QOSIN parent 1:0 protocol ip handle $OUTPROD$FTP fw flowid 1:103
etc...

### HIGH PRIO ###
tc class add dev $QOSIN parent 1:1 classid 1:50 htb rate 50kbit ceil 2000kbit prio 0 quantum 1500
tc qdisc add dev $QOSIN parent 1:50 handle 50: pfifo limit 10
tc filter add dev $QOSIN parent 1:0 protocol ip handle $OUTPROD$HIGHPRIO fw flowid 1:50
tc filter add dev $QOSIN parent 1:0 protocol ip handle $OUTPOSTPROD$HIGHPRIO fw flowid 1:50
tc filter add dev $QOSIN parent 1:0 protocol ip handle $OUTDMZ$HIGHPRIO fw flowid 1:50

### LOW PRIO ###
tc class add dev $QOSIN parent 1:1 classid 1:60 htb rate 50kbit ceil 2000kbit prio 5 quantum 1500
tc qdisc add dev $QOSIN parent 1:60 handle 60: pfifo limit 10
tc filter add dev $QOSIN parent 1:0 protocol ip handle $OUTPROD$LOWPRIO fw flowid 1:60
tc filter add dev $QOSIN parent 1:0 protocol ip handle $OUTPOSTPROD$LOWPRIO fw flowid 1:60
tc filter add dev $QOSIN parent 1:0 protocol ip handle $OUTDMZ$LOWPRIO fw flowid 1:60

All traffic seems to go into the class it should, the stats look good, and if I change any of the ceil values the associated traffic is capped at the ceil I enter. Now with this configuration I expected that when one of the SUBCLASS or SERVICE classes wants more bandwidth than its rate, it can borrow it from root, and that it gets served before LOWPRIO and after HIGHPRIO. But it doesn't work at all. For example, I tried with only 2 flows: I have 500 kbit of LOWPRIO traffic currently going on, then I fire some SERVICE1 traffic from SUBCLASS1 that can theoretically take 2000 kbit, and instead of taking bandwidth from LOWPRIO, it just takes what is left... I'm surely missing something... Thanks for your help, and don't hesitate to ask for more info :)

Gael.
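The claim that "all traffic seems to go into the class it should" can be spot-checked while a test flow runs. These are standard tc/iptables inspection commands, not a fix; the device name is the script's own $QOSIN variable, and the mangle-table check assumes the fwmarks are set by MARK rules there, as is usual with `fw` filters.

```shell
# Per-class byte/packet counters, current rate, and lended/borrowed stats.
tc -s -d class show dev $QOSIN

# The fwmark-based filters as the kernel actually installed them.
tc filter show dev $QOSIN parent 1:0

# Packet/byte counters on the netfilter MARK rules (assumes marks are
# set in the mangle table), to confirm each rule is matching traffic.
iptables -t mangle -L -n -v
```

Watching the first command during a netperf run shows immediately whether the borrowing happens in the expected class and whether the `overlimits` counter climbs on the root qdisc.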