Re: [LARTC] 2 Questions on filtering incoming stuff
Damion de Soto wrote:
>> First is: Can I prioritise my "drops" on incoming traffic when the
>> link is overloaded? i.e. instead of just tail dropping, can I
>> "prefer" to drop certain classes of traffic? If so, do I do this by
>> setting up, say, an HTB tree on the incoming side, but with the only
>> action at the leaf being to drop?
>
> You can't set up HTB or any classful qdisc on incoming traffic, you
> can only create ingress policer filters. You can set up different
> filters with different priorities, to try and drop one particular
> type of traffic more than the others.

Thanks, this is helpful. Thinking about it though, the different filter
priorities aren't going to help too much? E.g. if I want to accept ACKs
first, then incoming SMTP, then other bulk downloads, then of course I
can set up prioritised "bands" by limiting some traffic more than the
rest. But I don't think a simple priority system will let me accept up
to the full bandwidth of each class while dropping in preferential
order? (Or do you think simply matching each with a 200Kb/s filter, in
priority order from highest to lowest, will do the trick?)

> If you're using a Linux gateway onto your LAN, then you can use an
> HTB qdisc on the outgoing (LAN) interface, which would do a better
> job.

Sure. Same problem for local traffic on that machine, though. However,
can you apply filters to aliased IP addresses, i.e. the virtual
interfaces such as eth0:1? Or do the filters only apply to the real
interfaces (which I think is true of iptables, for example)? This might
also be useful for setting up a bandwidth-filtering PC using only a
single network card (assuming you don't worry about people bypassing it
manually).

Thanks

Ed W
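To make the "priority order" idea concrete, here is a minimal sketch of
prioritised ingress policers; ppp0, the port and the rates are
placeholder assumptions, not tested values:

  tc qdisc add dev ppp0 handle ffff: ingress

  # prio 1: incoming SMTP (TCP port 25), policed generously
  tc filter add dev ppp0 parent ffff: protocol ip prio 1 u32 \
      match ip protocol 6 0xff match ip sport 25 0xffff \
      police rate 200kbit burst 10k drop flowid :1

  # prio 50: everything else, policed harder, so under load this
  # band is dropped before the SMTP one
  tc filter add dev ppp0 parent ffff: protocol ip prio 50 u32 \
      match ip src 0.0.0.0/0 \
      police rate 100kbit burst 10k drop flowid :2

As noted in the message above, each band is policed to a fixed rate;
neither can borrow the other's unused share, which is exactly the
limitation being discussed.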
Re: [LARTC] HTB, MPU, and suitable values
Jason Boxman wrote:
> On Monday 17 May 2004 17:23, Ed Wildgoose wrote:
>> Read the follow-ups to that post as well. Basically it's only an
>> approximation. The "MPU" is basically pointing out that your ADSL
>> stream is encapsulated in an ATM stream. ATM uses fixed-size 64 byte
>> packets. You need at least 2 of these, hence the 106 figure for MPU.
>> Now you also need to estimate overhead, which is going to be the
>> size of the header on those ATM packets.
>
> Now I'm confused. Is it 53 bytes or 64 bytes?
> http://www.faqs.org/docs/Linux-HOWTO/ADSL-Bandwidth-Management-HOWTO.html

You are right. Something happened and I somehow failed to divide 106 by
2 and get 53... I have been doing a load of code using 2^n all day, and
32/64, etc. were really on my mind just then. Sorry.

>> However, that still leaves the "wasted space" on the end of small
>> packets (eg those that take up 2.5 ATM cells, how much does the 0.5
>> take up?).
>>
>> I suggested a crude way to tweak that patch (easy to see how it
>> works if you look at the relevant lines in the orig file). However,
>> I don't even have a working QOS system so I haven't even compiled
>> it! Look up the specs for ATM though and you should be able to tweak
>> that suggested line change and get something.
>
> So the patch is supposed to increase the cost of dequeuing packets,
> then, provided you know what numbers to use?

Well, I haven't taken the time to trace that code, but from a 10-second
look at it, it appears to be simply accumulating the size of incoming
packets based on the actual size of the data. So I simply suggested
dividing by 53, rounding up, then adding on the "overhead" on a
per-packet basis, rather than a per-data-block basis.

Actually, having looked at your ADSL HOWTO link, of course the best
calculation would be to simply divide the amount of data by 48 (the
payload size of an ATM cell), then round up (since 0.5 cells means
needing 1 whole cell), then multiply this number by 53 (the size of an
ATM cell including its header). This would give the exact amount of
bandwidth. I would code this as:

    size = ( (int)((datasize-1)/48) + 1 ) * 53

You could hardcode something similar into your tc and see if it helps
(just remove the MPU and overhead code added by the existing patch).

If you are scared of looking at code, don't be. It really isn't as
scary as it might look!

Good luck. Interested to hear if it works...

Ed W
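To see that formula in action, a quick shell check (the AAL5 trailer
and encapsulation headers are deliberately ignored here, as in the
formula itself):

  datasize=1500
  cells=$(( (datasize - 1) / 48 + 1 ))  # round up to whole 48-byte cell payloads
  wire=$(( cells * 53 ))                # each cell is 53 bytes on the wire
  echo "$datasize bytes -> $cells cells -> $wire bytes"
  # prints: 1500 bytes -> 32 cells -> 1696 bytes

The small-packet case is where the rounding bites: 120 data bytes need
3 cells, i.e. 159 bytes on the wire, roughly 33% overhead.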
Re: [LARTC] 2 Questions on filtering incoming stuff
On Monday 17 May 2004 23:31, Ed Wildgoose wrote:
> First is: If so, do I do this by setting up, say, an HTB tree on the
> incoming side, but with the only action at the leaf being to drop?

Probably with IMQ. Another way would be modifying your ingress filters
so that they don't match packets you don't want to drop in any case.
However, this introduces new complications (for example, rate limiting
works only for packets the filter actually matches).

> There appears to be no way to tweak "window length" to throttle
> incoming data...

Dunno. I'd like to hear all about it if there is.

Andreas
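For reference, a rough sketch of the IMQ approach; it assumes the IMQ
kernel patch and the iptables IMQ target are installed (neither is in
the stock kernel), and the device name and rates are placeholders:

  # divert incoming packets on ppp0 through the imq0 pseudo-device
  iptables -t mangle -A PREROUTING -i ppp0 -j IMQ --todev 0
  ip link set imq0 up

  # imq0 now behaves like an outgoing interface, so a classful qdisc
  # such as HTB can be attached to it
  tc qdisc add dev imq0 root handle 1: htb default 20
  tc class add dev imq0 parent 1: classid 1:1 htb rate 500kbit
  tc class add dev imq0 parent 1:1 classid 1:10 htb rate 400kbit ceil 500kbit prio 0
  tc class add dev imq0 parent 1:1 classid 1:20 htb rate 100kbit ceil 500kbit prio 1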
Re: [LARTC] 2 Questions on filtering incoming stuff
Hi Ed,

> First is: Can I prioritise my "drops" on incoming traffic when the
> link is overloaded? i.e. instead of just tail dropping, can I
> "prefer" to drop certain classes of traffic? If so, do I do this by
> setting up, say, an HTB tree on the incoming side, but with the only
> action at the leaf being to drop?

You can't set up HTB or any classful qdisc on incoming traffic, you can
only create ingress policer filters. You can set up different filters
with different priorities, to try and drop one particular type of
traffic more than the others.

If you're using a Linux gateway onto your LAN, then you can use an HTB
qdisc on the outgoing (LAN) interface, which would do a better job.

regards
--
~~~
Damion de Soto - Software Engineer    email: [EMAIL PROTECTED]
SnapGear - A CyberGuard Company       ph:  +61 7 3435 2809
Custom Embedded Solutions             fax: +61 7 3891 3630
and Security Appliances               web: http://www.snapgear.com
~~~
--- Free Embedded Linux Distro at http://www.snapgear.org ---
Re: [LARTC] HTB MPU
Andy Furniss wrote:
> You can make HTB more accurate by setting HTB_HYSTERESIS to 0 in
> net/sched/sch_htb.c.
>
> To save time - if you built HTB as a module, you can probably (well,
> it worked for me) get away with editing htb.c, doing
>
>     make SUBDIRS=net/sched modules
>
> and replacing /lib/modules/[kversion]/kernel/net/sched/htb.o with the
> new htb.o from your source tree. If you are doing it live, stop
> shaping and check with lsmod that modprobe -r gets rid of the old
> htb.o (do it again if it's still there), then reload your shaping
> scripts.

Oops - htb.c / htb.o above should read sch_htb.c / sch_htb.o.

Andy.
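Spelled out as commands (with the filename correction applied), the
sequence might look like this; the paths assume a 2.4-era source tree
and eth0, so adjust to your setup:

  # after editing net/sched/sch_htb.c so HTB_HYSTERESIS is 0
  cd /usr/src/linux
  make SUBDIRS=net/sched modules
  cp net/sched/sch_htb.o /lib/modules/$(uname -r)/kernel/net/sched/

  # stop shaping, unload the old module, confirm it is gone
  tc qdisc del dev eth0 root
  modprobe -r sch_htb
  lsmod | grep htb    # run modprobe -r again if it is still listed

  # ... then rerun your shaping scripts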
Re: [LARTC] 2 Questions on filtering incoming stuff
Ed Wildgoose wrote:
> Two easy questions after having read the LARTC HOWTO document (which,
> by the way, is a *fantastic* document. Congratulations to all who
> contributed!)
>
> First is: Can I prioritise my "drops" on incoming traffic when the
> link is overloaded? i.e. instead of just tail dropping, can I
> "prefer" to drop certain classes of traffic?

Yes - you would queue before dropping.

> If so, do I do this by setting up, say, an HTB tree on the incoming
> side, but with the only action at the leaf being to drop?

You attach a queue to the leaf, which may drop.

> Second: Theoretically now. There appears to be no way to tweak
> "window length" to throttle incoming data... But in theory, would a
> module which delayed the outgoing ACKs have the same effect?

Queueing has much the same effect.

> Obviously this module would need some sort of packet accounting from
> the incoming interface in order to supply the outgoing filter with
> the info it needed (not even sure if this design makes it hard to
> implement such a thing?). Fast and selective ACKs and timeouts
> obviously make this very hard to implement as well...

Throttling by rwin manipulation would be nice - but complicated.

> However, the main point is that I don't really understand the process
> by which Linux (and other OSs) discover the steady-state speed of a
> link? Anyone got a good pointer to how "slow start" and "fast start"
> work, and how it adjusts speed through time?

It's TCP that finds the limits. There are many docs out there, like
www.jessingale.dsl.pipex.com/tcp-noureddine02transmission.pdf

Andy.
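As a concrete reading of "attach a queue to the leaf, which may drop"
(rates and class layout invented for the example): give the bulk class
a short fifo, so when the link fills it is the bulk class that
overflows and drops first, while the interactive class keeps its
guaranteed rate:

  tc qdisc add dev eth0 root handle 1: htb default 20
  tc class add dev eth0 parent 1: classid 1:1 htb rate 200kbit
  tc class add dev eth0 parent 1:1 classid 1:10 htb rate 150kbit ceil 200kbit prio 0
  tc class add dev eth0 parent 1:1 classid 1:20 htb rate 50kbit ceil 200kbit prio 1
  # once the bulk leaf holds 10 packets, further arrivals are dropped
  tc qdisc add dev eth0 parent 1:20 handle 20: pfifo limit 10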
Re: [LARTC] HTB MPU
Jason Boxman wrote:
> On Friday 14 May 2004 05:55, Andy Furniss wrote:
>> If you can get a cell count from your modem you can work it out with
>> ping. I don't know what your pppoe is.
>
> I can probably get my USB Stingray out of the closet and hook it up.
> I think the Windows diagnostic utility for it actually included some
> stuff about frames and cell sizes. The browser-based diagnostics on
> my Westell don't include anything interesting.
>
>> Your upstream worst case depends on your bitrate and your MTU. If
>> it's 128k you add about 90ms, 256k 45ms for 1500b packets. What's
>> yours?
>
> My upstream is supposedly 256Kbps. I am running the ADSL modem in
> pass-through mode, so it gives my Linux router the live IP. When I
> did PPPoE internally I had an MTU of 1492 and used the RP PPPoE
> daemon.

Could be this then -

You can make HTB more accurate by setting HTB_HYSTERESIS to 0 in
net/sched/sch_htb.c.

To save time - if you built HTB as a module, you can probably (well, it
worked for me) get away with editing htb.c, doing

    make SUBDIRS=net/sched modules

and replacing /lib/modules/[kversion]/kernel/net/sched/htb.o with the
new htb.o from your source tree. If you are doing it live, stop shaping
and check with lsmod that modprobe -r gets rid of the old htb.o (do it
again if it's still there), then reload your shaping scripts.

Andy.
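Those delay figures are just the serialization time of one full-size
packet, easy to check:

  # 1500 bytes = 12000 bits
  echo "scale=1; 12000 / 128" | bc    # 93.7 -> about 90ms at 128kbit/s
  echo "scale=1; 12000 / 256" | bc    # 46.8 -> about 45ms at 256kbit/s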
Re: [LARTC] HTB, MPU, and suitable values
On Monday 17 May 2004 17:23, Ed Wildgoose wrote:
> Read the follow-ups to that post as well. Basically it's only an
> approximation. The "MPU" is basically pointing out that your ADSL
> stream is encapsulated in an ATM stream. ATM uses fixed-size 64 byte
> packets. You need at least 2 of these, hence the 106 figure for MPU.
> Now you also need to estimate overhead, which is going to be the size
> of the header on those ATM packets.

Now I'm confused. Is it 53 bytes or 64 bytes?

http://www.faqs.org/docs/Linux-HOWTO/ADSL-Bandwidth-Management-HOWTO.html

> However, that still leaves the "wasted space" on the end of small
> packets (eg those that take up 2.5 ATM cells, how much does the 0.5
> take up?).
>
> I suggested a crude way to tweak that patch (easy to see how it works
> if you look at the relevant lines in the orig file). However, I don't
> even have a working QOS system so I haven't even compiled it! Look up
> the specs for ATM though and you should be able to tweak that
> suggested line change and get something.

So the patch is supposed to increase the cost of dequeuing packets,
then, provided you know what numbers to use?

> I for one would be really interested to hear if it solves the
> problem!
>
> Ed W
[LARTC] 2 Questions on filtering incoming stuff
Two easy questions after having read the LARTC HOWTO document (which,
by the way, is a *fantastic* document. Congratulations to all who
contributed!)

First is: Can I prioritise my "drops" on incoming traffic when the link
is overloaded? i.e. instead of just tail dropping, can I "prefer" to
drop certain classes of traffic? If so, do I do this by setting up,
say, an HTB tree on the incoming side, but with the only action at the
leaf being to drop?

Second: Theoretically now. There appears to be no way to tweak "window
length" to throttle incoming data... But in theory, would a module
which delayed the outgoing ACKs have the same effect? Obviously this
module would need some sort of packet accounting from the incoming
interface in order to supply the outgoing filter with the info it
needed (not even sure if this design makes it hard to implement such a
thing?). Fast and selective ACKs and timeouts obviously make this very
hard to implement as well...

However, the main point is that I don't really understand the process
by which Linux (and other OSs) discover the steady-state speed of a
link. Anyone got a good pointer to how "slow start" and "fast start"
work, and how the speed is adjusted through time?

Thanks

Ed W
Re: [LARTC] HTB, MPU, and suitable values
> I imagine that 106 value is a reference to this post:
> http://mailman.ds9a.nl/pipermail/lartc/2004q2/012369.html
> ...
> It's my suspicion that the MPU and overhead options for HTB would
> assist in resolving this and enable me to resume using 190kbit
> instead of 160kbit for the outermost parent class. Is my suspicion
> correct?

Read the follow-ups to that post as well. Basically it's only an
approximation. The "MPU" is basically pointing out that your ADSL
stream is encapsulated in an ATM stream. ATM uses fixed-size 64 byte
packets. You need at least 2 of these, hence the 106 figure for MPU.
Now you also need to estimate overhead, which is going to be the size
of the header on those ATM packets.

However, that still leaves the "wasted space" on the end of small
packets (eg those that take up 2.5 ATM cells, how much does the 0.5
take up?).

I suggested a crude way to tweak that patch (easy to see how it works
if you look at the relevant lines in the orig file). However, I don't
even have a working QOS system so I haven't even compiled it! Look up
the specs for ATM though and you should be able to tweak that suggested
line change and get something.

I for one would be really interested to hear if it solves the problem!

Ed W
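Presumably the 106 figure is two cells' worth: a minimum-size TCP
packet (a 40-byte ACK) plus a few bytes of PPP/LLC framing no longer
fits in one 48-byte cell payload, so the smallest common packet costs
two 53-byte cells. Using the cell-rounding formula from elsewhere in
this thread, and assuming around 10 bytes of framing (the exact figure
depends on the encapsulation):

  datasize=50   # 40-byte ACK + ~10 bytes framing (assumed)
  echo $(( ( (datasize - 1) / 48 + 1 ) * 53 ))   # prints 106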
[LARTC] Re: Load Balancing 4 cable modems, followed nano.txt
Hello,

On Mon, 17 May 2004, Charles-Etienne.Dube wrote:

> I did some tests with 2 cable modems, but now it is installed in a
> production environment with 4 cable modems. At first, everything
> seemed to work fine. But now I had a couple of users tell me that
> some web pages were not available while others were, and it seems to
> be a masquerading problem, since when I downgraded the system to one
> cable modem and a normal routing table with only one default gateway,
> everything started working fine again (like before). I am sure that
> the non-working URLs were valid sites that were available at the time
> (e.g. www.yahoo.com, www.google.com, www.hotmail.com).
>
> Here are my questions:
>
> 1 - Did I use the right patch? I used "routes-2.4.20-9.diff"

It is the right one.

> I tried to apply "05_nf_reroute-2.4.20-9.diff". Even if I'm used to
> patching kernels, I don't understand that much about this process,
> and the patch program was giving error messages, while the one I used
> was applying without any error messages.

Stick with "routes-*.diff", because it is not enough to apply the
05_nf* patch; there are two diffs to apply before it.

> 2 - Should I use:
>
> iptables -t nat -A POSTROUTING -o IFE1 -s NWI/NMI -j SNAT --to IPE1
> iptables -t nat -A POSTROUTING -o IFE2 -s NWI/NMI -j SNAT --to IPE2
>
> instead of:
>
> iptables --table nat --append POSTROUTING --out-interface eth1 -j MASQUERADE
>
> Can anybody elaborate on the difference between these 2 commands?

-j MASQUERADE works for dynamic IPs and is recommended for complex
routing situations when using the "routes-*" patches.

> 3 - The 4 cable modems have a different IP, but two of them obtained
> an IP in the same address class, so they both have the same gateway.
> In my script I treated them as if they were each on different
> gateways, and I see traffic going on the 4 modems, so I don't think
> this is an issue, but I'd like to have some advice about this. What
> happens if all the IPs are renewed at the same time and all 4 modems
> obtain an IP in the same class with the same gateway... will it still
> work?

If the gateways are the same, I assume they are on different
interfaces.

> 4 - Do you think 4 cable modems is too much? Has it been tested?

I do not remember how many nexthops the ip utility allows for multipath
routes, but 16 should be possible.

> Help with this case would really be appreciated. I am ready to give
> any details that would be necessary for you to guide me further.

ip addr
ip rule
ip route list table all

> Thank You
>
> Charles-Etienne Dube

Regards

--
Julian Anastasov <[EMAIL PROTECTED]>
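For reference, the multipath default route for four modems would look
something like this; the gateway addresses and interface names here are
invented for the example:

  ip route replace default scope global \
      nexthop via 10.1.0.1 dev eth1 weight 1 \
      nexthop via 10.2.0.1 dev eth2 weight 1 \
      nexthop via 10.3.0.1 dev eth3 weight 1 \
      nexthop via 10.4.0.1 dev eth4 weight 1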
[LARTC] HTB, MPU, and suitable values
It seems Andreas Klauer's fairnat has experimental support for using
HTB's MPU and overhead options. From fairnat.config:

# Use MPU for HTB. From the LARTC Howto on MPU:
# "A zero-sized packet does not use zero bandwidth. For ethernet, no
# packet uses less than 64 bytes. The Minimum Packet Unit determines
# the minimal token usage for a packet."
HTB_MPU=0
# HTB_MPU=64  # Ethernet
# HTB_MPU=106 # According to Andy Furniss, this value is suited for DSL users

I imagine that 106 value is a reference to this post:

http://mailman.ds9a.nl/pipermail/lartc/2004q2/012369.html

The patch seems to be available here:

http://luxik.cdi.cz/~devik/qos/htb/v3/htb_tc_overhead.diff

In any case, I applied the patch to `tc` and recompiled. The resulting
binary let me set 'mpu' when using HTB, so I set it to 106 as suggested
above. As far as I can tell, nothing changed. Should there be some
notable outcome from setting this parameter, as I suspect there should,
or should I be using some other value?

Was there an HTB component to this patch as well? I patched `tc`, but
not HTB in my 2.6.6 kernel. I wasn't able to locate a kernel patch for
this; is there one?

Here's the actual configuration:

tc qdisc add dev eth0 root handle 1: htb default 90
tc class add dev eth0 parent 1: classid 1:1 htb rate 160kbit ceil \
    160kbit mpu 106
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 64kbit ceil \
    64kbit mpu 106 prio 0
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 96kbit ceil \
    160kbit mpu 106 prio 1
tc class add dev eth0 parent 1:1 classid 1:50 htb rate 8kbit ceil \
    160kbit mpu 106 prio 1
tc class add dev eth0 parent 1:1 classid 1:90 htb rate 8kbit ceil \
    160kbit mpu 106 prio 1
tc qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 20
tc qdisc add dev eth0 parent 1:20 handle 20: sfq perturb 20
tc qdisc add dev eth0 parent 1:50 handle 50: sfq perturb 20
tc qdisc add dev eth0 parent 1:90 handle 90: sfq perturb 20

My connection is an ADSL line. When the link is saturated with a large
quantity of small UDP packets (~100 bytes each), I find the modem
begins to queue locally when I use a rate of 190kbit for my parent
class. So I was forced to switch to 160kbit. That seems symptomatic of
HTB not knowing the true cost of sending a packet across the ADSL link,
which is of essential importance when there are many small packets.

It's my suspicion that the MPU and overhead options for HTB would
assist in resolving this and enable me to resume using 190kbit instead
of 160kbit for the outermost parent class. Is my suspicion correct?

Thanks.
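One likely answer to the kernel-patch question: for HTB, `tc` computes
the rate-to-time lookup table in userspace and hands it to the kernel,
so a patch that only changes how `tc` builds that table (mpu, overhead)
should not need a kernel counterpart. Assuming the patched `tc` also
accepts an 'overhead' keyword, a fuller invocation might look like
this, with 10 bytes as a placeholder per-packet overhead:

  tc class add dev eth0 parent 1: classid 1:1 htb rate 160kbit ceil \
      160kbit mpu 106 overhead 10

If only 'mpu' is recognised, the per-packet header cost has to be
folded into the rate figure instead.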