Re: [LARTC] IMQ stability
Hi Damion,

The original IMQ implementation is under development by a group of people working at www.linuximq.net . There you'll find patches for the latest kernels and iptables, a simple FAQ and a mailing list. IMQ is being used by a lot of people in different environments. Some of them, like mine, are production systems shaping a large amount of bandwidth. There are some known problems and we've been working on them. Two known issues are: some hangs when trying to shape locally generated traffic, and some rmmod problems on 2.6. The 2.6 version of IMQ is just a port of the original 2.4 version.

I would like to invite you to join our mailing list and give IMQ a try. Right now it is more stable than people have been saying, and there are some guys working on it, releasing patches for the latest kernels and iptables and looking to its future.

Good luck,

Andre

Damion de Soto wrote:
> Hi, I've never actually even tried to use the IMQ device before, but I've
> watched the emails go back and forth on various problems associated with
> it, and what looks like some general instability.
> How stable is it really? Is it suitable for full-time use on a large
> number of routers? Has anyone used it on ipsec0 + eth0 devices for
> shaping? And lastly, is there any difference between the IMQ
> implementation on 2.4 and that on 2.6? Are they both still being
> developed?
> thanks,

___
LARTC mailing list / [EMAIL PROTECTED]
http://mailman.ds9a.nl/mailman/listinfo/lartc HOWTO: http://lartc.org/
Re: [LARTC] IMQ Stability
On Fri, Jan 23, 2004 at 10:29:13AM -0700, Michael S. Kazmier wrote:
MSK> Hello all,
MSK> I have been doing a lot of archive searching over the last week reading
MSK> posts on IMQ and its apparent stability / instability. I have seen a
MSK> number of posts about it not being maintained as well. Can anyone talk
MSK> to me about IMQ's stability in a heavy throughput environment (20 Mbps)
MSK> and what was causing IMQ to fail, if you know.

I use it and it works OK for me. Traffic at some of my routers is up to 30-40 Mbit. IMQ has one trouble spot: don't assign an address to the imq interface, because the kernel crashes if you do.

--
Best regards,
Aleksander Trotsai aka MAGE-RIPE aka MAGE-UANIC
My PGP key at ftp://blackhole.adamant.ua/pgp/trotsai.key[.asc]
Big trouble - ..disk or the processor is on fire.
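For reference, a minimal sketch of the usual IMQ setup this thread is discussing. The device names, rate and iptables rule are illustrative assumptions, not taken from Aleksander's configuration; note that imq0 is only brought up, never given an IP address, per the crash warning above:

```shell
# Sketch of a typical IMQ ingress-shaping setup on a patched kernel.
# Device names and the 20mbit rate are assumptions for illustration.
modprobe imq numdevs=1
ip link set imq0 up          # up, but deliberately no 'ip addr add' -- see above

# Redirect traffic arriving on eth0 through imq0 (IMQ target from the patch)
iptables -t mangle -A PREROUTING -i eth0 -j IMQ --todev 0

# Shape on imq0 as if it were a normal egress interface
tc qdisc add dev imq0 root handle 1: htb default 10
tc class add dev imq0 parent 1: classid 1:10 htb rate 20mbit
```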
Re: [LARTC] IMQ Stability
On Jan 26, Michael S. Kazmier wrote:
> Hello Alex,
>
> Perhaps I missed something below which ties eth0 and eth1 to the PPP pipe,
> or it's just my unfamiliarity with PPP.

Sorry, I should have made it cleaner. If you read up on the Advanced Routing HOWTO it is hopefully easy to understand. Let's say:

  [EMAIL PROTECTED]:~$ cat /etc/iproute2/rt_tables
  #
  # reserved values
  #
  255     local
  254     main
  253     default
  0       unspec
  #
  # local
  #
  #1      inr.ruhep
  # inskipp
  32      ppp-upstream
  33      ppp-downstream

You then type (something along the lines of):

  ip route add default dev ppp1 table ppp-upstream
  ip route add default dev ppp0 table ppp-downstream
  ip rule add from 10.0.0.0/8 iif eth1 table ppp-upstream
  ip rule add to 10.0.0.0/8 iif eth0 table ppp-downstream
  ip route flush cache

In summary, this sets up Linux to do exactly what is in the diagram (below). The nice thing is that after the above is set up you treat it as if it were a physical interface; it is a real ppp session. Any traffic that goes into ppp0 appears on ppp1 and vice versa; treat it like a fancy wormhole :)

The advantage here over the IMQ-ng that is being made, from what I understand, is that the only patch you need is one to bypass connection tracking on the Internet-bound traffic from eth1 (for techie reasons); when it 'appears' from ppp1, connection tracking should be allowed to continue. This is where the RAW netfilter patch comes into play. Although you are swapping one kernel patch for another, the RAW one looks like it is going to be around much longer and actually be maintained. The other very important point is that you can now (if you think about it; I will leave it as an exercise for you) use it to simulate those IP-aliasing interfaces and shape on that basis per pipe. The clue is true _source_ based routing ;)

> Regardless, an interesting methodology.
> Do you think you could do the following:
>
> The reason I ask is that I would like to, at the PPP level, apply CBQ or
> HTB rate shaping to each of my end users (i.e. limit traffic to 256K or
> something like that). And then, after each customer has their rate
> shaping, at the ETH level I would like to prioritize traffic (i.e. all
> www prio 3, ssh and telnet prio 1, ftp prio 4, everything else prio 7).
>
> Thoughts?

In theory I guess you could set up a Linux bridge over the ppp-pipe, however there is no point (from what I can see) as you are NATing, so the box is the default gateway for the other machines. More importantly, if you want a bridge, why not just forget about the ppp-pipe and bridge over eth0<->eth1? This is what my jdg-qos-script[1] has done from more or less day one.

Anyway, feedback would be great on the above idea.

Regards

Alex

[1] http://www.digriz.org.uk/jdg-qos-script/
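The two-level policy Michael asks about could be sketched roughly as below. All interface names, customer IPs, rates and class IDs are assumptions for illustration; this is not a tested configuration:

```shell
# Sketch: per-customer HTB limits on the ppp side, service prioritisation
# with a PRIO qdisc on the Ethernet side. Names/rates are assumptions.

# 1) Per-user rate limiting on ppp1 (LAN-facing end of the pipe),
#    one HTB class + u32 filter per customer IP
tc qdisc add dev ppp1 root handle 1: htb
tc class add dev ppp1 parent 1: classid 1:1 htb rate 256kbit ceil 256kbit
tc filter add dev ppp1 parent 1: protocol ip u32 \
    match ip dst 10.0.0.2/32 flowid 1:1      # hypothetical customer

# 2) Service prioritisation on eth1: 8 bands, everything defaults to the last
tc qdisc add dev eth1 root handle 2: prio bands 8 \
    priomap 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7
tc filter add dev eth1 parent 2: protocol ip u32 \
    match ip sport 22 0xffff flowid 2:1      # ssh -> highest band
tc filter add dev eth1 parent 2: protocol ip u32 \
    match ip sport 80 0xffff flowid 2:3      # www -> middle band
tc filter add dev eth1 parent 2: protocol ip u32 \
    match ip sport 21 0xffff flowid 2:4      # ftp -> lower band
```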
Re: [LARTC] IMQ Stability
> the new imq driver that I am developing will have unlimited possibilities.
> it will be a fake interface which passes all IP traffic without exception,
> no matter which direction, destination and so on.
> even locally generated and received traffic should pass through it.

May I suggest that if it's new code with a new approach, it should get a different name?

Rubens
RE: [LARTC] IMQ Stability
Hello Alex,

Perhaps I missed something below which ties eth0 and eth1 to the PPP pipe, or it's just my unfamiliarity with PPP. Regardless, an interesting methodology. Do you think you could do the following:

The reason I ask is that I would like to, at the PPP level, apply CBQ or HTB rate shaping to each of my end users (i.e. limit traffic to 256K or something like that). And then, after each customer has their rate shaping, at the ETH level I would like to prioritize traffic (i.e. all www prio 3, ssh and telnet prio 1, ftp prio 4, everything else prio 7).

Thoughts?

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Alexander Clouter
Sent: Saturday, January 24, 2004 7:05 PM
To: [EMAIL PROTECTED]
Subject: Re: [LARTC] IMQ Stability

On Jan 24, [EMAIL PROTECTED] wrote:
> Thank you for the detailed discussion. There is no doubt that there is a
> need for an IMQ-type device/functionality. What would work really great,
> IMHO, is a "fake" or pseudo ethernet driver that simply sits as a shim
> between one or more real drivers. This fake device could allow us to
> "stack" qdiscs in a way that allows one to shape traffic in multiple
> "policies" - i.e., prioritize traffic AND allocate / rate-shape end users.
> I have actually thought of utilizing the kernel bonding driver for this -
> attaching only a single slave to it - but haven't had time as yet. Not
> sure that this would do anything for ingress shaping though.

I have been working on this using what I call a ppp-pipe. The result is

  Internet (eth0) <-> ppp0 - ppp1 <-> LAN (eth1) 10.0.0.0/8

where ppp0<->ppp1 is on the local machine (and simulates two NICs with a crossover cable between them in the same machine). What you throw in at ppp0 appears at ppp1 and vice versa. This works fine; it also means you can shape on the ppp0/ppp1 interfaces and leave all the NAT stuff on the real interfaces.
The command to create this ppp-pipe is (as root); so far I am not completely sure if you need to add ":" to the first pppd command for its parameters (you might also need 'xonxoff' too in both):

  # mkfifo /tmp/ppp-pipe
  # pppd noauth nodefaultroute notty < /tmp/ppp-pipe | pppd noauth \
        notty > /tmp/ppp-pipe

However there is a major problem... connection tracking. In the above setup you do

  iptables -t nat -I POSTROUTING -s 10.0.0.0/8 \
        -d ! 10.0.0.0/8 -o eth0 -j MASQUERADE

The '-o eth0' is very important. You also create some advanced routing bits to make all traffic crossing the router pass through the ppp-pipe; easy enough, but it depends on your needs. Conntrack unfortunately notices that you did not want to NAT the packet straight away when it arrives on eth1 (if you do, then you will be unable to shape fairly per IP, for example with ESFQ), but then later on, when the packet resurfaces at ppp0, the 'nat' table is skipped. The only way around this is to use the patch-o-matic RAW patch and instruct it to skip connection tracking for packets on eth1 destined for the Internet. As I am now pure 2.6.x goodness, I am in the middle of porting the patch myself (patch-o-matic-ng does not work for me; could just be me being lame though).

Sure, this is replacing one patch dependency with another; however IMQ really seems to have been left out to rot, whilst the RAW patch will probably stay better maintained; hell, it's in patch-o-matic for starters. Besides, there are lots of advantages to the ppp-pipe, as all you folks who want to shape over IP-aliased interfaces can just use cunning ppp-pipes instead, whilst still keeping things very simple. So far the above should work in non-NAT (or rather non-connection-tracking) setups where you want the equivalent of IP-aliased style shaping.

Anyway, thoughts would be appreciated; however when I was on #lartc it was its normal dead self, so I was left dead in the water myself :(

have fun

Alex
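The conntrack-bypass step Alex describes might look roughly like this. This is a sketch, not his actual rules: it assumes the netfilter 'raw' table and NOTRACK target from the patch-o-matic RAW patch (later mainline 2.6), with eth1 and 10.0.0.0/8 taken from his example:

```shell
# Sketch: skip connection tracking for Internet-bound LAN traffic on its
# first pass (arriving on eth1), as described above. The raw-table rule
# matches only '-i eth1', so when the packet re-enters the stack from the
# ppp-pipe it is tracked normally and NAT can apply on eth0.
iptables -t raw -A PREROUTING -i eth1 -s 10.0.0.0/8 \
         ! -d 10.0.0.0/8 -j NOTRACK

# MASQUERADE stays on the real external interface, as in Alex's setup:
iptables -t nat -I POSTROUTING -s 10.0.0.0/8 ! -d 10.0.0.0/8 \
         -o eth0 -j MASQUERADE
```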
Re: [LARTC] IMQ Stability
Hi Roy,

Excellent, Roy!!! Good job. Where can we get your IMQ port to test?

Best Regards

Remus

----- Original Message -----
From: "Roy" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Sunday, January 25, 2004 3:49 AM
Subject: Re: [LARTC] IMQ Stability

> > Internet (eth0) <-> ppp0 - ppp1 <-> LAN (eth1) 10.0.0.0/8
>
> this way doesn't seem excellent because it still lacks some functionality,
> and what about using an LO or dummy-type interface instead of ppp?
>
> the new imq driver that I am developing will have unlimited possibilities.
> it will be a fake interface which passes all IP traffic without exception,
> no matter which direction, destination and so on.
> even locally generated and received traffic should pass through it.
> I removed the iptables module, so there is no need to configure it;
> everything is caught. so you will be able to shape in + out in one.
>
> also I am thinking about chaining functionality. is there any need to make
> a chain of imq devices? (they would all get the same traffic) you would be
> able to use a few shapers then, but it will add latency.
>
> I have almost finished my driver, but unfortunately there is no way to
> avoid patching the kernel. I need to export the ip_finish_output2 and
> ip_local_deliver_finish functions, but I don't know how to do that, or
> where the best place is.
Re: [LARTC] IMQ Stability
Hi Roy,

This is great news! Shaping in+out at once is not always wanted... usually you want to shape each direction separately, because each direction has different bandwidth and limits. So I think it should be optional (i.e. you should be able to configure whether you want the ingress and/or the egress side).

Your efforts are highly appreciated!

Aron

---
From: "Roy" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Subject: Re: [LARTC] IMQ Stability
Date: Sun, 25 Jan 2004 05:49:15 +0200

> Internet (eth0) <-> ppp0 - ppp1 <-> LAN (eth1) 10.0.0.0/8

this way doesn't seem excellent because it still lacks some functionality, and what about using an LO or dummy-type interface instead of ppp?

the new imq driver that I am developing will have unlimited possibilities. it will be a fake interface which passes all IP traffic without exception, no matter which direction, destination and so on. even locally generated and received traffic should pass through it. I removed the iptables module, so there is no need to configure it; everything is caught. so you will be able to shape in + out in one.

also I am thinking about chaining functionality. is there any need to make a chain of imq devices? (they would all get the same traffic) you would be able to use a few shapers then, but it will add latency.

I have almost finished my driver, but unfortunately there is no way to avoid patching the kernel. I need to export the ip_finish_output2 and ip_local_deliver_finish functions, but I don't know how to do that, or where the best place is.
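With the classic IMQ patch, the direction separation Aron asks for is done by hand with two devices. A sketch, with assumed device names and rates:

```shell
# Sketch: separate ingress and egress shaping via two imq devices under the
# classic IMQ patch. Interfaces and rates are assumptions for illustration.
modprobe imq numdevs=2
ip link set imq0 up
ip link set imq1 up

iptables -t mangle -A PREROUTING  -i eth0 -j IMQ --todev 0   # ingress -> imq0
iptables -t mangle -A POSTROUTING -o eth0 -j IMQ --todev 1   # egress  -> imq1

# Asymmetric limits, one per direction
tc qdisc add dev imq0 root tbf rate 2mbit   burst 32k latency 50ms  # down
tc qdisc add dev imq1 root tbf rate 512kbit burst 32k latency 50ms  # up
```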
Re: [LARTC] IMQ Stability
> Internet (eth0) <-> ppp0 - ppp1 <-> LAN (eth1) 10.0.0.0/8

This way doesn't seem excellent because it still lacks some functionality; and what about using an LO or dummy-type interface instead of ppp?

The new imq driver that I am developing will have unlimited possibilities. It will be a fake interface which passes all IP traffic without exception, no matter which direction, destination and so on; even locally generated and received traffic should pass through it. I removed the iptables module, so there is no need to configure it; everything is caught. So you will be able to shape in + out in one place.

I am also thinking about chaining functionality. Is there any need to make a chain of imq devices? (They would all get the same traffic.) You would be able to use a few shapers then, but it will add latency.

I have almost finished my driver, but unfortunately there is no way to avoid patching the kernel. I need to export the ip_finish_output2 and ip_local_deliver_finish functions, but I don't know how to do that, or where the best place is.
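For what it's worth, exporting the two symbols Roy names would, on a 2.6-era kernel, look roughly like this. This is a sketch of the idea, not a tested patch; file locations are from the 2.6 source tree of the time, and the `static` qualifier on both function definitions would also have to be dropped:

```c
/* Sketch: making ip_finish_output2 and ip_local_deliver_finish available
 * to modules. Each EXPORT_SYMBOL line goes at the end of the .c file that
 * defines the function; both are 'static' in the stock kernel, so that
 * qualifier must be removed from the definitions as well. */

/* in net/ipv4/ip_output.c */
EXPORT_SYMBOL(ip_finish_output2);

/* in net/ipv4/ip_input.c */
EXPORT_SYMBOL(ip_local_deliver_finish);
```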
Re: [LARTC] IMQ Stability
On Jan 24, [EMAIL PROTECTED] wrote:
> Thank you for the detailed discussion. There is no doubt that there is a
> need for an IMQ-type device/functionality. What would work really great,
> IMHO, is a "fake" or pseudo ethernet driver that simply sits as a shim
> between one or more real drivers. This fake device could allow us to
> "stack" qdiscs in a way that allows one to shape traffic in multiple
> "policies" - i.e., prioritize traffic AND allocate / rate-shape end users.
> I have actually thought of utilizing the kernel bonding driver for this -
> attaching only a single slave to it - but haven't had time as yet. Not
> sure that this would do anything for ingress shaping though.

I have been working on this using what I call a ppp-pipe. The result is

  Internet (eth0) <-> ppp0 - ppp1 <-> LAN (eth1) 10.0.0.0/8

where ppp0<->ppp1 is on the local machine (and simulates two NICs with a crossover cable between them in the same machine). What you throw in at ppp0 appears at ppp1 and vice versa. This works fine; it also means you can shape on the ppp0/ppp1 interfaces and leave all the NAT stuff on the real interfaces.

The command to create this ppp-pipe is (as root); so far I am not completely sure if you need to add ":" to the first pppd command for its parameters (you might also need 'xonxoff' too in both):

  # mkfifo /tmp/ppp-pipe
  # pppd noauth nodefaultroute notty < /tmp/ppp-pipe | pppd noauth \
        notty > /tmp/ppp-pipe

However there is a major problem... connection tracking. In the above setup you do

  iptables -t nat -I POSTROUTING -s 10.0.0.0/8 \
        -d ! 10.0.0.0/8 -o eth0 -j MASQUERADE

The '-o eth0' is very important. You also create some advanced routing bits to make all traffic crossing the router pass through the ppp-pipe; easy enough, but it depends on your needs.
Conntrack unfortunately notices that you did not want to NAT the packet straight away when it arrives on eth1 (if you do, then you will be unable to shape fairly per IP, for example with ESFQ), but then later on, when the packet resurfaces at ppp0, the 'nat' table is skipped. The only way around this is to use the patch-o-matic RAW patch and instruct it to skip connection tracking for packets on eth1 destined for the Internet. As I am now pure 2.6.x goodness, I am in the middle of porting the patch myself (patch-o-matic-ng does not work for me; could just be me being lame though).

Sure, this is replacing one patch dependency with another; however IMQ really seems to have been left out to rot, whilst the RAW patch will probably stay better maintained; hell, it's in patch-o-matic for starters. Besides, there are lots of advantages to the ppp-pipe, as all you folks who want to shape over IP-aliased interfaces can just use cunning ppp-pipes instead, whilst still keeping things very simple. So far the above should work in non-NAT (or rather non-connection-tracking) setups where you want the equivalent of IP-aliased style shaping.

Anyway, thoughts would be appreciated; however when I was on #lartc it was its normal dead self, so I was left dead in the water myself :(

have fun

Alex
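The per-IP fair shaping Alex alludes to with ESFQ could be hung off the two pipe ends roughly like this. A sketch only: esfq is a patched (non-mainline) variant of sfq whose `hash src`/`hash dst` option gives per-IP fairness, and the rates are made-up examples; directions follow his routing tables (upstream out ppp1, downstream out ppp0):

```shell
# Sketch: per-IP fair sharing on the ppp-pipe ends using esfq (patched sfq).
# Rates and devices are assumptions for illustration.

# Downstream (toward the LAN) leaves via ppp0: fair-share per destination IP
tc qdisc add dev ppp0 root handle 1: htb default 10
tc class add dev ppp0 parent 1: classid 1:10 htb rate 2mbit
tc qdisc add dev ppp0 parent 1:10 esfq perturb 10 hash dst

# Upstream (toward the Internet) leaves via ppp1: fair-share per source IP
tc qdisc add dev ppp1 root handle 1: htb default 10
tc class add dev ppp1 parent 1: classid 1:10 htb rate 512kbit
tc qdisc add dev ppp1 parent 1:10 esfq perturb 10 hash src
```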
Re: [LARTC] IMQ Stability
Probably I am going to continue imq development, so I know something about it.

IMQ is very unpredictable: you can use it all week, or it may crash at once. And what is most strange, crashes occur everywhere in the kernel except in the driver itself; this could be a kernel bug as well. Under high load it crashes quite soon, while under low load it can hold forever; this probably depends on CPU speed. It also looks like it tends to crash if you try to shape locally generated traffic; if you use it for ingress only, it won't have many problems.

I have no hope of making it work as is. I rewrote the code completely a few times to no avail; probably this approach just can't work. I am going to use a completely different way to do the same job. imq tries to use a userspace-style queue, which doesn't like packets being dropped, and it seems there is no way to avoid dropping while doing traffic shaping. So I will use another way: completely removing packets from iptables at some point and transmitting them directly where needed, thus replacing part of the kernel code. This way I will at least be able to track down the bug.

P.S. iptables has another similar module (the ROUTE target). I tried it, and it works in some cases (I redirect traffic to the lo interface), but not very well.

----- Original Message -----
From: "Michael S. Kazmier" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, January 23, 2004 7:29 PM
Subject: [LARTC] IMQ Stability

> Hello all,
>
> I have been doing a lot of archive searching over the last week reading
> posts on IMQ and its apparent stability / instability. I have seen a
> number of posts about it not being maintained as well. Can anyone talk to
> me about IMQ's stability in a heavy throughput environment (20 Mbps) and
> what was causing IMQ to fail, if you know.
>
> Thanks,
>
> Mike
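Roy's ROUTE-target experiment might have looked something like the line below. This is a guess at the shape of it, not his actual rule: the ROUTE target and its `--oif` option come from netfilter patch-o-matic (it was never mainlined), and the interfaces are illustrative:

```shell
# Sketch: forcing matched packets out of a chosen interface with the
# patch-o-matic ROUTE target, e.g. redirecting to lo as Roy describes.
iptables -t mangle -A PREROUTING -i eth0 -j ROUTE --oif lo
```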
Re: [LARTC] IMQ Stability
Thank you for the detailed discussion. There is no doubt that there is a need for an IMQ type device/funtionality. What would work really great, IMHO, is a "fake" or psuedo ethernet driver that simply sits as a shim between one or more real drivers. This fake device could allow us to "Stack" qdiscs in a way to allow one to shape traffic in multiple "policies" - ie, prioritize traffic AND allocate / rate shape end users. I have actually thought of utilizing the kernel bonding driver for this - attaching only a single slave to it - but haven't had time as yet. Not sure that this would do anything for ingress shaping though. Thanks again... Mike > Probably I am going to continue imq development, so I know about it > something. > > IMQ is very unpredictable you can use it all week or it may crash at once. > and what is the most strange - crashes osccur everywhere in the kernel > except in the driver itself > this can be kernel bug as well. > under high loag it crashes quite soom while in low load it can hold > forewer > this probably depends on cpu speed and looks that it tends to crash if you > try to shape localy generated trafic > if you use it for ingress only it wont have much problems. > > I have no hope to make it work, I rewrote the code completely few times > and > no use > probably this way just cant work. > > I am going to use completely other way to do the same job. > imq is trying to use userspace queue which dont like when packets are > droped > and seems there is no way to avoid droping > while doing trafic shaping, so I will use another way by completely > removing > packets from iptables at some place and transmitting them directly where > needed. > thus replacing part of kernel code. > this way I will be able at least to track the bug. > > P.S. iptables have another similar module ( ROUTE target ) i tryed it and > it > works in some cases ( i redirect trafic to lo interface) but not very > good. > > > - Original Message - > From: "Michael S. 
Kazmier" <[EMAIL PROTECTED]> > To: <[EMAIL PROTECTED]> > Sent: Friday, January 23, 2004 7:29 PM > Subject: [LARTC] IMQ Stability > > >> Hello all, >> >> I have been doing a lot of archive searching over the last week reading >> posts on IMQ and it's apparent stability / instability. I have seen a >> number of posts about it not being maintained as well. Can anyone talk >> to >> me about IMQ's stability in a heavy throughput environment (20 Mbps) and >> what was causing IMQ to fail if you know. >> >> Thanks, >> >> Mike >> >> ___ >> LARTC mailing list / [EMAIL PROTECTED] >> http://mailman.ds9a.nl/mailman/listinfo/lartc HOWTO: http://lartc.org/ >> > > ___ LARTC mailing list / [EMAIL PROTECTED] http://mailman.ds9a.nl/mailman/listinfo/lartc HOWTO: http://lartc.org/