[LARTC] Parent shaping
Hi,

Is it possible to shape the parent class at the parent's ceil even when the total of the children's ceils is more than the parent's? Thanks.

___ LARTC mailing list LARTC@mailman.ds9a.nl http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc
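For reference, HTB does allow a child's ceil to be configured higher than its parent's, but borrowing is always bounded by the parent: the children can never jointly exceed the parent's ceil. A minimal sketch of the situation asked about (device name, rates, and class IDs are illustrative, not from the post):

```shell
# Root HTB qdisc with a default class (eth0 and all rates are examples)
tc qdisc add dev eth0 root handle 1: htb default 20

# Parent class capped at 1mbit
tc class add dev eth0 parent 1: classid 1:1 htb rate 1mbit ceil 1mbit

# Two children whose ceils sum to 1.6mbit -- more than the parent's ceil.
# Each child may individually burst toward its own ceil, but together they
# cannot exceed the parent's 1mbit, because bandwidth above a child's rate
# must be borrowed from the parent class.
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 256kbit ceil 800kbit
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 256kbit ceil 800kbit
```

So the parent's ceil still holds even when the child ceils oversubscribe it.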
Re: [LARTC] Re: multiple routing tables for internal router programs
On Thu, Jun 14, 2007 at 11:50:30AM +0800, Salim S I wrote:
> I solved it, though it is a bit ugly.
>
> I have two more rules now in ip ru:
>
> 32150: from all lookup main
> 32201: from all fwmark 0x200/0x200 lookup wan1_route
> 32202: from all fwmark 0x400/0x400 lookup wan2_route
> 32203: from 10.20.0.137 lookup wan1_route
> 32204: from 10.2.3.107 lookup wan2_route
> 32205: from all lookup catch_all
> 32766: from all lookup main
>
> I did not want to include the WAN IP anywhere, since it may be dynamic,
> but it seems there is no choice.

I ran into the same problem. I capture the link information at ip-up time
for ppp/pppoe and at DHCP time for the cable modem, then I fire off a
script that pulls down all the ip rules and routes and rebuilds them from
scratch (as well as the specialised iptables rules). This should only
happen when I lose a connection, so it should be okay.

> And then two rules in the OUTPUT chain:
>
> iptables -t mangle -A OUTPUT -o eth2 -j LB1
> iptables -t mangle -A OUTPUT -o eth3 -j LB2
>
> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Salim S I
> Sent: Wednesday, June 13, 2007 12:08 PM
> To: 'Peter Rabbitson'
> Cc: lartc@mailman.ds9a.nl
> Subject: RE: [LARTC] Re: multiple routing tables for internal router
> programs
>
> My configuration:
>
> [EMAIL PROTECTED]:~# ip ru
> 0: from all lookup local
> 32150: from all lookup main
> 32201: from all fwmark 0x200/0x200 lookup wan1_route
> 32202: from all fwmark 0x400/0x400 lookup wan2_route
> 32203: from all lookup catch_all
> 32766: from all lookup main
> 32767: from all lookup default
>
> [EMAIL PROTECTED]:~# ip ro li ta main
> 192.168.100.0/24 dev eth0 proto kernel scope link src 192.168.100.254
> 10.20.0.0/24 dev eth2 proto kernel scope link src 10.20.0.137
> 192.168.1.0/24 dev eth10 proto kernel scope link src 192.168.1.254
> 10.2.3.0/24 dev eth3 proto kernel scope link src 10.2.3.107
> 127.0.0.0/8 dev lo scope link
>
> [EMAIL PROTECTED]:~# ip ro li ta wan1_route
> default via 10.20.0.1 dev eth2 proto static
>
> [EMAIL PROTECTED]:~# ip ro li ta wan2_route
> default via 10.2.3.254 dev eth3 proto static
>
> [EMAIL PROTECTED]:~# ip ro li ta catch_all
> default proto static
> nexthop via 10.20.0.1 dev eth2 weight 1
> nexthop via 10.2.3.254 dev eth3 weight 1
>
> The catch_all table comes into play only for local packets. All
> forwarded packets are marked in mangle PREROUTING with 0x200 or 0x400.
>
> Even if it is not the load-balancing ping script, there may be other
> apps using domain names instead of IP addresses; they might still fail,
> right?
>
> The problem happens when one of the links goes down (not the nexthop,
> but beyond it). Then the kernel will pick an interface, and the wrong
> src IP, for local packets.
>
> -----Original Message-----
> From: Peter Rabbitson [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, June 12, 2007 7:24 PM
> To: Salim S I
> Cc: lartc@mailman.ds9a.nl
> Subject: Re: [LARTC] Re: multiple routing tables for internal router
> programs
>
> Salim S I wrote:
> > Thanks! I get it now.
> > But why is the src address for the interface wrong?
> > In my case eth2 has a.b.c.d and eth3 has p.q.r.s.
> >
> > DNS queries going through eth2 have p.q.r.s as src address, and those
> > going through eth3 have a.b.c.d. Something wrong with routing?
>
> Possible. Post the full configuration and someone might be able to help.
>
> > I was wondering how the ping scripts (to check the link status) of
> > others work if a domain name is used.
>
> Don't know about others; I personally use IP addresses :)
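The "pull down all the ip ru & ip ro and rebuild from scratch" approach described above might look something like the following. This is a hypothetical sketch: the table names, marks, gateways, and addresses are taken from the thread's configuration, but the script itself is my reconstruction, not the poster's actual script.

```shell
#!/bin/sh
# Rebuild the per-WAN policy routing from current link information.
# Intended to be fired from ip-up / the DHCP hook after a link change.

WAN1_DEV=eth2; WAN1_GW=10.20.0.1;  WAN1_IP=10.20.0.137
WAN2_DEV=eth3; WAN2_GW=10.2.3.254; WAN2_IP=10.2.3.107

# Tear down old state (ignore errors if a rule or table is already gone)
ip rule del fwmark 0x200/0x200 table wan1_route 2>/dev/null
ip rule del fwmark 0x400/0x400 table wan2_route 2>/dev/null
ip rule del from "$WAN1_IP" table wan1_route 2>/dev/null
ip rule del from "$WAN2_IP" table wan2_route 2>/dev/null
ip route flush table wan1_route 2>/dev/null
ip route flush table wan2_route 2>/dev/null

# Rebuild from the freshly captured link information
ip route add default via "$WAN1_GW" dev "$WAN1_DEV" table wan1_route
ip route add default via "$WAN2_GW" dev "$WAN2_DEV" table wan2_route
ip rule add pref 32201 fwmark 0x200/0x200 table wan1_route
ip rule add pref 32202 fwmark 0x400/0x400 table wan2_route
ip rule add pref 32203 from "$WAN1_IP" table wan1_route
ip rule add pref 32204 from "$WAN2_IP" table wan2_route

# Old cached routes may still point at the dead link
ip route flush cache
```

Flushing the route cache at the end matters on kernels of that era; otherwise established flows can keep using the stale path.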
RE: [LARTC] Re: multiple routing tables for internal router programs
I solved it, though it is a bit ugly.

I have two more rules now in ip ru:

32150: from all lookup main
32201: from all fwmark 0x200/0x200 lookup wan1_route
32202: from all fwmark 0x400/0x400 lookup wan2_route
32203: from 10.20.0.137 lookup wan1_route
32204: from 10.2.3.107 lookup wan2_route
32205: from all lookup catch_all
32766: from all lookup main

I did not want to include the WAN IP anywhere, since it may be dynamic,
but it seems there is no choice.

And then two rules in the OUTPUT chain:

iptables -t mangle -A OUTPUT -o eth2 -j LB1
iptables -t mangle -A OUTPUT -o eth3 -j LB2

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Salim S I
Sent: Wednesday, June 13, 2007 12:08 PM
To: 'Peter Rabbitson'
Cc: lartc@mailman.ds9a.nl
Subject: RE: [LARTC] Re: multiple routing tables for internal router
programs

My configuration:

[EMAIL PROTECTED]:~# ip ru
0: from all lookup local
32150: from all lookup main
32201: from all fwmark 0x200/0x200 lookup wan1_route
32202: from all fwmark 0x400/0x400 lookup wan2_route
32203: from all lookup catch_all
32766: from all lookup main
32767: from all lookup default

[EMAIL PROTECTED]:~# ip ro li ta main
192.168.100.0/24 dev eth0 proto kernel scope link src 192.168.100.254
10.20.0.0/24 dev eth2 proto kernel scope link src 10.20.0.137
192.168.1.0/24 dev eth10 proto kernel scope link src 192.168.1.254
10.2.3.0/24 dev eth3 proto kernel scope link src 10.2.3.107
127.0.0.0/8 dev lo scope link

[EMAIL PROTECTED]:~# ip ro li ta wan1_route
default via 10.20.0.1 dev eth2 proto static

[EMAIL PROTECTED]:~# ip ro li ta wan2_route
default via 10.2.3.254 dev eth3 proto static

[EMAIL PROTECTED]:~# ip ro li ta catch_all
default proto static
nexthop via 10.20.0.1 dev eth2 weight 1
nexthop via 10.2.3.254 dev eth3 weight 1

The catch_all table comes into play only for local packets. All forwarded
packets are marked in mangle PREROUTING with 0x200 or 0x400.

Even if it is not the load-balancing ping script, there may be other apps
using domain names instead of IP addresses; they might still fail, right?

The problem happens when one of the links goes down (not the nexthop, but
beyond it). Then the kernel will pick an interface, and the wrong src IP,
for local packets.

-----Original Message-----
From: Peter Rabbitson [mailto:[EMAIL PROTECTED]
Sent: Tuesday, June 12, 2007 7:24 PM
To: Salim S I
Cc: lartc@mailman.ds9a.nl
Subject: Re: [LARTC] Re: multiple routing tables for internal router
programs

Salim S I wrote:
> Thanks! I get it now.
> But why is the src address for the interface wrong?
> In my case eth2 has a.b.c.d and eth3 has p.q.r.s.
>
> DNS queries going through eth2 have p.q.r.s as src address, and those
> going through eth3 have a.b.c.d. Something wrong with routing?

Possible. Post the full configuration and someone might be able to help.

> I was wondering how the ping scripts (to check the link status) of
> others work if a domain name is used.

Don't know about others; I personally use IP addresses :)
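The thread never shows what the LB1 and LB2 chains contain. A plausible sketch, assuming they simply set the fwmark that the wan1_route/wan2_route rules match on (the chain bodies below are my assumption, not from the thread; only the two OUTPUT rules are quoted from it):

```shell
# Hypothetical contents of the LB1/LB2 chains referenced above: set the
# routing mark so locally generated packets leaving eth2/eth3 are looked
# up in the matching per-WAN table (fwmark 0x200 -> wan1_route, 0x400 ->
# wan2_route, as in the poster's ip rules).
iptables -t mangle -N LB1
iptables -t mangle -A LB1 -j MARK --set-mark 0x200
iptables -t mangle -N LB2
iptables -t mangle -A LB2 -j MARK --set-mark 0x400

# As in the thread: route each WAN device's local traffic through its chain
iptables -t mangle -A OUTPUT -o eth2 -j LB1
iptables -t mangle -A OUTPUT -o eth3 -j LB2
```

Note that in OUTPUT the mark is applied after the initial routing decision, which is why a rerouting check (and the extra `from <WAN IP>` rules the poster added) ends up being needed for local packets.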
Re: [LARTC] shaping using source IP after NAT
Ethy H. Brito wrote:
> On Mon, 11 Jun 2007 22:02:31 +0300 VladSun <[EMAIL PROTECTED]> wrote:
>> TC is performed after POSTROUTING, so you cannot do any IP-related TC
>> filtering. You can use CPU-friendly patches for iptables like IPMARK
>> or IPCLASSIFY. Take a look at them.
>
> Ok. Can someone point me in the right direction to add IPMARK kernel
> support? I downloaded today's patch-o-matic snapshot and there is no
> IPMARK there. I have iptables-1.3.7 and kernel 2.6.21.1 sources (the
> distro is Slackware 11.0).
>
> The curious thing is that IPMARK is in the iptables man page, but I get
> an error when I execute it. It says it could not find
> /usr/lib/iptables/libipt_IPMARK.so:
>
> # locate -i IPMARK
> # (no output here)
>
> Regards. Ethy

Try "./runme download" in the PoM directory. It should work if there is a
download URL defined for IPMARK in the source.list file in the PoM
directory. If it doesn't work, try downloading an older version of PoM;
the netfilter team refused to include IPMARK in the official versions
some time ago.

Regards
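The suggested patch-o-matic steps might look roughly like this. Treat it as an outline under assumptions: the source-tree paths are examples, and the exact prompts and environment variables vary between PoM versions.

```shell
# Fetch the external patches (including IPMARK, if source.list has a
# valid URL for it) into the patch-o-matic tree, then apply IPMARK
# against the kernel and iptables sources.
cd patch-o-matic-ng
./runme download

# Point PoM at the source trees (paths are illustrative) and apply
KERNEL_DIR=/usr/src/linux \
IPTABLES_DIR=/usr/src/iptables-1.3.7 \
    ./runme IPMARK

# Afterwards: rebuild the kernel with the new netfilter option enabled,
# and rebuild/reinstall iptables so libipt_IPMARK.so gets installed --
# the missing .so is exactly the error reported above.
```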
Re: [LARTC] shaping using source IP after NAT
On Mon, 11 Jun 2007 22:02:31 +0300 VladSun <[EMAIL PROTECTED]> wrote:
> TC is performed after POSTROUTING, so you cannot do any IP-related TC
> filtering. You can use CPU-friendly patches for iptables like IPMARK or
> IPCLASSIFY. Take a look at them.

Ok. Can someone point me in the right direction to add IPMARK kernel
support? I downloaded today's patch-o-matic snapshot and there is no
IPMARK there. I have iptables-1.3.7 and kernel 2.6.21.1 sources (the
distro is Slackware 11.0).

The curious thing is that IPMARK is in the iptables man page, but I get
an error when I execute it. It says it could not find
/usr/lib/iptables/libipt_IPMARK.so:

# locate -i IPMARK
# (no output here)

Regards. Ethy
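Once IPMARK is built in, the idea is that a single rule derives the fwmark from the packet's IP address, replacing one MARK rule per client; tc then maps the mark to a class with an fw filter. A sketch, with the caveat that the exact IPMARK options shown (`--addr`, `--and-mask`) are from my memory of the old PoM extension and the addresses, masks, and class IDs are purely illustrative:

```shell
# Mark downstream packets with the low byte of the client's address,
# e.g. 192.168.0.5 -> mark 0x5 (this runs after NAT, which is the point
# of the thread: tc alone cannot see per-client IPs there).
iptables -t mangle -A POSTROUTING -o eth0 -d 192.168.0.0/24 \
    -j IPMARK --addr dst --and-mask 0xff

# Map mark 5 straight to an HTB class with an fw filter:
tc filter add dev eth0 parent 1: protocol ip handle 5 fw classid 1:5
```

The CPU-friendliness VladSun mentions comes from this being O(1): one mangle rule and one fw filter per mark, instead of hundreds of u32 filters or per-IP MARK rules.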
[LARTC] HTB deadlock
Greetings,

I've been experiencing problems with HTB where the whole machine locks up. This usually happens when the whole qdisc is being removed, and occasionally when a leaf is being removed; the common factor is that it always happens while some sort of removal is in progress. The console output I captured is at the end of this message.

The same behaviour exists from vanilla 2.6.19.7 and above. It is possible that the problem also exists in earlier versions, but I did not go further back.

I also believe I have found where the actual problem is: qdisc_destroy() is always called with dev->queue_lock held, and htb_destroy(), up the stack, uses del_timer_sync() to deactivate the HTB qdisc timers. From the comments in the source where del_timer_sync() is defined:

---copy/paste---
/**
 * del_timer_sync - deactivate a timer and wait for the handler to finish.
 * @timer: the timer to be deactivated
 *
 * This function only differs from del_timer() on SMP: besides deactivating
 * the timer it also makes sure the handler has finished executing on other
 * CPUs.
 *
 * Synchronization rules: Callers must prevent restarting of the timer,
 * otherwise this function is meaningless. It must not be called from
 * interrupt contexts. The caller must not hold locks which would prevent
 * completion of the timer's handler. The timer's handler must not call
 * add_timer_on(). Upon exit the timer is not queued and the handler is
 * not running on any CPU.
 *
 * The function returns whether it has deactivated a pending timer or not.
 */
---copy/paste---

Now, htb_rate_timer() does exactly what appears to be the source of the problem: it tries to obtain dev->queue_lock. Given the right moment (the timer fires the handler while qdisc_destroy() is holding the lock), the system locks up: del_timer_sync() waits for the handler to finish, while the handler waits for dev->queue_lock. Of course, I could also be completely wrong here and missing something not so obvious.
I could also attempt to fix this, but I haven't dealt with this code in the past, so I was hoping someone with better insight might just have an elegant solution up his sleeve.

Best regards,
Ranko

PS: If this list is not the right place for this report - please let me know.

---CONSOLE (2.6.19.7)---
BUG: soft lockup detected on CPU#3!
[] softlockup_tick+0x93/0xc2
[] update_process_times+0x26/0x5c
[] smp_apic_timer_interrupt+0x97/0xb2
[] apic_timer_interrupt+0x1f/0x24
[] klist_next+0x4/0x8a
[] _spin_unlock_irqrestore+0xa/0xc
[] try_to_del_timer_sync+0x47/0x4f
[] del_timer_sync+0xe/0x14
[] htb_destroy+0x20/0x7b [sch_htb]
[] qdisc_destroy+0x44/0x8d
[] htb_destroy_class+0xd0/0x12d [sch_htb]
[] htb_destroy_class+0x52/0x12d [sch_htb]
[] htb_destroy+0x3f/0x7b [sch_htb]
[] qdisc_destroy+0x44/0x8d
[] htb_destroy_class+0xd0/0x12d [sch_htb]
[] htb_destroy_class+0x52/0x12d [sch_htb]
[] htb_destroy+0x3f/0x7b [sch_htb]
[] qdisc_destroy+0x44/0x8d
[] tc_get_qdisc+0x1a3/0x1ef
[] tc_get_qdisc+0x0/0x1ef
[] rtnetlink_rcv_msg+0x158/0x215
[] rtnetlink_rcv_msg+0x0/0x215
[] netlink_run_queue+0x88/0x11d
[] rtnetlink_rcv+0x26/0x42
[] netlink_data_ready+0x12/0x54
[] netlink_sendskb+0x1c/0x33
[] netlink_sendmsg+0x1ee/0x2d7
[] sock_sendmsg+0xe5/0x100
[] autoremove_wake_function+0x0/0x37
[] autoremove_wake_function+0x0/0x37
[] sock_sendmsg+0xe5/0x100
[] copy_from_user+0x33/0x69
[] sys_sendmsg+0x12d/0x243
[] _read_unlock_irq+0x5/0x7
[] find_get_page+0x37/0x42
[] filemap_nopage+0x30c/0x3a3
[] __handle_mm_fault+0x21c/0x943
[] _spin_unlock_bh+0x5/0xd
[] sock_setsockopt+0x63/0x59d
[] anon_vma_prepare+0x1b/0xcb
[] sys_socketcall+0x24f/0x271
[] do_page_fault+0x0/0x600
[] sysenter_past_esp+0x56/0x79
===
BUG: soft lockup detected on CPU#1!
[] softlockup_tick+0x93/0xc2
[] update_process_times+0x26/0x5c
[] smp_apic_timer_interrupt+0x97/0xb2
[] apic_timer_interrupt+0x1f/0x24
[] blk_do_ordered+0x70/0x27e
[] _raw_spin_lock+0xaa/0x13e
[] htb_rate_timer+0x18/0xc4 [sch_htb]
[] run_timer_softirq+0x163/0x189
[] htb_rate_timer+0x0/0xc4 [sch_htb]
[] __do_softirq+0x70/0xdb
[] do_softirq+0x3b/0x42
[] smp_apic_timer_interrupt+0x9c/0xb2
[] apic_timer_interrupt+0x1f/0x24
[] mwait_idle_with_hints+0x3b/0x3f
[] mwait_idle+0xc/0x1b
[] cpu_idle+0x63/0x79
===
BUG: soft lockup detected on CPU#2!
[] softlockup_tick+0x93/0xc2
[] update_process_times+0x26/0x5c
[] smp_apic_timer_interrupt+0x97/0xb2
[] apic_timer_interrupt+0x1f/0x24
[] blk_do_ordered+0x70/0x27e
[] _raw_spin_lock+0xaa/0x13e
[] dev_queue_xmit+0x53/0x2e4
[] neigh_connected_output+0x80/0xa0
[] ip_output+0x1b5/0x24b
[] ip_finish_output+0x0/0x192
[] ip_forward+0x1c8/0x2b9
[] ip_forward_finish+0x0/0x37
[] ip_rcv+0x2a5/0x538
[] ip_rcv_finish+0x0/0x2aa
[] __netdev_alloc_skb+0x12/0x2a
[] ip_rcv+0x0/0x538
[] netif_receive_skb+0x218/0x318
[] bitmap_get_counter+0x41/0x1e6
[] e1000_clean_rx_irq+0x12c/0x4ef [e1000]
[] e1000_cle
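For anyone trying to reproduce the report above: the lockup is tied to removal while the rate timer is live, so a minimal trigger might look like the following. This is a hypothetical sketch (interface, rates, and class layout are illustrative, not from the report), and on an affected SMP kernel the final `del` is the step that can race with htb_rate_timer().

```shell
# Build a small HTB hierarchy and start traffic through it, so the HTB
# rate timer is active.
tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:1 htb rate 10mbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 5mbit ceil 10mbit

# ... generate some traffic on eth0 here ...

# Removal walks qdisc_destroy -> htb_destroy -> del_timer_sync with
# dev->queue_lock held; if htb_rate_timer() fires at the same moment and
# spins on dev->queue_lock, the two wait on each other (per the CPU#3 vs
# CPU#1 traces above).
tc qdisc del dev eth0 root
```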