I am using the lxc tools (0.7.4-0ubuntu7.1) on an Ubuntu Natty host with kernel 2.6.38-11-server, running several Lucid containers with iptables rules inside. Sometimes (it is quite difficult to reproduce), stopping a container makes the host kernel crash:

# lxc-stop -n lucid
[system hangs]

Oct 14 16:12:07 lab2 kernel: [ 1629.627196] br0: port 2(vethlucid1) entering forwarding state
Oct 14 16:12:07 lab2 kernel: [ 1629.781408] br0: port 2(vethlucid1) entering disabled state
Oct 14 16:12:09 lab2 kernel: [ 1629.839799] ------------[ cut here ]------------
Oct 14 16:12:09 lab2 kernel: [ 1629.840899] kernel BUG at /build/buildd/linux-2.6.38/net/netfilter/xt_recent.c:610!
Oct 14 16:12:09 lab2 kernel: [ 1629.873678] invalid opcode: 0000 [#1] SMP
Oct 14 16:12:09 lab2 kernel: [ 1629.905346] last sysfs file: /sys/devices/system/cpu/cpu7/cache/index2/shared_cpu_map
Oct 14 16:12:09 lab2 kernel: [ 1629.969152] CPU 7
Oct 14 16:12:09 lab2 kernel: [ 1629.969615] Modules linked in: xt_multiport xt_recent ipt_LOG xt_limit xt_state xt_tcpudp iptable_mangle iptable_nat nf_nat nf_conntrack_ipv4 nf_conntrack nf_defrag_ipv4 iptable_raw iptable_filter ip_tables x_tables veth mptctl vesafb bridge stp lp i7core_edac ghes edac_core hed psmouse ioatdma serio_raw joydev parport dca raid10 raid456 async_pq async_xor xor async_memcpy async_raid6_recov usbhid hid mptsas mptscsih ahci mptbase libahci raid6_pq async_tx scsi_transport_sas raid1 raid0 multipath e1000e linear btrfs floppy zlib_deflate libcrc32c
Oct 14 16:12:09 lab2 kernel: [ 1630.240144]
Oct 14 16:12:09 lab2 kernel: [ 1630.275030] Pid: 230, comm: kworker/u:5 Not tainted 2.6.38-11-server #50-Ubuntu Supermicro X8STi/X8STi
Oct 14 16:12:09 lab2 kernel: [ 1630.347107] RIP: 0010:[<ffffffffa02e98dd>] [<ffffffffa02e98dd>] recent_net_exit+0x3d/0x40 [xt_recent]
Oct 14 16:12:09 lab2 kernel: [ 1630.421399] RSP: 0018:ffff8805eb1bfda0 EFLAGS: 00010202
Oct 14 16:12:09 lab2 kernel: [ 1630.459109] RAX: ffff8805ed667c20 RBX: ffffffffa02ec038 RCX: 0000000000000000
Oct 14 16:12:09 lab2 kernel: [ 1630.497969] RDX: ffff8805eeb50f00 RSI: ffffffffa02ec040 RDI: ffff8805edfa8a00
Oct 14 16:12:09 lab2 kernel: [ 1630.536978] RBP: ffff8805eb1bfda0 R08: 00007a80fffffff8 R09: fffffff8fffffff8
Oct 14 16:12:09 lab2 kernel: [ 1630.576091] R10: fffffff8fffffff8 R11: 00007a80fffffff8 R12: ffffffffa02ec040
Oct 14 16:12:09 lab2 kernel: [ 1630.614431] R13: ffff8805edfa8a00 R14: ffff8805eb1bfde0 R15: ffffffff814ddf80
Oct 14 16:12:09 lab2 kernel: [ 1630.652751] FS: 0000000000000000(0000) GS:ffff8800df4e0000(0000) knlGS:0000000000000000
Oct 14 16:12:09 lab2 kernel: [ 1630.729573] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
Oct 14 16:12:09 lab2 kernel: [ 1630.769127] CR2: 00007f2fd5e7be3c CR3: 0000000001a03000 CR4: 00000000000006e0
Oct 14 16:12:09 lab2 kernel: [ 1630.809478] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Oct 14 16:12:09 lab2 kernel: [ 1630.848909] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Oct 14 16:12:09 lab2 kernel: [ 1630.887002] Process kworker/u:5 (pid: 230, threadinfo ffff8805eb1be000, task ffff8805eb6044a0)
Oct 14 16:12:09 lab2 kernel: [ 1630.962073] Stack:

For information, I have iptables rules running inside my containers, and they use the netfilter "recent" module, which seems to play a role in the kernel panic. I will try disabling the recent rules inside the containers to see if the problem disappears.
Does anyone have an idea how to fix this?
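In the meantime, what I plan to try inside each container is roughly the following (the SSH rule below is only an illustration, not my actual ruleset):

# iptables-save | grep -- '-m recent'
# iptables -D INPUT -p tcp --dport 22 -m recent --update --seconds 60 --hitcount 4 --name SSH -j DROP

i.e. list every rule that uses the recent match, then delete them one by one and see whether lxc-stop still hangs the host.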

Another question: has anybody implemented iptables inside containers and could give me advice on configuring a LOG chain or rsyslog inside the container, so that the iptables kernel log messages are kept separate from the hypervisor's?
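What I have in mind so far (the chain name, prefix and file name are just examples) is to tag the container's LOG rules with a distinct prefix and then filter on that prefix in rsyslog:

# iptables -N LOGDROP
# iptables -A LOGDROP -m limit --limit 5/min -j LOG --log-prefix "lucid-fw: " --log-level 4
# iptables -A LOGDROP -j DROP

and then, wherever the kernel messages actually end up (host or container), something like this in rsyslog:

:msg, contains, "lucid-fw: " /var/log/iptables-lucid.log
& ~

But since the containers share the host kernel, I am not sure this is the right way to keep the messages out of the hypervisor's logs, hence the question.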

Regards
Tony OGER -- LibrA-LinuX