Your message dated Tue, 4 Jun 2013 00:29:54 +0200
with message-id <20130603222954.GD5604@pisco.westfalen.local>
and subject line Closing
has caused the Debian Bug report #516376,
regarding nfs problem?
to be marked as done.

This means that you claim that the problem has been dealt with.
If this is not the case it is now your responsibility to reopen the
Bug report if necessary, and/or fix the problem forthwith.

(NB: If you are a system administrator and have no idea what this
message is talking about, this may indicate a serious mail system
misconfiguration somewhere. Please contact ow...@bugs.debian.org
immediately.)


-- 
516376: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=516376
Debian Bug Tracking System
Contact ow...@bugs.debian.org with problems
--- Begin Message ---
Package: linux-image-2.6.26-2-amd64
Version: 2.6.26-15lenny3

I am doing a stress test of my HA fileserver, which is based on Lenny and
runs on two hosts. It uses drbd (protocol C), nfs-kernel-server and
heartbeat. The exported partition is formatted with XFS. /etc/exports
contains:

        /data 172.22.96.0/23(rw,async,no_root_squash,no_subtree_check)

/var/lib/nfs also lives on a cluster file system shared between both hosts.
/etc/init.d/nfs-kernel-server had to be patched to make sure the NFS
daemons terminate (see http://bugs.debian.org/532276).
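
Failover is handled by heartbeat; with a plain v1 (haresources) setup the
resource line has roughly the shape shown below. The drbd resource name,
device and service IP here are only placeholders, not the exact values from
my cluster:

        # /etc/ha.d/haresources -- placeholder names and addresses
        host1 drbddisk::r0 Filesystem::/dev/drbd0::/data::xfs IPaddr::172.22.96.10/23 nfs-kernel-server

On failover, heartbeat stops these resources on one node and starts them on
the other; that is exactly the transition the loop below exercises.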

During the test I run the following loop on host1:

        while true; do \
                /etc/init.d/heartbeat stop; \
                sleep 120; \
                /etc/init.d/heartbeat start; \
                sleep 120; \
                ssh host2 /etc/init.d/heartbeat stop; \
                sleep 120; \
                ssh host2 /etc/init.d/heartbeat start; \
                sleep 120; \
        done

The exported partition was mounted over NFS on a Linux client (openSUSE). On
that host I ran:

        cd /mnt
        for i in $(seq -w 0 99); do \
                date; \
                mkdir -p $i; \
                pushd $i; \
                tar xpf /var/tmp/linux-2.6.29.4.tar.bz2; \
                popd; \
        done
        sync
        date

Of course both loops were running in parallel. After some time host1
showed:

[ 8083.250212] ------------[ cut here ]------------
[ 8083.273156] kernel BUG at include/linux/module.h:386!
[ 8083.293976] invalid opcode: 0000 [1] SMP
[ 8083.293976] CPU 1
[ 8083.293976] Modules linked in: nfsd auth_rpcgss exportfs nfs lockd nfs_acl 
sunrpc xfs sha256_generic drbd cn battery bonding loop snd_pcm snd_timer snd 
soundcore snd_page_alloc psmouse serio_raw pcspkr k8temp i2c_piix4 i2c_core 
shpchp button pci_hotplug joydev evdev reiserfs usbhid hid ff_memless raid10 
usb_storage raid456 async_xor async_memcpy async_tx xor raid1 raid0 multipath 
linear md_mod ide_pci_generic sd_mod serverworks ide_core ehci_hcd ohci_hcd 
ata_generic sata_svw libata scsi_mod e1000 dock thermal processor fan 
thermal_sys
[ 8083.477886] Pid: 11547, comm: mount Not tainted 2.6.26-2-amd64 #1
[ 8083.477886] RIP: 0010:[<ffffffff802ae91d>]  [<ffffffff802ae91d>] 
get_filesystem+0x16/0x32
[ 8083.477886] RSP: 0018:ffff81005247bd08  EFLAGS: 00010246
[ 8083.477886] RAX: 0000000000000000 RBX: ffffffffa032cd00 RCX: ffffffffa032c810
[ 8083.477886] RDX: 0000000000001000 RSI: ffffffffa0319168 RDI: ffffffffa032cd00
[ 8083.477886] RBP: ffff81007e899800 R08: ffff81007e899a68 R09: ffff81005247bc58
[ 8083.477886] R10: ffff8100748ed000 R11: ffffffff802f1fa2 R12: 0000000000000000
[ 8083.477886] R13: ffffffffa032c7e0 R14: 0000000000000000 R15: ffffffff8029c130
[ 8083.477886] FS:  00007ffeefcf87c0(0000) GS:ffff81007fb799c0(0000) 
knlGS:0000000000000000
[ 8083.477886] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[ 8083.477886] CR2: 000000000071c000 CR3: 000000007e0c4000 CR4: 00000000000006e0
[ 8083.477886] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 8083.477886] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[ 8083.477886] Process mount (pid: 11547, threadinfo ffff81005247a000, task 
ffff81007f11cb50)
[ 8083.477886] Stack:  ffff81007e899800 ffffffff8029c6d3 ffffffff8029cff2 
ffff81007e15a780
[ 8083.477886]  0000000000000000 0000000000000000 0000000000000000 
ffffffffa031276a
[ 8083.477886]  ffff81007e15a780 ffffffff8029d2a0 ffff81007e15a780 
ffff81007e15a780
[ 8083.477886] Call Trace:
[ 8083.477886]  [<ffffffff8029c6d3>] ? sget+0x37b/0x38a
[ 8083.477886]  [<ffffffff8029cff2>] ? set_anon_super+0x0/0xa3
[ 8083.477886]  [<ffffffffa031276a>] ? :sunrpc:rpc_fill_super+0x0/0x9b
[ 8083.477886]  [<ffffffff8029d2a0>] ? get_sb_single+0x2f/0xb3
[ 8083.477886]  [<ffffffff8029cbb8>] ? vfs_kern_mount+0x93/0x11b
[ 8083.477886]  [<ffffffff8029cc93>] ? do_kern_mount+0x43/0xdc
[ 8083.477886]  [<ffffffff802b16a5>] ? do_new_mount+0x5b/0x95
[ 8083.477886]  [<ffffffff802b189c>] ? do_mount+0x1bd/0x1e7
[ 8083.477886]  [<ffffffff80276894>] ? __alloc_pages_internal+0xd6/0x3bf
[ 8083.477886]  [<ffffffff802b1950>] ? sys_mount+0x8a/0xce
[ 8083.477886]  [<ffffffff8020beca>] ? system_call_after_swapgs+0x8a/0x8f
[ 8083.477886]
[ 8083.477886]
[ 8083.477886] Code: 8a 76 25 00 eb 05 bb ea ff ff ff 59 5e 48 63 c3 5b 5d 41 
5c c3 53 48 8b 5f 20 48 85 db 74 26 48 89 df e8 f6 39 fa ff 85 c0 75 04 <0f> 0b 
eb fe 65 8b 04 25 24 00 00 00 89 c0 48 c1 e0 07 48 ff 84
[ 8083.477886] RIP  [<ffffffff802ae91d>] get_filesystem+0x16/0x32
[ 8083.477886]  RSP <ffff81005247bd08>
[ 8084.545943] Clocksource tsc unstable (delta = 4398045366700 ns)
[ 8084.617881] ---[ end trace e0b11d1b0dd348cc ]---



I saw this problem several times.


Regards

Harri



--- End Message ---
--- Begin Message ---
Hi,
your bug has been filed against the "linux-2.6" source package and was filed
for a kernel older than the recently released Debian 7.0 / Wheezy, with a
severity less than important.

We don't have the resources to reproduce the complete backlog of all older
kernel bugs, so we're closing this bug for now. If you can reproduce the bug
with Debian Wheezy or a more recent kernel from testing or unstable, please
reopen the bug by sending a mail to cont...@bugs.debian.org with the
following three commands included in the mail:

reopen BUGNUMBER
reassign BUGNUMBER src:linux
thanks

Cheers,
        Moritz

--- End Message ---
