Good idea - but I will check something else :)
I will do:
mkfs.ocfs2 -N 2 -L MAIL --fs-feature-level=max-features /dev/dm-0

and I will mount it on only one server, do read/write/delete, and we will see
what happens :-)
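
Something like this single-node variant of my terror2.sh (the TEST directory
name is just an example; the /mnt/EMC mount point is the same as before):

#!/bin/bash
# single-node stress test: write, read back and delete in a loop
while true
do
rm -rf /mnt/EMC/TEST
mkdir /mnt/EMC/TEST
cp -r /usr /mnt/EMC/TEST          # write
diff -r /usr /mnt/EMC/TEST/usr    # read back and compare
rm -rf /mnt/EMC/TEST              # delete
done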



-----Original Message-----
From: Eduardo Diaz - Gmail
Sent: Monday, December 19, 2011 10:40 AM
To: Marek Królikowski
Cc: ocfs2-users@oss.oracle.com
Subject: Re: [Ocfs2-users] ocfs2 - Kernel panic on many write/read from both servers

If this does not work, it may be a problem with the shared access...

Try creating a different filesystem - xfs, for example - and rerun the test
(with only one node)..
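
Something like this, for example (assuming the same /dev/dm-0 device and a
/mnt/EMC mount point - the mkfs destroys the data on the device):

mkfs.xfs -f -L MAIL /dev/dm-0                     # recreate as xfs (destructive!)
mount /dev/dm-0 /mnt/EMC                          # mount on one node only
cp -r /usr /mnt/EMC/TEST && rm -rf /mnt/EMC/TEST  # repeat the same copy/delete test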

regards!

2011/12/18 Marek Królikowski <ad...@wset.edu.pl>:
> I just checked my old posts here and I used:
> mkfs.ocfs2 -N 2 -L MAIL /dev/dm-0
> but the same effect...
>
>
>
> -----Original Message-----
> From: Eduardo Diaz - Gmail
> Sent: Sunday, December 18, 2011 8:37 PM
>
> To: Marek Królikowski
> Cc: ocfs2-users@oss.oracle.com
> Subject: Re: [Ocfs2-users] ocfs2 - Kernel panic on many write/read from
> both servers
>
> you must use only the features that you need..
>
> and the nodes? if you have 2 nodes, use -N 2
>
> regards!
>
> 2011/12/18 Marek Królikowski <ad...@wset.edu.pl>:
>>
>> Hey
>> Why max-features? Because I read in the documentation to use it. As for the
>> number of nodes, I gave 2 and it didn't help - that's why I don't give this
>> option now; the default is 16 ;-)
>>
>>
>>
>> -----Original Message-----
>> From: Eduardo Diaz - Gmail
>> Sent: Sunday, December 18, 2011 7:12 PM
>> To: Marek Krolikowski
>> Cc: ocfs2-users@oss.oracle.com
>> Subject: Re: [Ocfs2-users] ocfs2 - Kernel panic on many write/read from
>> both servers
>>
>> Why use max-features? Use -N number_of_nodes ..
>>
>> regards!
>>
>> On Sat, Dec 17, 2011 at 4:30 PM, Marek Krolikowski <ad...@wset.edu.pl>
>> wrote:
>>>
>>> This is an empty filesystem.
>>> I just created it and copy/delete files from the local hdd (/usr) to the EMC
>>> storage with OCFS2.
>>> I created it via the command:
>>> mkfs.ocfs2 -L NEW-PLK --fs-feature-level=max-features /dev/dm-0
>>>
>>> and I run these scripts on both servers:
>>> MAIL1# cat terror2.sh
>>> #!/bin/bash
>>> while true
>>> do
>>> rm -rf /mnt/EMC/MAIL1
>>> mkdir /mnt/EMC/MAIL1
>>> cp -r /usr /mnt/EMC/MAIL1
>>> rm -rf /mnt/EMC/MAIL1
>>> done;
>>>
>>> MAIL2# cat terror2.sh
>>> #!/bin/bash
>>> while true
>>> do
>>> rm -rf /mnt/EMC/MAIL2
>>> mkdir /mnt/EMC/MAIL2
>>> cp -r /usr /mnt/EMC/MAIL2
>>> rm -rf /mnt/EMC/MAIL2
>>> done;
>>>
>>> Thanks for help
>>>
>>>
>>> -----Original Message-----
>>> From: Eduardo Diaz - Gmail
>>> Sent: Saturday, December 17, 2011 2:21 PM
>>> To: Marek Królikowski
>>> Cc: ocfs2-users@oss.oracle.com
>>> Subject: Re: [Ocfs2-users] ocfs2 - Kernel panic on many write/read from
>>> both servers
>>>
>>> Save your data, recreate the filesystem, and restore??
>>> regards!
>>>
>>> 2011/12/17 Marek Królikowski <ad...@wset.edu.pl>:
>>>>
>>>> I just checked what you said and downloaded and compiled:
>>>>
>>>> http://public-yum.oracle.com/repo/OracleLinux/OL6/2/base/x86_64/kernel-uek-2.6.32-300.3.1.el6uek.src.rpm
>>>> with the x86_64 config file from this RPM,
>>>> but got almost the same effect:
>>>> INFO: task rm:32379 blocked for more than 120 seconds.
>>>> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this
>>>> message.
>>>> rm            D 000000010099fb65     0 32379  16422 0x00000000
>>>> ffff880252d31bc8 0000000000000082 ffff880252d31b18 ffffffff81450ce3
>>>> ffff880252d31b48 ffff880252d2e400 0000000000014c00 ffff880252d31fd8
>>>> 0000000000014c00 0000000000014c00 0000000000014c00 ffff880252d31fd8
>>>> Call Trace:
>>>> [<ffffffff81450ce3>] ? _spin_unlock_irqrestore+0x17/0x19
>>>> [<ffffffffa086fd73>] ? ocfs2_read_blocks+0x67e/0x760 [ocfs2]
>>>> [<ffffffff8144fedf>] __mutex_lock_common.clone.3+0x139/0x1a0
>>>> [<ffffffffa08bed15>] ? ocfs2_get_system_file_inode+0x5d/0x1c0 [ocfs2]
>>>> [<ffffffff8144ff59>] __mutex_lock_slowpath+0x13/0x15
>>>> [<ffffffff8144fd8c>] mutex_lock+0x23/0x3d
>>>> [<ffffffffa089b4f6>] ocfs2_lookup_lock_orphan_dir+0xa8/0x168 [ocfs2]
>>>> [<ffffffff8108872d>] ? __raw_local_irq_save+0x1d/0x23
>>>> [<ffffffffa089b9e5>] ocfs2_prepare_orphan_dir+0x3d/0x1fb [ocfs2]
>>>> [<ffffffffa089c54a>] ocfs2_unlink+0x544/0xaa9 [ocfs2]
>>>> [<ffffffff81044abc>] ? need_resched+0x23/0x2d
>>>> [<ffffffff81124903>] vfs_unlink+0x7a/0xb7
>>>> [<ffffffff81125436>] do_unlinkat+0xd1/0x15f
>>>> [<ffffffff8111a2d3>] ? fput+0x26/0x2b
>>>> [<ffffffff81013ac4>] ? math_state_restore+0x52/0x57
>>>> [<ffffffff81126656>] sys_unlinkat+0x29/0x2b
>>>> [<ffffffff81011db2>] system_call_fastpath+0x16/0x1b
>>>>
>>>> Have a nice weekend.
>>>>
>>>> -----Original Message-----
>>>> From: Kushnir, Michael (NIH/NLM/LHC) [C]
>>>> Sent: Friday, December 16, 2011 5:34 PM
>>>> To: Marek Królikowski ; ocfs2-users@oss.oracle.com
>>>> Subject: RE: [Ocfs2-users] ocfs2 - Kernel panic on many write/read from
>>>> both servers
>>>>
>>>> I was under the impression that OCFS2 1.6 only works with UEK...
>>>>
>>>> Thanks,
>>>> Michael
>>>>
>>>>
>>>>
>>>> -----Original Message-----
>>>> From: Marek Królikowski [mailto:ad...@wset.edu.pl]
>>>> Sent: Friday, December 16, 2011 11:27 AM
>>>> To: ocfs2-users@oss.oracle.com
>>>> Subject: Re: [Ocfs2-users] ocfs2 - Kernel panic on many write/read from
>>>> both
>>>> servers
>>>>
>>>> Everything on the server is new and that is the problem - on the old servers
>>>> OCFS2 works fine, so the new version of ocfs2 or the kernel has a BUG.
>>>> Check kernel 3.1.X and ocfs2-tools 1.6.X and you will see you get a kernel
>>>> panic. I checked forums and many people get this same effect on new
>>>> instances of OCFS2 clusters..
>>>>
>>>> -----Original Message-----
>>>> From: Eduardo Diaz - Gmail
>>>> Sent: Friday, December 16, 2011 4:34 PM
>>>> To: Marek Królikowski
>>>> Cc: ocfs2-users@oss.oracle.com
>>>> Subject: Re: [Ocfs2-users] ocfs2 - Kernel panic on many write/read from
>>>> both
>>>> servers
>>>>
>>>> My recommendation is to upgrade everything and recreate the filesystem in
>>>> the cluster..
>>>>
>>>> If you need professional help, please get a support contract or hire a
>>>> professional... ocfs2 is very hard to use if you don't know how to use it..
>>>>
>>>> the list is for help, or not, but it is free..
>>>>
>>>> 2011/12/15 Marek Królikowski <ad...@wset.edu.pl>:
>>>>>
>>>>> Can anyone help me with this?
>>>>>
>>>>>
>>>>>
>>>>> -----Original Message-----
>>>>> From: Marek Królikowski
>>>>> Sent: Sunday, December 04, 2011 11:15 AM
>>>>> To: ocfs2-users@oss.oracle.com
>>>>> Subject: ocfs2 - Kernel panic on many write/read from both servers
>>>>>
>>>>> I ran write/read file tests on ocfs2 from both
>>>>> servers all night, something like this:
>>>>> On MAIL1 server:
>>>>> #!/bin/bash
>>>>> while true
>>>>> do
>>>>> rm -rf /mnt/EMC/MAIL1
>>>>> mkdir /mnt/EMC/MAIL1
>>>>> cp -r /usr /mnt/EMC/MAIL1
>>>>> rm -rf /mnt/EMC/MAIL1
>>>>> done;
>>>>> On MAIL2 server:
>>>>> #!/bin/bash
>>>>> while true
>>>>> do
>>>>> rm -rf /mnt/EMC/MAIL2
>>>>> mkdir /mnt/EMC/MAIL2
>>>>> cp -r /usr /mnt/EMC/MAIL2
>>>>> rm -rf /mnt/EMC/MAIL2
>>>>> done;
>>>>>
>>>>> Today I checked the logs and saw:
>>>>> o2dlm: Node 1 joins domain EAC7942B71964050AE2046D3F0CDD7B2
>>>>> o2dlm: Nodes in domain EAC7942B71964050AE2046D3F0CDD7B2: 0 1
>>>>> (rm,26136,0):ocfs2_unlink:953 ERROR: status = -2
>>>>> (touch,26137,0):ocfs2_check_dir_for_entry:2120 ERROR: status = -17
>>>>> (touch,26137,0):ocfs2_mknod:461 ERROR: status = -17
>>>>> (touch,26137,0):ocfs2_create:631 ERROR: status = -17
>>>>> (rm,26142,0):ocfs2_unlink:953 ERROR: status = -2
>>>>> INFO: task kworker/u:2:20246 blocked for more than 120 seconds.
>>>>> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this
>>>>> message.
>>>>> kworker/u:2     D ffff88107f4525c0     0 20246      2 0x00000000
>>>>> ffff880b730b57d0 0000000000000046 ffff8810201297d0 00000000000125c0
>>>>> ffff880f5a399fd8 00000000000125c0 00000000000125c0 00000000000125c0
>>>>> ffff880f5a398000 00000000000125c0 ffff880f5a399fd8 00000000000125c0
>>>>> Call Trace:
>>>>> [<ffffffff81481b71>] ? __mutex_lock_slowpath+0xd1/0x140
>>>>> [<ffffffff814818d3>] ? mutex_lock+0x23/0x40
>>>>> [<ffffffffa0937d95>] ? ocfs2_wipe_inode+0x105/0x690 [ocfs2]
>>>>> [<ffffffffa0935cfb>] ? ocfs2_query_inode_wipe.clone.9+0xcb/0x370 [ocfs2]
>>>>> [<ffffffffa09385a4>] ? ocfs2_delete_inode+0x284/0x3f0 [ocfs2]
>>>>> [<ffffffffa0919a10>] ? ocfs2_dentry_attach_lock+0x5a0/0x5a0 [ocfs2]
>>>>> [<ffffffffa093872e>] ? ocfs2_evict_inode+0x1e/0x50 [ocfs2]
>>>>> [<ffffffff81145900>] ? evict+0x70/0x140
>>>>> [<ffffffffa0919322>] ? __ocfs2_drop_dl_inodes.clone.2+0x32/0x60 [ocfs2]
>>>>> [<ffffffffa0919a39>] ? ocfs2_drop_dl_inodes+0x29/0x90 [ocfs2]
>>>>> [<ffffffff8106e56f>] ? process_one_work+0x11f/0x440
>>>>> [<ffffffff8106f279>] ? worker_thread+0x159/0x330
>>>>> [<ffffffff8106f120>] ? manage_workers.clone.21+0x120/0x120
>>>>> [<ffffffff8106f120>] ? manage_workers.clone.21+0x120/0x120
>>>>> [<ffffffff81073fa6>] ? kthread+0x96/0xa0
>>>>> [<ffffffff8148bb24>] ? kernel_thread_helper+0x4/0x10
>>>>> [<ffffffff81073f10>] ? kthread_worker_fn+0x1a0/0x1a0
>>>>> [<ffffffff8148bb20>] ? gs_change+0x13/0x13
>>>>> INFO: task rm:5192 blocked for more than 120 seconds.
>>>>> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this
>>>>> message.
>>>>> rm              D ffff88107f2725c0     0  5192  16338 0x00000000
>>>>> ffff881014ccb040 0000000000000082 ffff8810206b8040 00000000000125c0
>>>>> ffff8804d7697fd8 00000000000125c0 00000000000125c0 00000000000125c0
>>>>> ffff8804d7696000 00000000000125c0 ffff8804d7697fd8 00000000000125c0
>>>>> Call Trace:
>>>>> [<ffffffff8148148d>] ? schedule_timeout+0x1ed/0x2e0
>>>>> [<ffffffffa0886162>] ? dlmconvert_master+0xe2/0x190 [ocfs2_dlm]
>>>>> [<ffffffffa08878bf>] ? dlmlock+0x7f/0xb70 [ocfs2_dlm]
>>>>> [<ffffffff81480e0a>] ? wait_for_common+0x13a/0x190
>>>>> [<ffffffff8104bc50>] ? try_to_wake_up+0x280/0x280
>>>>> [<ffffffffa0928a38>] ? __ocfs2_cluster_lock.clone.21+0x1d8/0x6b0 [ocfs2]
>>>>> [<ffffffffa0928fcc>] ? ocfs2_inode_lock_full_nested+0xbc/0x490 [ocfs2]
>>>>> [<ffffffffa0943c1b>] ? ocfs2_lookup_lock_orphan_dir+0x6b/0x1b0 [ocfs2]
>>>>> [<ffffffffa09454ba>] ? ocfs2_prepare_orphan_dir+0x4a/0x280 [ocfs2]
>>>>> [<ffffffffa094616f>] ? ocfs2_unlink+0x6ef/0xb90 [ocfs2]
>>>>> [<ffffffff811b35a9>] ? may_link.clone.22+0xd9/0x170
>>>>> [<ffffffff8113aa58>] ? vfs_unlink+0x98/0x100
>>>>> [<ffffffff8113ac41>] ? do_unlinkat+0x181/0x1b0
>>>>> [<ffffffff8113e7cd>] ? vfs_readdir+0x9d/0xe0
>>>>> [<ffffffff811653d8>] ? fsnotify_find_inode_mark+0x28/0x40
>>>>> [<ffffffff81166324>] ? dnotify_flush+0x54/0x110
>>>>> [<ffffffff8112b07f>] ? filp_close+0x5f/0x90
>>>>> [<ffffffff8148aa12>] ? system_call_fastpath+0x16/0x1b
>>>>> [the same two hung-task traces for kworker/u:2:20246 and rm:5192 repeat
>>>>> four more times in the log]
>>>>>
>>>>>
>>>
>>>
>>>
>>
>>
>>
>
> 


_______________________________________________
Ocfs2-users mailing list
Ocfs2-users@oss.oracle.com
http://oss.oracle.com/mailman/listinfo/ocfs2-users
