2.6.33.3: possible recursive locking detected

2010-05-04 Thread CaT
I'm currently running 2.6.33.3 in a KVM instance emulating a core2duo
on 1 cpu with virtio HDs running on top of a core2duo host running 2.6.33.3.
qemu-kvm version 0.12.3. When doing:

echo noop >/sys/block/vdd/queue/scheduler

I got:

[ 1424.438241] =============================================
[ 1424.439588] [ INFO: possible recursive locking detected ]
[ 1424.440368] 2.6.33.3-moocow.20100429-142641 #2
[ 1424.440960] ---------------------------------------------
[ 1424.440960] bash/2186 is trying to acquire lock:
[ 1424.440960]  (s_active){.+}, at: [] sysfs_remove_dir+0x75/0x88
[ 1424.440960] 
[ 1424.440960] but task is already holding lock:
[ 1424.440960]  (s_active){.+}, at: [] sysfs_get_active_two+0x1f/0x46
[ 1424.440960] 
[ 1424.440960] other info that might help us debug this:
[ 1424.440960] 4 locks held by bash/2186:
[ 1424.440960]  #0:  (&buffer->mutex){+.+.+.}, at: [] sysfs_write_file+0x39/0x126
[ 1424.440960]  #1:  (s_active){.+}, at: [] sysfs_get_active_two+0x1f/0x46
[ 1424.440960]  #2:  (s_active){.+}, at: [] sysfs_get_active_two+0x2c/0x46
[ 1424.440960]  #3:  (&q->sysfs_lock){+.+.+.}, at: [] queue_attr_store+0x44/0x85
[ 1424.440960] 
[ 1424.440960] stack backtrace:
[ 1424.440960] Pid: 2186, comm: bash Not tainted 2.6.33.3-moocow.20100429-142641 #2
[ 1424.440960] Call Trace:
[ 1424.440960]  [] __lock_acquire+0xf9f/0x178e
[ 1424.440960]  [] ? save_stack_trace+0x2a/0x48
[ 1424.440960]  [] ? lockdep_init_map+0x9f/0x52f
[ 1424.440960]  [] ? lockdep_init_map+0x9f/0x52f
[ 1424.440960]  [] ? trace_hardirqs_on+0xd/0xf
[ 1424.440960]  [] lock_acquire+0xca/0xef
[ 1424.440960]  [] ? sysfs_remove_dir+0x75/0x88
[ 1424.440960]  [] sysfs_addrm_finish+0xc8/0x13a
[ 1424.440960]  [] ? sysfs_remove_dir+0x75/0x88
[ 1424.440960]  [] ? trace_hardirqs_on_caller+0x110/0x134
[ 1424.440960]  [] sysfs_remove_dir+0x75/0x88
[ 1424.440960]  [] kobject_del+0x16/0x37
[ 1424.440960]  [] elv_iosched_store+0x10a/0x214
[ 1424.440960]  [] queue_attr_store+0x6a/0x85
[ 1424.440960]  [] sysfs_write_file+0xf1/0x126
[ 1424.440960]  [] vfs_write+0xae/0x14a
[ 1424.440960]  [] sys_write+0x47/0x6e
[ 1424.440960]  [] system_call_fastpath+0x16/0x1b

Original scheduler was cfq.
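
For context, the same sysfs file also lists every compiled-in elevator, with the active one in brackets; a quick check looks like this (device name taken from the report, the output line is illustrative only):

  cat /sys/block/vdd/queue/scheduler
  noop deadline [cfq]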

Having rebooted and defaulted to noop, I tried

echo noop >/sys/block/vdd/queue/scheduler

and got:

[  311.294464] =============================================
[  311.295820] [ INFO: possible recursive locking detected ]
[  311.296603] 2.6.33.3-moocow.20100429-142641 #2
[  311.296833] ---------------------------------------------
[  311.296833] bash/2190 is trying to acquire lock:
[  311.296833]  (s_active){.+}, at: [] remove_dir+0x31/0x39
[  311.296833] 
[  311.296833] but task is already holding lock:
[  311.296833]  (s_active){.+}, at: [] sysfs_get_active_two+0x1f/0x46
[  311.296833] 
[  311.296833] other info that might help us debug this:
[  311.296833] 4 locks held by bash/2190:
[  311.296833]  #0:  (&buffer->mutex){+.+.+.}, at: [] sysfs_write_file+0x39/0x126
[  311.296833]  #1:  (s_active){.+}, at: [] sysfs_get_active_two+0x1f/0x46
[  311.296833]  #2:  (s_active){.+}, at: [] sysfs_get_active_two+0x2c/0x46
[  311.296833]  #3:  (&q->sysfs_lock){+.+.+.}, at: [] queue_attr_store+0x44/0x85
[  311.296833] 
[  311.296833] stack backtrace:
[  311.296833] Pid: 2190, comm: bash Not tainted 2.6.33.3-moocow.20100429-142641 #2
[  311.296833] Call Trace:
[  311.296833]  [] __lock_acquire+0xf9f/0x178e
[  311.296833]  [] ? lockdep_init_map+0x9f/0x52f
[  311.296833]  [] ? lockdep_init_map+0x9f/0x52f
[  311.296833]  [] ? trace_hardirqs_on+0xd/0xf
[  311.296833]  [] lock_acquire+0xca/0xef
[  311.296833]  [] ? remove_dir+0x31/0x39
[  311.296833]  [] sysfs_addrm_finish+0xc8/0x13a
[  311.296833]  [] ? remove_dir+0x31/0x39
[  311.296833]  [] ? trace_hardirqs_on_caller+0x110/0x134
[  311.296833]  [] remove_dir+0x31/0x39
[  311.296833]  [] sysfs_remove_dir+0x7d/0x88
[  311.296833]  [] kobject_del+0x16/0x37
[  311.296833]  [] elv_iosched_store+0x10a/0x214
[  311.296833]  [] queue_attr_store+0x6a/0x85
[  311.296833]  [] sysfs_write_file+0xf1/0x126
[  311.296833]  [] vfs_write+0xae/0x14a
[  311.296833]  [] sys_write+0x47/0x6e
[  311.296833]  [] system_call_fastpath+0x16/0x1b

Changing back to noop (or, in the initial case, to cfq) did not
reproduce the message.

This does not happen when the elevator is explicitly set at boot as
part of the kernel's command line. The compiled-in default is cfq.
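
For reference, setting the elevator at boot means passing the elevator= parameter on the kernel command line; the bootloader entry below is only an illustrative sketch, not taken from this report:

  # appended to the kernel line in the bootloader configuration
  kernel /vmlinuz-2.6.33.3 root=/dev/vda1 ro elevator=noop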

-- 
  "A search of his car uncovered pornography, a homemade sex aid, women's 
  stockings and a Jack Russell terrier."
- http://www.news.com.au/story/0%2C27574%2C24675808-421%2C00.html


Re: 2.6.33.3: possible recursive locking detected

2010-05-04 Thread Avi Kivity

On 05/04/2010 10:03 AM, CaT wrote:

I'm currently running 2.6.33.3 in a KVM instance emulating a core2duo
on 1 cpu with virtio HDs running on top of a core2duo host running 2.6.33.3.
qemu-kvm version 0.12.3.


Doesn't appear to be related to kvm.  Copying lkml.


[rest of the quoted report and lockdep traces snipped]



--
error compiling committee.c: too many arguments to function



Re: 2.6.33.3: possible recursive locking detected

2010-05-04 Thread Yong Zhang
On Tue, May 04, 2010 at 11:37:37AM +0300, Avi Kivity wrote:
> On 05/04/2010 10:03 AM, CaT wrote:
> >I'm currently running 2.6.33.3 in a KVM instance emulating a core2duo
> >on 1 cpu with virtio HDs running on top of a core2duo host running 2.6.33.3.
> >qemu-kvm version 0.12.3.

Can you try commit 6992f5334995af474c2b58d010d08bc597f0f2fe in the latest
kernel?
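
A rough sketch of trying that commit on a 2.6.33.y tree (remote and tree layout are assumptions; only the commit id comes from this thread):

  git fetch origin
  git describe --contains 6992f5334995af474c2b58d010d08bc597f0f2fe  # first tag that carries it
  git cherry-pick 6992f5334995af474c2b58d010d08bc597f0f2fe          # attempt the backport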

> [rest of the quoted report and lockdep traces snipped]

Re: 2.6.33.3: possible recursive locking detected

2010-05-04 Thread Américo Wang
On Wed, May 5, 2010 at 10:32 AM, Yong Zhang  wrote:
> On Tue, May 04, 2010 at 11:37:37AM +0300, Avi Kivity wrote:
>> On 05/04/2010 10:03 AM, CaT wrote:
>> >I'm currently running 2.6.33.3 in a KVM instance emulating a core2duo
>> >on 1 cpu with virtio HDs running on top of a core2duo host running 2.6.33.3.
>> >qemu-kvm version 0.12.3.
>
> Can you try commit 6992f5334995af474c2b58d010d08bc597f0f2fe in the latest
> kernel?
>

Hmm, 2.6.33 -stable has commit 846f99749ab68bbc7f75c74fec305de675b1a1bf?

Actually, these 3 commits fixed it:

6992f5334995af474c2b58d010d08bc597f0f2fe sysfs: Use one lockdep class per sysfs attribute.
a2db6842873c8e5a70652f278d469128cb52db70 sysfs: Only take active references on attributes.
e72ceb8ccac5f770b3e696e09bb673dca7024b20 sysfs: Remove sysfs_get/put_active_two

However, there are many other patches needed to amend these, so I think
it's not suitable for -stable to include, perhaps a revert of
846f99749ab68bbc7f75c74fec305de675b1a1bf is better.
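
A minimal sketch of that suggested revert, assuming a checked-out 2.6.33.y -stable tree (the commit id is the one named above):

  git revert 846f99749ab68bbc7f75c74fec305de675b1a1bf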

Adding Greg into Cc.

Thanks.


Re: 2.6.33.3: possible recursive locking detected

2010-05-11 Thread CaT
On Wed, May 05, 2010 at 10:52:50AM +0800, Américo Wang wrote:
> [...]
> However, there are many other patches needed to amend these, so I think
> it's not suitable for -stable to include, perhaps a revert of
> 846f99749ab68bbc7f75c74fec305de675b1a1bf is better.

Slightly at a loss as to what to do, now. It's a virt instance so I can
apply patches at will but, well, clarity is good. :)



Re: 2.6.33.3: possible recursive locking detected

2010-05-11 Thread Greg KH
On Tue, May 11, 2010 at 09:33:50PM +1000, CaT wrote:
> [...]
> Slightly at a loss as to what to do, now. It's a virt instance so I can
> apply patches at will but, well, clarity is good. :)

Just ignore the lockdep warnings as they are bogus, or turn them off, or
use .34-rc7, as they are resolved there.
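
A sketch of the "turn them off" option: this warning comes from lockdep, so a rebuild without CONFIG_PROVE_LOCKING silences it. The scripts/config call below is just one way to flip the option and assumes a configured kernel source tree:

  scripts/config --disable PROVE_LOCKING
  make oldconfig && make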

thanks,

greg k-h


Re: 2.6.33.3: possible recursive locking detected

2010-05-11 Thread Américo Wang
On Tue, May 11, 2010 at 08:03:20AM -0700, Greg KH wrote:
> [...]
>> Slightly at a loss as to what to do, now. It's a virt instance so I can
>> apply patches at will but, well, clarity is good. :)
>
>Just ignore the lockdep warnings as they are bogus, or turn them off, or
>use .34-rc7, as they are resolved there.
>

How about reverting that patch for 2.6.33 stable tree?

Thanks.


Re: 2.6.33.3: possible recursive locking detected

2010-05-12 Thread Greg KH
On Wed, May 12, 2010 at 12:34:20PM +0800, Américo Wang wrote:
> [...]
> 
> How about reverting that patch for 2.6.33 stable tree?

No, as that patch is not reverted in Linus's tree, right?  Just turn off
lockdep if this is bothering you.

thanks,

greg k-h