------- Comment #4 from olh at suse dot de 2005-11-03 12:58 ------- What I have found so far with even more debugging:
<4>Processor 1 found.
<7>schedule(2883) swapper(0):c1,j4294892318 c0000000142ae100
<6>Brought up 2 CPUs
<7>_spin_lock_irq(85) swapper(0):c1,j4294892318 l c0000000142ae100 c c0000000004db98c
<7>schedule(2996) swapper(0):c1,j4294892318 p c00000000ffd7040 n c00000000ffd2040 r c0000000142ae100
<7>schedule(3002) swapper(0):c1,j4294892318 p c00000000ffd7040 n c00000000ffd2040 r c0000000142ae100
<7>schedule(3029) swapper(1):c0,j4294892318 p c00000000ffd7040 n c00000000ffd7810 r c0000000142a6100 t c0000000142a6100
<7>finish_task_switch(1539) swapper(1):c0,j4294892318 r c0000000142a6100 p c00000000ffd7040
<7>finish_lock_switch(297) swapper(1):c0,j4294892318
<7>_spin_unlock_irq(292) migration/1(5):c1,j4294892318 l c0000000142a6100 c c0000000004dbd24
<4>BUG: spinlock already unlocked on CPU#1, migration/1/5

schedule() at line 2883 takes the rq at c0000000142ae100 on cpu1 and spin-locks it. Later, at line 3029, execution has moved to cpu0 and rq has become c0000000142a6100; this is after context_switch(), barrier(), finish_task_switch() and finish_lock_switch(). For some reason the _spin_unlock_irq() call from finish_lock_switch() happens on cpu1? But that by itself should be no real problem; the real question is why rq changed from c0000000142ae100 to c0000000142a6100 between lines 3002 and 3029.

-- 
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=24644
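For reference, the rq change between lines 3002 and 3029 resembles the deliberate re-evaluation in the tail of schedule() in kernel/sched.c. A sketch of that 2.6-era sequence, reconstructed from memory (exact shape is an assumption, not the code under test here):

/* sketch of the end of schedule() in 2.6-era kernel/sched.c;
 * reconstructed from memory, treat details as assumptions */
prev = context_switch(rq, prev, next);	/* unlocks the rq */
barrier();
/*
 * this_rq must be evaluated again because prev may have moved
 * CPUs since it called schedule(), so the 'rq' on its stack
 * frame may no longer be this CPU's runqueue.
 */
finish_task_switch(this_rq(), prev);

If that pattern applies to this kernel, rq being different after the switch would be expected when the previous task resumes on another CPU; whether the mismatched lock/unlock addresses seen in the log follow legitimately from that, or from miscompiled code, is exactly what remains to be determined.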