loops_per_jiffy can be larger than expected, so the trylock loop
in __spin_lock_debug() may spin for much longer than intended and
one thread can fail to obtain the spin lock for a long time. Bound
the loop with cpu_clock() so it times out after one second, which
avoids a HARD LOCKUP.

Signed-off-by: Chuansheng Liu <chuansheng....@intel.com>
Signed-off-by: xiaoming wang <xiaoming.w...@intel.com>
---
 kernel/locking/spinlock_debug.c |    8 +++++++-
 1 files changed, 7 insertions(+), 1 deletions(-)

diff --git a/kernel/locking/spinlock_debug.c b/kernel/locking/spinlock_debug.c
index 0374a59..5d3c4f3 100644
--- a/kernel/locking/spinlock_debug.c
+++ b/kernel/locking/spinlock_debug.c
@@ -105,13 +105,19 @@ static inline void debug_spin_unlock(raw_spinlock_t *lock)
 
 static void __spin_lock_debug(raw_spinlock_t *lock)
 {
-       u64 i;
+       u64 i, t;
        u64 loops = loops_per_jiffy * HZ;
+       u64 one_second = 1000000000;
+       u32 this_cpu = raw_smp_processor_id();
+
+       t = cpu_clock(this_cpu);
 
        for (i = 0; i < loops; i++) {
                if (arch_spin_trylock(&lock->raw_lock))
                        return;
                __delay(1);
+               if (cpu_clock(this_cpu) - t > one_second)
+                       break;
        }
        /* lockup suspected: */
        spin_dump(lock, "lockup suspected");
-- 
1.7.1

