From: Thomas Gleixner <[email protected]>

CONFIG_PREEMPT_COUNT is now unconditionally enabled, and the config switch
will be removed. Clean up the leftovers before doing so.

Signed-off-by: Thomas Gleixner <[email protected]>
Acked-by: Will Deacon <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
---
 include/linux/bit_spinlock.h | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/include/linux/bit_spinlock.h b/include/linux/bit_spinlock.h
index bbc4730..1e03d54 100644
--- a/include/linux/bit_spinlock.h
+++ b/include/linux/bit_spinlock.h
@@ -90,10 +90,8 @@ static inline int bit_spin_is_locked(int bitnum, unsigned long *addr)
 {
 #if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
        return test_bit(bitnum, addr);
-#elif defined CONFIG_PREEMPT_COUNT
-       return preempt_count();
 #else
-       return 1;
+       return preempt_count();
 #endif
 }
 
-- 
2.9.5
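
For reference, a sketch of bit_spin_is_locked() as it reads with this hunk
applied, reconstructed from the context lines above; the comments are added
here for illustration only and are not part of the patch:

static inline int bit_spin_is_locked(int bitnum, unsigned long *addr)
{
#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
	/* SMP or lock debugging: the lock state is carried in the bit itself. */
	return test_bit(bitnum, addr);
#else
	/*
	 * UP without debugging: bit_spin_lock() only disables preemption
	 * and never sets the bit, so a non-zero preempt_count() is the
	 * closest available indication that the lock is held.
	 */
	return preempt_count();
#endif
}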
