queued_spin_lock_slowpath need not worry about another
queued_spin_lock_slowpath running in interrupt context changing
node->count behind its back, because node->count is restored to the
same value every time we enter and leave queued_spin_lock_slowpath.

On some architectures this_cpu_dec saves and restores the irq flags,
which adds noticeable overhead. Let's use __this_cpu_dec instead.
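The nesting invariant can be sketched in user space as follows (the
kernel names are real, but the simulation itself is hypothetical:
`count` stands in for this CPU's mcs_nodes[0].count, and the recursive
call stands in for an interrupt re-entering the slowpath):

```c
#include <assert.h>

static int count; /* stands in for this CPU's mcs_nodes[0].count */

/* Each entry to the slowpath increments the per-CPU node count and each
 * exit decrements it, so an "interrupting" slowpath nests cleanly: by
 * the time control returns to the interrupted slowpath, count holds the
 * same value it had before the interrupt.
 */
static void slowpath(int depth)
{
	int entry = count;

	count++;                      /* grab an MCS node */
	if (depth)
		slowpath(depth - 1);  /* interrupt re-enters the slowpath */
	count--;                      /* release the node */
	assert(count == entry);       /* back to the entry value */
}
```

Since no other context can observe an unexpected value of count, the
decrement does not need the irq-safe this_cpu_dec; the cheaper
__this_cpu_dec suffices.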

Signed-off-by: Pan Xinhui <xinhui....@linux.vnet.ibm.com>
---
change from v1:
        replace this_cpu_ptr with __this_cpu_dec
---
 kernel/locking/qspinlock.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 99f31e4..9fd1a1e 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -492,7 +492,7 @@ release:
        /*
         * release the node
         */
-       this_cpu_dec(mcs_nodes[0].count);
+       __this_cpu_dec(mcs_nodes[0].count);
 }
 EXPORT_SYMBOL(queued_spin_lock_slowpath);
 
-- 
1.9.1
