queued_spin_lock_slowpath() need not worry about an interrupt changing
node->count behind its back: ->count is only incremented when we enter
and decremented when we leave queued_spin_lock_slowpath(), so any
interrupt that takes the slowpath leaves the counter balanced.

So the extra work this_cpu_dec() does to stay IRQ-safe is pointless
here; let's use this_cpu_ptr() instead, as a small optimization.
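
[Not part of the patch: a minimal user-space sketch of the reasoning
above. The names count and irq_takes_slowpath() are made up for
illustration; the real state is the per-cpu mcs_nodes[] in
kernel/locking/qspinlock.c. It shows why the plain load/modify/store
decrement stays correct even if an interrupt enters the slowpath in the
middle of it: the interrupt's own inc/dec of the counter is balanced.]

	#include <assert.h>
	#include <stdio.h>

	static int count;		/* stands in for mcs_nodes[0].count on one CPU */

	/* An interrupt handler that also takes the slowpath: inc on entry, dec on exit. */
	static void irq_takes_slowpath(void)
	{
		count++;		/* enter slowpath in IRQ context */
		/* ... acquire and release the lock ... */
		count--;		/* leave slowpath, balanced */
	}

	int main(void)
	{
		count++;		/* task context enters the slowpath */
		/* ... lock acquired, now releasing the node ... */

		/*
		 * The plain this_cpu_ptr()->count-- is a load/modify/store;
		 * pretend an interrupt fires right after the load.
		 */
		int tmp = count;	/* load   */
		irq_takes_slowpath();	/* IRQ enters and leaves the slowpath */
		count = tmp - 1;	/* store: still correct, IRQ left count unchanged */

		printf("count = %d\n", count);	/* prints 0 */
		assert(count == 0);
		return 0;
	}

this_cpu_dec(), by contrast, is required to be IRQ-safe, which the
generic fallback achieves by disabling interrupts around the decrement;
that is the extra work this path does not need.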

Signed-off-by: Pan Xinhui <xinhui....@linux.vnet.ibm.com>
---
 kernel/locking/qspinlock.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 99f31e4..2b4daac 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -492,7 +492,7 @@ release:
        /*
         * release the node
         */
-       this_cpu_dec(mcs_nodes[0].count);
+       this_cpu_ptr(&mcs_nodes[0])->count--;
 }
 EXPORT_SYMBOL(queued_spin_lock_slowpath);
 
-- 
1.9.1
