Commit-ID:  05ffc951392df57edecc2519327b169210c3df75
Gitweb:     http://git.kernel.org/tip/05ffc951392df57edecc2519327b169210c3df75
Author:     Pan Xinhui <xinhui....@linux.vnet.ibm.com>
AuthorDate: Wed, 2 Nov 2016 05:08:30 -0400
Committer:  Ingo Molnar <mi...@kernel.org>
CommitDate: Tue, 22 Nov 2016 12:48:10 +0100

locking/mutex: Break out of expensive busy-loop on {mutex,rwsem}_spin_on_owner() when owner vCPU is preempted

An over-committed guest with more vCPUs than pCPUs spends a lot of time
busy-waiting in the two *_spin_on_owner() loops. This is caused by the
lock holder preemption issue.

Break out of the loop if the owner's vCPU is preempted, i.e. when
vcpu_is_preempted(task_cpu(owner)) returns true.
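
Both spin loops gain the same check; below is a condensed sketch of the
mutex side (the actual hunks are in the diff further down), assuming the
generic vcpu_is_preempted() helper introduced earlier in this series
evaluates to false on architectures without an implementation:

        while (__mutex_owner(lock) == owner) {
                /* Re-check the owner's state on every iteration. */
                barrier();

                /*
                 * Stop spinning if the owner has been scheduled out, if we
                 * need to reschedule, or if the owner's vCPU has been
                 * preempted by the host; spinning on a preempted vCPU only
                 * burns cycles.
                 */
                if (!owner->on_cpu || need_resched() ||
                                vcpu_is_preempted(task_cpu(owner))) {
                        ret = false;
                        break;
                }

                cpu_relax();
        }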

test-case:
perf record -a perf bench sched messaging -g 400 -p && perf report

before patch:
20.68%  sched-messaging  [kernel.vmlinux]  [k] mutex_spin_on_owner
 8.45%  sched-messaging  [kernel.vmlinux]  [k] mutex_unlock
 4.12%  sched-messaging  [kernel.vmlinux]  [k] system_call
 3.01%  sched-messaging  [kernel.vmlinux]  [k] system_call_common
 2.83%  sched-messaging  [kernel.vmlinux]  [k] copypage_power7
 2.64%  sched-messaging  [kernel.vmlinux]  [k] rwsem_spin_on_owner
 2.00%  sched-messaging  [kernel.vmlinux]  [k] osq_lock

after patch:
 9.99%  sched-messaging  [kernel.vmlinux]  [k] mutex_unlock
 5.28%  sched-messaging  [unknown]         [H] 0xc0000000000768e0
 4.27%  sched-messaging  [kernel.vmlinux]  [k] __copy_tofrom_user_power7
 3.77%  sched-messaging  [kernel.vmlinux]  [k] copypage_power7
 3.24%  sched-messaging  [kernel.vmlinux]  [k] _raw_write_lock_irq
 3.02%  sched-messaging  [kernel.vmlinux]  [k] system_call
 2.69%  sched-messaging  [kernel.vmlinux]  [k] wait_consider_task

Tested-by: Juergen Gross <jgr...@suse.com>
Signed-off-by: Pan Xinhui <xinhui....@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Acked-by: Christian Borntraeger <borntrae...@de.ibm.com>
Acked-by: Paolo Bonzini <pbonz...@redhat.com>
Cc: david.lai...@aculab.com
Cc: Linus Torvalds <torva...@linux-foundation.org>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: b...@kernel.crashing.org
Cc: boqun.f...@gmail.com
Cc: bsinghar...@gmail.com
Cc: d...@stgolabs.net
Cc: kernel...@gmail.com
Cc: konrad.w...@oracle.com
Cc: linuxppc-...@lists.ozlabs.org
Cc: m...@ellerman.id.au
Cc: paul...@linux.vnet.ibm.com
Cc: pau...@samba.org
Cc: rkrc...@redhat.com
Cc: virtualizat...@lists.linux-foundation.org
Cc: will.dea...@arm.com
Cc: xen-devel-requ...@lists.xenproject.org
Cc: xen-de...@lists.xenproject.org
Link: http://lkml.kernel.org/r/1478077718-37424-4-git-send-email-xinhui....@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mi...@kernel.org>
---
 kernel/locking/mutex.c      | 13 +++++++++++--
 kernel/locking/rwsem-xadd.c | 14 +++++++++++---
 2 files changed, 22 insertions(+), 5 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index c073168..9b34961 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -364,7 +364,11 @@ bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
                 */
                barrier();
 
-               if (!owner->on_cpu || need_resched()) {
+               /*
+                * Use vcpu_is_preempted() to detect the lock holder preemption issue.
+                */
+               if (!owner->on_cpu || need_resched() ||
+                               vcpu_is_preempted(task_cpu(owner))) {
                        ret = false;
                        break;
                }
@@ -389,8 +393,13 @@ static inline int mutex_can_spin_on_owner(struct mutex *lock)
 
        rcu_read_lock();
        owner = __mutex_owner(lock);
+
+       /*
+        * Due to the lock holder preemption issue, also skip spinning if the
+        * task is not on a CPU or its CPU is preempted.
+        */
        if (owner)
-               retval = owner->on_cpu;
+               retval = owner->on_cpu && !vcpu_is_preempted(task_cpu(owner));
        rcu_read_unlock();
 
        /*
diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index 263e744..6315060 100644
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -336,7 +336,11 @@ static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
                goto done;
        }
 
-       ret = owner->on_cpu;
+       /*
+        * Due to the lock holder preemption issue, also skip spinning if the
+        * task is not on a CPU or its CPU is preempted.
+        */
+       ret = owner->on_cpu && !vcpu_is_preempted(task_cpu(owner));
 done:
        rcu_read_unlock();
        return ret;
@@ -362,8 +366,12 @@ static noinline bool rwsem_spin_on_owner(struct rw_semaphore *sem)
                 */
                barrier();
 
-               /* abort spinning when need_resched or owner is not running */
-               if (!owner->on_cpu || need_resched()) {
+               /*
+                * Abort spinning when need_resched() is set, the owner is not
+                * running, or the owner's CPU is preempted.
+                */
+               if (!owner->on_cpu || need_resched() ||
+                               vcpu_is_preempted(task_cpu(owner))) {
                        rcu_read_unlock();
                        return false;
                }
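
On architectures without a paravirt implementation the new check is
expected to compile away, since the generic vcpu_is_preempted() interface
introduced earlier in this series defaults to false. A minimal sketch of
that fallback, assuming the form used in include/linux/sched.h:

        /*
         * Without architecture support there is no way to tell whether a
         * vCPU has been preempted, so report "not preempted"; the compiler
         * can then drop the extra branch in the spin loops above.
         */
        #ifndef vcpu_is_preempted
        # define vcpu_is_preempted(cpu) false
        #endif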
