etter
Sent: May 21, 2019, 0:28
To: Pan, Xinhui
Cc: Deucher, Alexander; Koenig, Christian; Zhou, David(ChunMing);
airl...@linux.ie; dan...@ffwll.ch; Quan, Evan; xiaolinkui;
amd-...@lists.freedesktop.org; dri-de...@lists.freedesktop.org;
linux-kernel@vger.kernel.org
Subject: Re: [PATCH] gpu: drm
% Change
--- ---
4 4053.3 Mop/s 4223.7 Mop/s +4.2%
8 3310.4 Mop/s 3406.0 Mop/s +2.9%
12 2576.4 Mop/s 2674.6 Mop/s +3.8%
Signed-off-by: Waiman Long
---
Works on my side :)
Reviewed-by: Pan Xinhui
v4->v5:
- Correct some grammati
On 2017/2/17 14:05, Michael Ellerman wrote:
Pan Xinhui writes:
diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
index 9c0e17c..f6e5c3d 100644
--- a/arch/powerpc/xmon/xmon.c
+++ b/arch/powerpc/xmon/xmon.c
@@ -76,6 +76,7 @@ static int xmon_gate;
#endif /* CONFIG_SMP */
static
On 2017/2/16 18:57, Guilherme G. Piccoli wrote:
On 16/02/2017 03:09, Michael Ellerman wrote:
Pan Xinhui writes:
Once xmon is triggered by sysrq-x, it always stays enabled afterwards, even
if it was disabled during boot. This will cause a system reset interrupt to
fail to dump. So keep xmon in its
Once xmon is triggered by sysrq-x, it always stays enabled afterwards, even
if it was disabled during boot. This will cause a system reset interrupt to
fail to dump. So keep xmon in its original state after exit.
Signed-off-by: Pan Xinhui
---
arch/powerpc/xmon/xmon.c | 5 -
1 file changed, 4
'x|X' exit commands. Turn xmon off if 'z'
follows.
Signed-off-by: Pan Xinhui
---
arch/powerpc/xmon/xmon.c | 12 +---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
index 9c0e17c..2f4e7b1 100644
On 2017/2/8 14:09, Boqun Feng wrote:
On Wed, Feb 08, 2017 at 12:05:40PM +0800, Boqun Feng wrote:
On Wed, Feb 08, 2017 at 11:39:10AM +0800, Xinhui Pan wrote:
2016-12-26 4:26 GMT+08:00 Waiman Long :
A number of cmpxchg calls in qspinlock_paravirt.h were replaced by more
relaxed versions to improv
On 2017/2/8 14:09, Boqun Feng wrote:
On Wed, Feb 08, 2017 at 12:05:40PM +0800, Boqun Feng wrote:
On Wed, Feb 08, 2017 at 11:39:10AM +0800, Xinhui Pan wrote:
2016-12-26 4:26 GMT+08:00 Waiman Long :
A number of cmpxchg calls in qspinlock_paravirt.h were replaced by more
relaxed versions to improv
On 2017/2/8 14:09, Boqun Feng wrote:
On Wed, Feb 08, 2017 at 12:05:40PM +0800, Boqun Feng wrote:
On Wed, Feb 08, 2017 at 11:39:10AM +0800, Xinhui Pan wrote:
2016-12-26 4:26 GMT+08:00 Waiman Long :
A number of cmpxchg calls in qspinlock_paravirt.h were replaced by more
relaxed versions to improv
Commit-ID: 75437bb304b20a2b350b9a8e9f9238d5e24e12ba
Gitweb: http://git.kernel.org/tip/75437bb304b20a2b350b9a8e9f9238d5e24e12ba
Author: Pan Xinhui
AuthorDate: Tue, 10 Jan 2017 02:56:46 -0500
Committer: Ingo Molnar
CommitDate: Thu, 12 Jan 2017 09:35:57 +0100
locking/pvqspinlock: Don
If the prev node is not in running state or its vCPU is preempted, we can give
up our vCPU slices ASAP in pv_wait_node. After commit d9345c65eb79
("sched/core: Introduce the vcpu_is_preempted(cpu) interface") the kernel
knows whether a vCPU is running or not.
Signed-off-by: Pan Xinh
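The idea in the message above, giving up the vCPU's time slice instead of spinning behind a preempted waiter, can be illustrated with a small self-contained C sketch; the predicate below is only a stub standing in for the kernel's vcpu_is_preempted(), and all names are illustrative rather than taken from the qspinlock code.

#include <stdbool.h>
#include <stdio.h>

/* Stub for the kernel's vcpu_is_preempted(cpu); always "not preempted"
 * here so the demo terminates via the spin budget. */
static bool vcpu_is_preempted_stub(int cpu)
{
	(void)cpu;
	return false;
}

/* Spin for a bounded number of iterations waiting for *locked to become
 * nonzero, but bail out early if the previous waiter's vCPU is preempted;
 * spinning behind a descheduled vCPU only burns our own time slice. */
static bool spin_or_bail(volatile int *locked, int prev_cpu, long budget)
{
	while (budget--) {
		if (*locked)
			return true;	/* got the hand-off */
		if (vcpu_is_preempted_stub(prev_cpu))
			return false;	/* give up the slice (pv_wait in the kernel) */
	}
	return false;			/* spin budget exhausted */
}

int main(void)
{
	volatile int locked = 1;

	printf("acquired: %d\n", spin_or_bail(&locked, 0, 1000));
	return 0;
}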
On 2017/1/4 17:41, Peter Zijlstra wrote:
On Tue, Jan 03, 2017 at 05:07:54PM -0500, Waiman Long wrote:
On 01/03/2017 11:18 AM, Peter Zijlstra wrote:
On Sun, Dec 25, 2016 at 03:26:01PM -0500, Waiman Long wrote:
A number of cmpxchg calls in qspinlock_paravirt.h were replaced by more
relaxed version
On 2017/1/5 16:23, Ingo Molnar wrote:
* Pan Xinhui wrote:
If the prev node is not in running state or its cpu is preempted, we need to
wait early in pv_wait_node. After commit "sched/core: Introduce the
vcpu_is_preempted(cpu) interface" the kernel knows whether a vcpu is
running or not. S
hi, Andrea
thanks for your reply. :)
On 2016/12/19 19:42, Andrea Arcangeli wrote:
Hello,
On Wed, Nov 02, 2016 at 05:08:35AM -0400, Pan Xinhui wrote:
Support the vcpu_is_preempted() functionality under KVM. This will
enhance lock performance on overcommitted hosts (more runnable vcpus
than
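For context, a minimal sketch of what the guest-side check can boil down to, assuming the host publishes a per-vCPU 'preempted' flag in the shared steal-time area; the names below are illustrative and not quoted from the patch.

#include <linux/percpu.h>
#include <asm/kvm_para.h>	/* struct kvm_steal_time (assumed to carry a 'preempted' byte) */

/* Sketch only: one steal-time slot per vCPU, shared with the host. */
static DEFINE_PER_CPU(struct kvm_steal_time, steal_time_sketch);

static bool kvm_vcpu_is_preempted_sketch(int cpu)
{
	/* The host sets .preempted when it deschedules this vCPU. */
	return !!per_cpu(steal_time_sketch, cpu).preempted;
}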
From d4fa3ea0b8b6f3e5ff511604a4a6665d1cbb74c3 Mon Sep 17 00:00:00 2001
From: Pan Xinhui
Date: Sat, 17 Dec 2016 02:56:33 -0500
Subject: [PATCH] kvm: fix compile issue
We revert commit 0b9f6c4615c993d2b552e0d2bd1ade49b56e5beb, which calls a
sleep function while preemption is disabled on the host side. But we remove str
On 2016/12/15 15:24, Jia He wrote:
This is to let a bool variable be displayed correctly through sysctl procfs
on both big- and little-endian systems. sizeof(bool) is arch dependent;
proc_dobool should work on all arches.
Suggested-by: Pan Xinhui
Signed-off-by: Jia He
---
include/linux/sysctl.h | 2 ++
kernel
endian system, the converting from/to bool to/from int will cause
error for proc items.
This patch use a new proc_handler proc_dobool to fixe it.
^^^fix^^^
Signed-off-by: Jia He
---
Other than that, it is okay for me.
Reviewed-by: Pan Xinhui
On 2016/12/12 01:43, Pan Xinhui wrote:
hi, Jia
nice catch!
However, I think we should fix it completely.
This is because do_proc_dointvec_conv() tries to get an int value from a bool *.
Something like below might help. Please ignore the code style; this is tested
On 2016/12/11 23:36, Jia He wrote:
nsm_use_hostnames is a module parameter and it will be exported to sysctl
procfs. This is to let users change it from userspace when needed. But the
minimal unit for sysctl procfs read/write is sizeof(int).
On a big-endian system, the conversion from/to bool to/from i
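A tiny user-space demo of the endianness problem discussed above (illustrative only, not the kernel fix): copying sizeof(bool) bytes out of an int picks up the least significant byte on little-endian but the most significant byte on big-endian.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	int  as_int  = 1;      /* what an int-based sysctl handler stores */
	bool as_bool = false;  /* what the variable really is             */

	/* Copy only the first sizeof(bool) byte(s), as a handler that
	 * confuses the two types effectively does. */
	memcpy(&as_bool, &as_int, sizeof(as_bool));

	/* Prints 1 on little-endian, 0 on big-endian machines. */
	printf("bool read back as %d\n", (int)as_bool);
	return 0;
}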
hi, Jia
nice catch!
However, I think we should fix it completely.
This is because do_proc_dointvec_conv() tries to get an int value from a bool *.
Something like below might help. Please ignore the code style; this is tested
:)
diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c
index fc4084e..7eea
hi, Peter
I think I see the point.
Then could we just make __eax the rettype (here it is bool), not unsigned long?
I have not tested this idea.
@@ -461,7 +461,9 @@ int paravirt_disable_iospace(void);
#define PVOP_VCALL_ARGS
\
On 2016/12/7 03:14, Waiman Long wrote:
A number of cmpxchg calls in qspinlock_paravirt.h were replaced by more
relaxed versions to improve performance on architectures that use LL/SC.
Signed-off-by: Waiman Long
---
thanks!
I applied it on my tree, and the tests are okay.
kernel/locking/qspinlo
If the prev node is not in running state or its cpu is preempted, we need to
wait early in pv_wait_node. After commit "sched/core: Introduce the
vcpu_is_preempted(cpu) interface" the kernel knows whether a vcpu is
running or not. So let's use it.
Signed-off-by: Pan Xinhui
---
kern
in the hash table might not be the correct lock holder, as for
performance reasons we do not handle hash conflicts.
Also introduce spin_lock_holder, which tells who owns the lock now.
Currently the only user is spin_unlock_wait.
Signed-off-by: Pan Xinhui
---
arch/powerpc/include/asm
1050.6
----
Pan Xinhui (6):
powerpc/qspinlock: powerpc support qspinlock
powerpc: platforms/Kconfig: Add qspinlock build config
powerpc: lib/locks.c: Add cpu yield/wake helper function
powerpc/pv-qspinlock: powerpc support pv-qspinlo
Avoid a function call in the native version of qspinlock. On PowerNV,
before applying this patch, every unlock is expensive. This small
optimization enhances the performance.
We use static_key with jump_label, which removes unnecessary loads of
the lppaca and related fields.
Signed-off-by: Pan Xinhui
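A minimal sketch of the static_key pattern referred to above, assuming a hypothetical lock type and helper names; it is not the actual powerpc pv-qspinlock patch, but it shows how a jump label keeps the native unlock path free of any function call.

/* Sketch only: illustrative names, not the real powerpc pv-qspinlock code. */
#include <linux/jump_label.h>
#include <linux/types.h>
#include <asm/barrier.h>

struct my_lock {
	u8 locked;
};

/* Flipped to true at boot only when running under a hypervisor, so the
 * bare-metal (PowerNV) unlock path never pays for a function call. */
static DEFINE_STATIC_KEY_FALSE(pv_spinlock_enabled);

static void pv_unlock_slowpath(struct my_lock *lock)
{
	/* A real pv implementation would also kick a waiting vCPU here. */
	smp_store_release(&lock->locked, 0);
}

static inline void my_spin_unlock(struct my_lock *lock)
{
	if (static_branch_unlikely(&pv_spinlock_enabled)) {
		pv_unlock_slowpath(lock);
		return;
	}
	/* Native fast path: a single release store, no call, no lppaca load. */
	smp_store_release(&lock->locked, 0);
}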
pSeries runs as a guest and might need pv-qspinlock.
Signed-off-by: Pan Xinhui
---
arch/powerpc/kernel/Makefile | 1 +
arch/powerpc/platforms/pseries/Kconfig | 8
2 files changed, 9 insertions(+)
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index
endianness
system.
We override some arch_spin_XXX because powerpc has io_sync handling, which
makes sure the I/O operations are correctly protected by the lock.
There is another special case, see commit
2c610022711 ("locking/qspinlock: Fix spin_unlock_wait() some more")
Signed-off-by: Pan Xinhui
pSeries/powerNV will use qspinlock from now on.
Signed-off-by: Pan Xinhui
---
arch/powerpc/platforms/Kconfig | 9 +
1 file changed, 9 insertions(+)
diff --git a/arch/powerpc/platforms/Kconfig b/arch/powerpc/platforms/Kconfig
index fbdae83..3559bbf 100644
--- a/arch/powerpc/platforms
will introduce latency and a little overhead. And we
do NOT want to suffer any latency in some cases, e.g. in an interrupt handler.
The second parameter *confer* can indicate such a case.
__spin_wake_cpu is simpler; it will wake up one vcpu regardless of its
current vcpu state.
Signed-off-by: Pan
On 2016/12/6 09:24, Pan Xinhui wrote:
On 2016/12/6 08:58, Boqun Feng wrote:
On Mon, Dec 05, 2016 at 10:19:22AM -0500, Pan Xinhui wrote:
pSeries/powerNV will use qspinlock from now on.
Signed-off-by: Pan Xinhui
---
arch/powerpc/platforms/pseries/Kconfig | 8
1 file changed, 8 insertions
On 2016/12/6 08:58, Boqun Feng wrote:
On Mon, Dec 05, 2016 at 10:19:22AM -0500, Pan Xinhui wrote:
pSeries/powerNV will use qspinlock from now on.
Signed-off-by: Pan Xinhui
---
arch/powerpc/platforms/pseries/Kconfig | 8
1 file changed, 8 insertions(+)
diff --git a/arch/powerpc
Correct Waiman's address.
On 2016/12/6 08:47, Boqun Feng wrote:
On Mon, Dec 05, 2016 at 10:19:21AM -0500, Pan Xinhui wrote:
This patch adds basic code to enable qspinlock on powerpc. qspinlock is
one kind of fair-lock implementation, and we have seen some performance
improvement under some scen
endianness
system.
We override some arch_spin_XXX because powerpc has io_sync handling, which
makes sure the I/O operations are correctly protected by the lock.
There is another special case, see commit
2c610022711 ("locking/qspinlock: Fix spin_unlock_wait() some more")
Signed-off-by: Pan Xinhui
pSeries/powerNV will use qspinlock from now on.
Signed-off-by: Pan Xinhui
---
arch/powerpc/platforms/pseries/Kconfig | 8
1 file changed, 8 insertions(+)
diff --git a/arch/powerpc/platforms/pseries/Kconfig
b/arch/powerpc/platforms/pseries/Kconfig
index bec90fb..8a87d06 100644
--- a
in the hash table might not be the correct lock holder, as for
performance reasons we do not handle hash conflicts.
Also introduce spin_lock_holder, which tells who owns the lock now.
Currently the only user is spin_unlock_wait.
Signed-off-by: Pan Xinhui
---
arch/powerpc/include/asm
Avoid a function call in the native version of qspinlock. On PowerNV,
before applying this patch, every unlock is expensive. This small
optimization enhances the performance.
We use static_key with jump_label, which removes unnecessary loads of
the lppaca and related fields.
Signed-off-by: Pan Xinhui
1134.2
=
System Benchmarks Index Score 1072.0 1108.9 1050.6
--------
Pan Xinhui (6):
powerpc/qspinlock: powerpc support qspinlock
powerpc
pSeries runs as a guest and might need pv-qspinlock.
Signed-off-by: Pan Xinhui
---
arch/powerpc/kernel/Makefile | 1 +
arch/powerpc/platforms/pseries/Kconfig | 8
2 files changed, 9 insertions(+)
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index
will introduce latency and a little overhead. And we
do NOT want to suffer any latency in some cases, e.g. in an interrupt handler.
The second parameter *confer* can indicate such a case.
__spin_wake_cpu is simpler; it will wake up one vcpu regardless of its
current vcpu state.
Signed-off-by: Pan
On 2016/12/2 12:35, yjin wrote:
On 2016/12/02 12:22, Balbir Singh wrote:
On Fri, Dec 2, 2016 at 3:15 PM, Michael Ellerman wrote:
yanjiang@windriver.com writes:
diff --git a/arch/powerpc/include/asm/cputime.h
b/arch/powerpc/include/asm/cputime.h
index 4f60db0..4423e97 100644
--- a/arch/p
Commit-ID: 0b9f6c4615c993d2b552e0d2bd1ade49b56e5beb
Gitweb: http://git.kernel.org/tip/0b9f6c4615c993d2b552e0d2bd1ade49b56e5beb
Author: Pan Xinhui
AuthorDate: Wed, 2 Nov 2016 05:08:35 -0400
Committer: Ingo Molnar
CommitDate: Tue, 22 Nov 2016 12:48:08 +0100
x86/kvm: Support the vCPU
Commit-ID: 1885aa7041c9e801e5d5b093b9dad38937ca37f6
Gitweb: http://git.kernel.org/tip/1885aa7041c9e801e5d5b093b9dad38937ca37f6
Author: Pan Xinhui
AuthorDate: Wed, 2 Nov 2016 05:08:36 -0400
Committer: Ingo Molnar
CommitDate: Tue, 22 Nov 2016 12:48:08 +0100
x86/kvm: Support the vCPU
Commit-ID: 05ffc951392df57edecc2519327b169210c3df75
Gitweb: http://git.kernel.org/tip/05ffc951392df57edecc2519327b169210c3df75
Author: Pan Xinhui
AuthorDate: Wed, 2 Nov 2016 05:08:30 -0400
Committer: Ingo Molnar
CommitDate: Tue, 22 Nov 2016 12:48:10 +0100
locking/mutex: Break out of
Commit-ID: 3dd3e0ce7989b645eee0174b17f5095e187c7f28
Gitweb: http://git.kernel.org/tip/3dd3e0ce7989b645eee0174b17f5095e187c7f28
Author: Pan Xinhui
AuthorDate: Wed, 2 Nov 2016 05:08:38 -0400
Committer: Ingo Molnar
CommitDate: Tue, 22 Nov 2016 12:48:09 +0100
Documentation/virtual/kvm
Commit-ID: 5aff60a191e579ae00ae5ca6ce16c13b687bc8a3
Gitweb: http://git.kernel.org/tip/5aff60a191e579ae00ae5ca6ce16c13b687bc8a3
Author: Pan Xinhui
AuthorDate: Wed, 2 Nov 2016 05:08:29 -0400
Committer: Ingo Molnar
CommitDate: Tue, 22 Nov 2016 12:48:10 +0100
locking/osq: Break out of
Commit-ID: 4ec6e863625625a54f527464ab91ce1a1cb16c42
Gitweb: http://git.kernel.org/tip/4ec6e863625625a54f527464ab91ce1a1cb16c42
Author: Pan Xinhui
AuthorDate: Wed, 2 Nov 2016 05:08:34 -0400
Committer: Ingo Molnar
CommitDate: Tue, 22 Nov 2016 12:48:07 +0100
kvm: Introduce
Commit-ID: 446f3dc8cc0af59259c6c8b898726fae7ed2c055
Gitweb: http://git.kernel.org/tip/446f3dc8cc0af59259c6c8b898726fae7ed2c055
Author: Pan Xinhui
AuthorDate: Wed, 2 Nov 2016 05:08:33 -0400
Committer: Ingo Molnar
CommitDate: Tue, 22 Nov 2016 12:48:07 +0100
locking/core, x86/paravirt
Commit-ID: d9345c65eb7930ac6755cf593ee7686f4029ccf4
Gitweb: http://git.kernel.org/tip/d9345c65eb7930ac6755cf593ee7686f4029ccf4
Author: Pan Xinhui
AuthorDate: Wed, 2 Nov 2016 05:08:28 -0400
Committer: Ingo Molnar
CommitDate: Tue, 22 Nov 2016 12:48:05 +0100
sched/core: Introduce the
Commit-ID: 41946c86876ea6a3e8857182356e6d76dbfe7fb6
Gitweb: http://git.kernel.org/tip/41946c86876ea6a3e8857182356e6d76dbfe7fb6
Author: Pan Xinhui
AuthorDate: Wed, 2 Nov 2016 05:08:31 -0400
Committer: Ingo Molnar
CommitDate: Tue, 22 Nov 2016 12:48:06 +0100
locking/core, powerpc
On 2016/11/16 18:23, Peter Zijlstra wrote:
On Wed, Nov 16, 2016 at 12:19:09PM +0800, Pan Xinhui wrote:
Hi, Peter.
I think we can avoid a function call in a simpler way. How about below
static inline bool vcpu_is_preempted(int cpu)
{
/* only set in pv case*/
if
On 2016/11/15 23:47, Peter Zijlstra wrote:
On Wed, Nov 02, 2016 at 05:08:33AM -0400, Pan Xinhui wrote:
diff --git a/arch/x86/include/asm/paravirt_types.h
b/arch/x86/include/asm/paravirt_types.h
index 0f400c0..38c3bb7 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm
quick test (4 vcpus on 1 physical cpu doing a parallel build job
with "make -j 8") reduced system time by about 5% with this patch.
Signed-off-by: Juergen Gross
Signed-off-by: Pan Xinhui
---
arch/x86/xen/spinlock.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/ar
u has been preempted.
Signed-off-by: Pan Xinhui
Acked-by: Radim Krčmář
Acked-by: Paolo Bonzini
---
Documentation/virtual/kvm/msr.txt | 9 -
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/Documentation/virtual/kvm/msr.txt
b/Documentation/virtual/kvm/msr.txt
index 2a71c8f..
kvm_steal_time::preempted to indicate whether
one vcpu is running or not.
Signed-off-by: Pan Xinhui
Acked-by: Paolo Bonzini
---
arch/x86/include/uapi/asm/kvm_para.h | 4 +++-
arch/x86/kvm/x86.c | 16
2 files changed, 19 insertions(+), 1 deletion(-)
diff --git a
concurrent) | 3531.4 lpm | 3211.9 lpm
System Call Overhead | 10385653.0 lps | 10419979.0 lps
Signed-off-by: Pan Xinhui
Acked-by: Paolo Bonzini
---
arch/x86/kernel/kvm.c | 12
1 file changed, 12 insertions(+)
diff --git a/arch/x86/kernel/kvm.c
spin loops upon the retval of
vcpu_is_preempted.
As the kernel has used this interface, let's support it.
To deal with the kernel and kvm/xen, add vcpu_is_preempted into struct
pv_lock_ops.
Then kvm or xen can provide their own implementation to support
vcpu_is_preempted.
Signed-off-by: Pan Xinhui
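A simplified sketch of the shape such a hook can take; the struct and function names below are illustrative stand-ins (the real x86 code wraps the callback in paravirt callee-save machinery rather than a plain function pointer).

#include <linux/types.h>

/* Illustrative only: a simplified stand-in for the pv_lock_ops extension
 * the message describes. */
struct pv_lock_ops_sketch {
	void (*wait)(u8 *ptr, u8 val);          /* block this vCPU          */
	void (*kick)(int cpu);                  /* wake a waiting vCPU      */
	bool (*vcpu_is_preempted)(long cpu);    /* new hook from the series */
};

/* Bare-metal default: no hypervisor, so a vCPU can never be preempted. */
static bool native_vcpu_is_preempted(long cpu)
{
	return false;
}

/* A KVM or Xen backend would install its own callback at boot, e.g. one
 * that reads per-CPU steal-time state shared with the host. */
static struct pv_lock_ops_sketch pv_lock_ops_sketch = {
	.vcpu_is_preempted = native_vcpu_is_preempted,
};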
.
Suggested-by: Peter Zijlstra (Intel)
Signed-off-by: Pan Xinhui
Acked-by: Christian Borntraeger
Acked-by: Paolo Bonzini
Tested-by: Juergen Gross
---
include/linux/sched.h | 12
1 file changed, 12 insertions(+)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index
->yield_count stays zero on
PowerNV. So we can just skip the machine type check.
Suggested-by: Boqun Feng
Suggested-by: Peter Zijlstra (Intel)
Signed-off-by: Pan Xinhui
---
arch/powerpc/include/asm/spinlock.h | 8
1 file changed, 8 insertions(+)
diff --git a/arch/powerpc/include/asm/spinloc
call_common
2.83% sched-messaging [kernel.vmlinux] [k] copypage_power7
2.64% sched-messaging [kernel.vmlinux] [k] rwsem_spin_on_owner
2.00% sched-messaging [kernel.vmlinux] [k] osq_lock
Suggested-by: Boqun Feng
Signed-off-by: Pan Xinhui
Acked-by: Christian Borntraeger
Acked-by: Pa
It allows us to partially update some status or field of one struct.
We can also save one kvm_read_guest_cached if we just update one field
of the struct regardless of its current value.
Signed-off-by: Pan Xinhui
Acked-by: Paolo Bonzini
---
include/linux/kvm_host.h | 2 ++
virt/kvm
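A hedged usage sketch of the new helper: update only the 'preempted' byte of the guest's steal-time area instead of read-modify-writing the whole struct. The call site, field, and variable names below are assumptions modeled on the surrounding x86 KVM code, not quoted from the patch.

#include <linux/kvm_host.h>

/* Sketch: mark the vCPU preempted by writing a single byte through the
 * cached guest mapping of its steal-time area. */
static void mark_guest_preempted(struct kvm_vcpu *vcpu)
{
	__u8 preempted = 1;

	kvm_write_guest_offset_cached(vcpu->kvm, &vcpu->arch.st.stime,
			&preempted,
			offsetof(struct kvm_steal_time, preempted),
			sizeof(preempted));
}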
From: Christian Borntraeger
this implements the s390 backend for commit
"kernel/sched: introduce vcpu preempted check interface"
by reworking the existing smp_vcpu_scheduled into
arch_vcpu_is_preempted. We can then also get rid of the
local cpu_is_preempted function by moving the
CIF_ENABLED_WAIT
essaging [kernel.vmlinux] [k] system_call
2.69% sched-messaging [kernel.vmlinux] [k] wait_consider_task
Signed-off-by: Pan Xinhui
Acked-by: Christian Borntraeger
Acked-by: Paolo Bonzini
Tested-by: Juergen Gross
---
kernel/locking/mutex.c | 13 +++--
kernel/locking/rwsem-xad
System Call Overhead | 10385653.0 lps | 10419979.0 lps
Christian Borntraeger (1):
s390/spinlock: Provide vcpu_is_preempted
Juergen Gross (1):
x86, xen: support vcpu preempted check
Pan Xinhui (9):
kernel/sched: introduce vcpu preempted check interface
locking/
On 2016/10/29 03:38, Konrad Rzeszutek Wilk wrote:
On Fri, Oct 28, 2016 at 04:11:16AM -0400, Pan Xinhui wrote:
change from v5:
split x86/kvm patch into guest/host part.
introduce kvm_write_guest_offset_cached.
fix some typos.
rebase patch onto 4.9.2
change from v4
From: Christian Borntraeger
this implements the s390 backend for commit
"kernel/sched: introduce vcpu preempted check interface"
by reworking the existing smp_vcpu_scheduled into
arch_vcpu_is_preempted. We can then also get rid of the
local cpu_is_preempted function by moving the
CIF_ENABLED_WAIT
It allows us to partially update some status or field of one struct.
We can also save one kvm_read_guest_cached if we just update one field
of the struct regardless of its current value.
Signed-off-by: Pan Xinhui
---
include/linux/kvm_host.h | 2 ++
virt/kvm/kvm_main.c | 20
essaging [kernel.vmlinux] [k] system_call
2.69% sched-messaging [kernel.vmlinux] [k] wait_consider_task
Signed-off-by: Pan Xinhui
Acked-by: Christian Borntraeger
Tested-by: Juergen Gross
---
kernel/locking/mutex.c | 15 +--
kernel/locking/rwsem-xadd.c | 16 +---
spin loops upon the retval of
vcpu_is_preempted.
As the kernel has used this interface, let's support it.
To deal with the kernel and kvm/xen, add vcpu_is_preempted into struct
pv_lock_ops.
Then kvm or xen can provide their own implementation to support
vcpu_is_preempted.
Signed-off-by: Pan
quick test (4 vcpus on 1 physical cpu doing a parallel build job
with "make -j 8") reduced system time by about 5% with this patch.
Signed-off-by: Juergen Gross
Signed-off-by: Pan Xinhui
---
arch/x86/xen/spinlock.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/ar
kvm_steal_time::preempted to indicate whether
one vcpu is running or not.
Signed-off-by: Pan Xinhui
---
arch/x86/include/uapi/asm/kvm_para.h | 4 +++-
arch/x86/kvm/x86.c | 16
2 files changed, 19 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/uapi
->yield_count stays zero on
powerNV. So we can just skip the machine type check.
Suggested-by: Boqun Feng
Suggested-by: Peter Zijlstra (Intel)
Signed-off-by: Pan Xinhui
---
arch/powerpc/include/asm/spinlock.h | 8
1 file changed, 8 insertions(+)
diff --git a/arch/powerpc/include/
u has been preempted.
Signed-off-by: Pan Xinhui
Acked-by: Radim Krčmář
---
Documentation/virtual/kvm/msr.txt | 9 -
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/Documentation/virtual/kvm/msr.txt
b/Documentation/virtual/kvm/msr.txt
index 2a71c8f..ab2ab76 100644
--- a/Docum
call_common
2.83% sched-messaging [kernel.vmlinux] [k] copypage_power7
2.64% sched-messaging [kernel.vmlinux] [k] rwsem_spin_on_owner
2.00% sched-messaging [kernel.vmlinux] [k] osq_lock
Suggested-by: Boqun Feng
Signed-off-by: Pan Xinhui
Acked-by: Christian Borntraeger
Tested-by: Ju
concurrent) | 3531.4 lpm | 3211.9 lpm
System Call Overhead | 10385653.0 lps | 10419979.0 lps
Signed-off-by: Pan Xinhui
---
arch/x86/kernel/kvm.c | 12
1 file changed, 12 insertions(+)
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
ind
.
Suggested-by: Peter Zijlstra (Intel)
Signed-off-by: Pan Xinhui
Acked-by: Christian Borntraeger
Tested-by: Juergen Gross
---
include/linux/sched.h | 12
1 file changed, 12 insertions(+)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 348f51b..44c1ce7 100644
0419979.0 lps
Christian Borntraeger (1):
s390/spinlock: Provide vcpu_is_preempted
Juergen Gross (1):
x86, xen: support vcpu preempted check
Pan Xinhui (9):
kernel/sched: introduce vcpu preempted check interface
locking/osq: Drop the overload of osq_lock()
kernel/locking: Drop the overl
On 2016/10/24 23:18, Paolo Bonzini wrote:
On 24/10/2016 17:14, Radim Krčmář wrote:
2016-10-24 16:39+0200, Paolo Bonzini:
On 19/10/2016 19:24, Radim Krčmář wrote:
+ if (vcpu->arch.st.msr_val & KVM_MSR_ENABLED)
+ if (kvm_read_guest_cached(vcpu->kvm, &vcpu->arch.st.stime,
+
From: Christian Borntraeger
this implements the s390 backend for commit
"kernel/sched: introduce vcpu preempted check interface"
by reworking the existing smp_vcpu_scheduled into
arch_vcpu_is_preempted. We can then also get rid of the
local cpu_is_preempted function by moving the
CIF_ENABLED_WAIT
n preempted.
Signed-off-by: Pan Xinhui
---
Documentation/virtual/kvm/msr.txt | 8 +++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/Documentation/virtual/kvm/msr.txt
b/Documentation/virtual/kvm/msr.txt
index 2a71c8f..3376f13 100644
--- a/Documentation/virtual/kvm/msr.txt
call_common
2.83% sched-messaging [kernel.vmlinux] [k] copypage_power7
2.64% sched-messaging [kernel.vmlinux] [k] rwsem_spin_on_owner
2.00% sched-messaging [kernel.vmlinux] [k] osq_lock
Suggested-by: Boqun Feng
Signed-off-by: Pan Xinhui
Acked-by: Christian Borntraeger
Tested-by: Ju
quick test (4 vcpus on 1 physical cpu doing a parallel build job
with "make -j 8") reduced system time by about 5% with this patch.
Signed-off-by: Juergen Gross
Signed-off-by: Pan Xinhui
---
arch/x86/xen/spinlock.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/ar
Call Overhead | 10385653.0 lps | 10419979.0 lps
Signed-off-by: Pan Xinhui
---
arch/x86/include/uapi/asm/kvm_para.h | 3 ++-
arch/x86/kernel/kvm.c| 12
arch/x86/kvm/x86.c | 18 ++
3 files changed, 32 insertions(+), 1
essaging [kernel.vmlinux] [k] system_call
2.69% sched-messaging [kernel.vmlinux] [k] wait_consider_task
Signed-off-by: Pan Xinhui
Acked-by: Christian Borntraeger
Tested-by: Juergen Gross
---
kernel/locking/mutex.c | 15 +--
kernel/locking/rwsem-xadd.c | 16 +---
.
Suggested-by: Peter Zijlstra (Intel)
Signed-off-by: Pan Xinhui
Acked-by: Christian Borntraeger
Tested-by: Juergen Gross
---
include/linux/sched.h | 12
1 file changed, 12 insertions(+)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 348f51b..44c1ce7 100644
ll Scripts (8 concurrent) | 3531.4 lpm | 3211.9 lpm
System Call Overhead | 10385653.0 lps | 10419979.0 lps
Christian Borntraeger (1):
s390/spinlock: Provide vcpu_is_preempted
Juergen Gross (1):
x86, xen: support vcpu preempted check
Pan Xinhui (7):
kernel/sch
spin loops upon the retval of
vcpu_is_preempted.
As the kernel has used this interface, let's support it.
To deal with the kernel and kvm/xen, add vcpu_is_preempted into struct
pv_lock_ops.
Then kvm or xen can provide their own implementation to support
vcpu_is_preempted.
Signed-off-by: Pan
->yield_count stays zero on
powerNV. So we can just skip the machine type check.
Suggested-by: Boqun Feng
Suggested-by: Peter Zijlstra (Intel)
Signed-off-by: Pan Xinhui
---
arch/powerpc/include/asm/spinlock.h | 8
1 file changed, 8 insertions(+)
diff --git a/arch/powerpc/include/
On 2016/10/20 01:24, Radim Krčmář wrote:
2016-10-19 06:20-0400, Pan Xinhui:
This is to fix some lock holder preemption issues. Some other lock
implementations do a spin loop before acquiring the lock itself.
Currently the kernel has an interface, bool vcpu_is_preempted(int cpu). It
takes the cpu as
On 2016/10/19 23:58, Juergen Gross wrote:
On 19/10/16 12:20, Pan Xinhui wrote:
change from v3:
add x86 vcpu preempted check patch
change from v2:
no code change, fix typos, update some comments
change from v1:
a simpler definition of default vcpu_is_preempted
skip
->yield_count stays zero on
powerNV. So we can just skip the machine type check.
Suggested-by: Boqun Feng
Suggested-by: Peter Zijlstra (Intel)
Signed-off-by: Pan Xinhui
---
arch/powerpc/include/asm/spinlock.h | 8
1 file changed, 8 insertions(+)
diff --git a/arch/powerpc/include/
Scripts (1 concurrent) |23224.3 lpm |22607.4 lpm
Shell Scripts (8 concurrent) | 3531.4 lpm | 3211.9 lpm
System Call Overhead | 10385653.0 lps | 10419979.0 lps
Signed-off-by: Pan Xinhui
---
arch/x86/include/asm/paravirt_types.h | 6
essaging [kernel.vmlinux] [k] system_call
2.69% sched-messaging [kernel.vmlinux] [k] wait_consider_task
Signed-off-by: Pan Xinhui
---
kernel/locking/mutex.c | 15 +--
kernel/locking/rwsem-xadd.c | 16 +---
2 files changed, 26 insertions(+), 5 deletions(-)
diff --git
(1 concurrent) |23224.3 lpm |22607.4 lpm
Shell Scripts (8 concurrent) | 3531.4 lpm | 3211.9 lpm
System Call Overhead | 10385653.0 lps | 10419979.0 lps
Pan Xinhui (5):
kernel/sched: introduce vcpu preempted check interface
locking/osq: Drop
call_common
2.83% sched-messaging [kernel.vmlinux] [k] copypage_power7
2.64% sched-messaging [kernel.vmlinux] [k] rwsem_spin_on_owner
2.00% sched-messaging [kernel.vmlinux] [k] osq_lock
Suggested-by: Boqun Feng
Signed-off-by: Pan Xinhui
---
kernel/locking/osq_lock.c | 10 +-
1 f
.
Suggested-by: Peter Zijlstra (Intel)
Signed-off-by: Pan Xinhui
---
include/linux/sched.h | 12
1 file changed, 12 insertions(+)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 348f51b..44c1ce7 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
orrect cpu number.
Signed-off-by: Pan Xinhui
---
tools/perf/bench/futex-hash.c | 2 +-
tools/perf/bench/futex-lock-pi.c | 2 +-
tools/perf/bench/futex-requeue.c | 2 +-
tools/perf/bench/futex-wake-parallel.c | 2 +-
tools/perf/bench/futex-wake.c | 2 +-
tools/
On 2016/9/30 17:08, Paolo Bonzini wrote:
On 30/09/2016 10:52, Pan Xinhui wrote:
x86 has no hypervisor support, and I'd like to understand the desired
semantics first, so I don't think it should block this series. In
Once a guest does a hypercall or something similar, IOW,
hi, Paolo
thanks for your reply.
On 2016/9/30 14:58, Paolo Bonzini wrote:
Please consider s390 and (x86/arm) KVM. Once we have a few, more can
follow later, but I think it's important to not only have PPC support for
this.
Actually the s390 preempted check via sigp sense running is availab
On 2016/9/30 13:52, Boqun Feng wrote:
On Fri, Sep 30, 2016 at 12:49:52PM +0800, Pan Xinhui wrote:
On 2016/9/29 23:51, Christian Borntraeger wrote:
this implements the s390 backend for commit
"kernel/sched: introduce vcpu preempted check interface"
by reworking the existing smp_vcpu_sche