pSeries/PowerNV will use qspinlock from now on.
Signed-off-by: Pan Xinhui
---
arch/powerpc/platforms/Kconfig | 9 +
1 file changed, 9 insertions(+)
diff --git a/arch/powerpc/platforms/Kconfig b/arch/powerpc/platforms/Kconfig
index fbdae83..3559bbf 100644
--- a/arch/powerpc/platforms
will introduce latency and a little overhead. And we
do NOT want to suffer any latency in some cases, e.g. in an interrupt handler.
The second parameter *confer* can indicate such a case.
__spin_wake_cpu is simpler; it will wake up one vcpu regardless of its
current state.
Signed-off-by: Pan
endianness
system.
We override some arch_spin_XXX functions as powerpc has io_sync handling which makes
sure the I/O operations are correctly protected by the lock.
There is another special case, see commit
2c610022711 ("locking/qspinlock: Fix spin_unlock_wait() some more")
Signed-off-by: Pan Xinhui
pSeries runs as a guest and might need pv-qspinlock.
Signed-off-by: Pan Xinhui
---
arch/powerpc/kernel/Makefile | 1 +
arch/powerpc/platforms/pseries/Kconfig | 8
2 files changed, 9 insertions(+)
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index
1050.6
----
Pan Xinhui (6):
powerpc/qspinlock: powerpc support qspinlock
powerpc: platforms/Kconfig: Add qspinlock build config
powerpc: lib/locks.c: Add cpu yield/wake helper function
powerpc/pv-qspinlock: powerpc support pv-qspinlo
Avoid a function call under the native version of qspinlock. On PowerNV,
before applying this patch, every unlock is expensive. This small
optimization enhances the performance.
We use a static_key with jump_label, which removes unnecessary loads of
the lppaca and related fields.
Signed-off-by: Pan Xinhui
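A rough sketch of the shape of that optimization; the static key name and the pv helper below are hypothetical stand-ins for whatever the patch actually uses, and the locked-byte store assumes a little-endian layout:

/* Sketch, with assumed names: gate the pv slow path behind a static key
 * so the native unlock stays a single release store with no function
 * call and no lppaca access.
 */
#include <linux/jump_label.h>

DEFINE_STATIC_KEY_FALSE(pv_qspinlock_key);	/* hypothetical key name */

static inline void queued_spin_unlock(struct qspinlock *lock)
{
	if (static_branch_unlikely(&pv_qspinlock_key)) {
		/* pv guest: may need to kick a sleeping waiter vCPU */
		__pv_queued_spin_unlock(lock);
		return;
	}
	/* native: plain release store of the locked byte */
	smp_store_release((u8 *)lock, 0);
}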
in the hash table might not be the correct lock holder, as for
performance reasons we do not take care of hash conflicts.
Also introduce spin_lock_holder, which tells who owns the lock now.
Currently the only user is spin_unlock_wait.
Signed-off-by: Pan Xinhui
---
arch/powerpc/include/asm
If the prev node is not in running state or its cpu is preempted, we need to
wait early in pv_wait_node. After commit "sched/core: Introduce the
vcpu_is_preempted(cpu) interface" the kernel has knowledge of whether one vcpu is
running or not. So let's use it.
Signed-off-by: Pan Xinhui
---
kern
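The interesting part is one extra condition in pv_wait_node()'s spin loop; a sketch of that loop (variable names follow kernel/locking/qspinlock_paravirt.h, the rest of the function is elided):

	/* sketch: inner spin loop of pv_wait_node() */
	struct pv_node *pp = (struct pv_node *)prev;

	for (loop = SPIN_THRESHOLD; loop; loop--) {
		if (READ_ONCE(node->locked))
			return;
		/* the previous vCPU is not running; spinning cannot make
		 * progress, so stop early and go to pv_wait() instead */
		if (vcpu_is_preempted(pp->cpu))
			break;
		cpu_relax();
	}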
On 2016/12/7 03:14, Waiman Long wrote:
A number of cmpxchg calls in qspinlock_paravirt.h were replaced by more
relaxed versions to improve performance on architectures that use LL/SC.
Signed-off-by: Waiman Long
---
thanks!
I applied it on my tree, and the tests are okay.
ke
hi, Peter
I think I know the point.
then could we just let __eax be rettype (here it is bool), not unsigned long?
I have not done tests for my thoughts.
@@ -461,7 +461,9 @@ int paravirt_disable_iospace(void);
#define PVOP_VCALL_ARGS
\
On 2016/10/24 23:18, Paolo Bonzini wrote:
On 24/10/2016 17:14, Radim Krčmář wrote:
2016-10-24 16:39+0200, Paolo Bonzini:
On 19/10/2016 19:24, Radim Krčmář wrote:
+ if (vcpu->arch.st.msr_val & KVM_MSR_ENABLED)
+ if (kvm_read_guest_cached(vcpu->kvm, &vcpu->arch.st.stime,
+
.
Suggested-by: Peter Zijlstra (Intel)
Signed-off-by: Pan Xinhui
Acked-by: Christian Borntraeger
Tested-by: Juergen Gross
---
include/linux/sched.h | 12
1 file changed, 12 insertions(+)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 348f51b..44c1ce7 100644
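The generic side of the interface is essentially a fallback macro in include/linux/sched.h that architectures override:

/*
 * In order to reduce latencies caused by lock holder preemption, ask
 * whether the vCPU a given cpu maps to is currently running.
 * Architectures with hypervisor support (x86 kvm/xen, s390, powerpc)
 * provide their own definition; everyone else reports "never preempted".
 */
#ifndef vcpu_is_preempted
#define vcpu_is_preempted(cpu)	false
#endif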
79.0 lps
Christian Borntraeger (1):
s390/spinlock: Provide vcpu_is_preempted
Juergen Gross (1):
x86, xen: support vcpu preempted check
Pan Xinhui (9):
kernel/sched: introduce vcpu preempted check interface
locking/osq: Drop the overload of osq_lock()
kernel/locking: Drop the overload o
call_common
2.83% sched-messaging [kernel.vmlinux] [k] copypage_power7
2.64% sched-messaging [kernel.vmlinux] [k] rwsem_spin_on_owner
2.00% sched-messaging [kernel.vmlinux] [k] osq_lock
Suggested-by: Boqun Feng
Signed-off-by: Pan Xinhui
Acked-by: Christian Borntraeger
Tested-by: Ju
concurrent) | 3531.4 lpm | 3211.9 lpm
System Call Overhead | 10385653.0 lps | 10419979.0 lps
Signed-off-by: Pan Xinhui
---
arch/x86/kernel/kvm.c | 12
1 file changed, 12 insertions(+)
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
ind
.
A quick test (4 vcpus on 1 physical cpu doing a parallel build job
with "make -j 8") reduced system time by about 5% with this patch.
Signed-off-by: Juergen Gross
Signed-off-by: Pan Xinhui
---
arch/x86/xen/spinlock.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/ar
kvm_steal_time::preempted to indicate whether
one vcpu is running or not.
Signed-off-by: Pan Xinhui
---
arch/x86/include/uapi/asm/kvm_para.h | 4 +++-
arch/x86/kvm/x86.c | 16
2 files changed, 19 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/uapi
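The ABI change amounts to carving a one-byte flag out of the existing padding of struct kvm_steal_time; a sketch of the resulting layout (the exact split of the pad bytes is an assumption):

/* arch/x86/include/uapi/asm/kvm_para.h (sketch) */
struct kvm_steal_time {
	__u64 steal;
	__u32 version;
	__u32 flags;
	__u8  preempted;	/* set by the host while the vCPU is scheduled out */
	__u8  u8_pad[3];
	__u32 pad[11];		/* shrunk from pad[12] to make room */
};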
->yield_count keeps zero on
PowerNV. So we can just skip the machine type check.
Suggested-by: Boqun Feng
Suggested-by: Peter Zijlstra (Intel)
Signed-off-by: Pan Xinhui
---
arch/powerpc/include/asm/spinlock.h | 8
1 file changed, 8 insertions(+)
diff --git a/arch/powerpc/include/
u has been preempted.
Signed-off-by: Pan Xinhui
Acked-by: Radim Krčmář
---
Documentation/virtual/kvm/msr.txt | 9 -
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/Documentation/virtual/kvm/msr.txt
b/Documentation/virtual/kvm/msr.txt
index 2a71c8f..ab2ab76 100644
--- a/Docum
the spin loops upon the retval of
vcpu_is_preempted.
As the kernel has used this interface, let's support it.
To deal with the kernel and kvm/xen, add vcpu_is_preempted into struct
pv_lock_ops.
Then kvm or xen can provide their own implementation to support
vcpu_is_preempted.
Signed-off-by: Pan
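A sketch of what the new hook looks like; the plain function-pointer form shown here is the simple variant, and the exact member type (plain pointer vs. callee-save thunk) should be taken as an assumption:

/* arch/x86/include/asm/paravirt_types.h (sketch) */
struct pv_lock_ops {
	void (*queued_spin_lock_slowpath)(struct qspinlock *lock, u32 val);
	struct paravirt_callee_save queued_spin_unlock;
	void (*wait)(u8 *ptr, u8 val);
	void (*kick)(int cpu);
	bool (*vcpu_is_preempted)(int cpu);	/* new: filled in by kvm/xen */
};

/* sketch: route the generic helper through the paravirt hook */
#define vcpu_is_preempted vcpu_is_preempted
static inline bool vcpu_is_preempted(int cpu)
{
	return pv_lock_ops.vcpu_is_preempted(cpu);
}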
It allows us to update some status or field of one struct partially.
We can also save one kvm_read_guest_cached if we just update one field
of the struct regardless of its current value.
Signed-off-by: Pan Xinhui
---
include/linux/kvm_host.h | 2 ++
virt/kvm/kvm_main.c | 20
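The helper mirrors kvm_write_guest_cached() but takes an offset into the cached region, so a caller can touch a single field without reading the whole struct back first; a sketch of the declaration and the kind of call site it enables:

/* include/linux/kvm_host.h (sketch of the added declaration) */
int kvm_write_guest_offset_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
				  void *data, unsigned int offset, unsigned long len);

/* illustrative caller: flip only the 'preempted' byte of the guest's
 * steal-time area when the vCPU is scheduled out */
static void kvm_steal_time_set_preempted(struct kvm_vcpu *vcpu)
{
	struct kvm_steal_time *st = &vcpu->arch.st.steal;

	st->preempted = 1;
	kvm_write_guest_offset_cached(vcpu->kvm, &vcpu->arch.st.stime,
				      &st->preempted,
				      offsetof(struct kvm_steal_time, preempted),
				      sizeof(st->preempted));
}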
essaging [kernel.vmlinux] [k] system_call
2.69% sched-messaging [kernel.vmlinux] [k] wait_consider_task
Signed-off-by: Pan Xinhui
Acked-by: Christian Borntraeger
Tested-by: Juergen Gross
---
kernel/locking/mutex.c | 15 +--
kernel/locking/rwsem-xadd.c | 16 +---
From: Christian Borntraeger
this implements the s390 backend for commit
"kernel/sched: introduce vcpu preempted check interface"
by reworking the existing smp_vcpu_scheduled into
arch_vcpu_is_preempted. We can then also get rid of the
local cpu_is_preempted function by moving the
On 2016/10/29 03:38, Konrad Rzeszutek Wilk wrote:
On Fri, Oct 28, 2016 at 04:11:16AM -0400, Pan Xinhui wrote:
change from v5:
split x86/kvm patch into guest/host parts.
introduce kvm_write_guest_offset_cached.
fix some typos.
rebase patch onto 4.9.2
change from v4
rom d4fa3ea0b8b6f3e5ff511604a4a6665d1cbb74c3 Mon Sep 17 00:00:00 2001
From: Pan Xinhui
Date: Sat, 17 Dec 2016 02:56:33 -0500
Subject: [PATCH] kvm: fix compile issue
We revert commit 0b9f6c4615c993d2b552e0d2bd1ade49b56e5beb, which calls a
sleep function while preemption is disabled on the host side. But we remove str
hi, Andrea
thanks for your reply. :)
On 2016/12/19 19:42, Andrea Arcangeli wrote:
Hello,
On Wed, Nov 02, 2016 at 05:08:35AM -0400, Pan Xinhui wrote:
Support the vcpu_is_preempted() functionality under KVM. This will
enhance lock performance on overcommitted hosts (more runnable vcpus
than
endian system, the conversion from/to bool to/from int will cause errors
for proc items.
This patch use a new proc_handler proc_dobool to fixe it.
^^^fix^^^
Signed-off-by: Jia He
---
other than that is okay for me.
Reviewed-by: Pan Xinhui
On 2016/12/15 15:24, Jia He wrote:
This is to let a bool variable be correctly displayed in
big/little endian sysctl procfs. sizeof(bool) is arch-dependent;
proc_dobool should work on all arches.
Suggested-by: Pan Xinhui
Signed-off-by: Jia He
---
include/linux/sysctl.h | 2 ++
kernel
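With such a handler, the sysctl entry can declare the bool with its real size instead of pretending it is an int; a hedged sketch of how the nsm_use_hostnames entry could look, assuming the proposed proc_dobool():

/* sketch, assuming the proposed proc_dobool() handler */
extern bool nsm_use_hostnames;

static struct ctl_table nlm_sysctls[] = {
	{
		.procname	= "nsm_use_hostnames",
		.data		= &nsm_use_hostnames,
		.maxlen		= sizeof(bool),	/* no more bool-through-int */
		.mode		= 0644,
		.proc_handler	= proc_dobool,
	},
	{ }
};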
On 2017/1/5 16:23, Ingo Molnar wrote:
* Pan Xinhui wrote:
If the prev node is not in running state or its cpu is preempted, we need to
wait early in pv_wait_node. After commit "sched/core: Introduce the
vcpu_is_preempted(cpu) interface" the kernel has knowledge of whether one vcpu is
running or not. S
On 2017/1/4 17:41, Peter Zijlstra wrote:
On Tue, Jan 03, 2017 at 05:07:54PM -0500, Waiman Long wrote:
On 01/03/2017 11:18 AM, Peter Zijlstra wrote:
On Sun, Dec 25, 2016 at 03:26:01PM -0500, Waiman Long wrote:
A number of cmpxchg calls in qspinlock_paravirt.h were replaced by more
relaxed
will introduce latency and a little overhead. And
we do NOT want to suffer any latency in some cases, e.g. in an interrupt handler.
The second parameter *confer* can indicate such a case.
__spin_wake_cpu is simpler; it will wake up one vcpu regardless of its
current state.
Signed-off-by: Pan
two endianness
system.
We override some arch_spin_xxx functions as powerpc has io_sync handling which makes
sure the I/O operations are correctly protected by the lock.
There is another special case, see commit
2c610022711 ("locking/qspinlock: Fix spin_unlock_wait() some more")
Signed-off-by: Pan Xinh
1008.3 1122.6 1134.2
=
System Benchmarks Index Score 1072.0 1108.9 1050.6
--------
Pan Xinhui (6):
pv-qspin
pSeries runs as a guest and might need pv-qspinlock.
Signed-off-by: Pan Xinhui
---
arch/powerpc/kernel/Makefile | 1 +
arch/powerpc/platforms/pseries/Kconfig | 8
2 files changed, 9 insertions(+)
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index
in the hash table might not be the correct lock holder, as for
performance reasons we do not take care of hash conflicts.
Also introduce spin_lock_holder, which tells who owns the lock now.
Currently the only user is spin_unlock_wait.
Signed-off-by: Pan Xinhui
---
arch/powerpc/include/asm
-by: Boqun Feng
Signed-off-by: Pan Xinhui
---
kernel/locking/qspinlock_paravirt.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/locking/qspinlock_paravirt.h
b/kernel/locking/qspinlock_paravirt.h
index 8a99abf..ce655aa 100644
--- a/kernel/locking/qspinlock_paravirt.h
pseries will use qspinlock by default.
Signed-off-by: Pan Xinhui
---
arch/powerpc/platforms/pseries/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/powerpc/platforms/pseries/Kconfig
b/arch/powerpc/platforms/pseries/Kconfig
index bec90fb..f669323 100644
--- a/arch/powerpc
On 2016/11/15 23:47, Peter Zijlstra wrote:
On Wed, Nov 02, 2016 at 05:08:33AM -0400, Pan Xinhui wrote:
diff --git a/arch/x86/include/asm/paravirt_types.h
b/arch/x86/include/asm/paravirt_types.h
index 0f400c0..38c3bb7 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm
On 2016/11/16 18:23, Peter Zijlstra wrote:
On Wed, Nov 16, 2016 at 12:19:09PM +0800, Pan Xinhui wrote:
Hi, Peter.
I think we can avoid a function call in a simpler way. How about below
static inline bool vcpu_is_preempted(int cpu)
{
/* only set in pv case
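The idea being floated reads roughly like the sketch below; the static key and the per-cpu flag names are assumptions for illustration, not the final code:

/* rough reconstruction of the suggestion, with hypothetical names */
static inline bool vcpu_is_preempted(int cpu)
{
	/* only set in the pv case, e.g. when KVM steal time is enabled */
	if (static_branch_unlikely(&vcpu_preempt_key))
		return !!per_cpu(steal_time, cpu).preempted;
	/* bare metal: folds to a constant, no function call */
	return false;
}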
tem Call Overhead | 10385653.0 lps | 10419979.0 lps
Christian Borntraeger (1):
s390/spinlock: Provide vcpu_is_preempted
Juergen Gross (1):
x86, xen: support vcpu preempted check
Pan Xinhui (9):
kernel/sched: introduce vcpu preempted check interface
locking/o
It allows us to update some status or field of one struct partially.
We can also save one kvm_read_guest_cached if we just update one field
of the struct regardless of its current value.
Signed-off-by: Pan Xinhui
Acked-by: Paolo Bonzini
---
include/linux/kvm_host.h | 2 ++
virt/kvm
From: Christian Borntraeger
this implements the s390 backend for commit
"kernel/sched: introduce vcpu preempted check interface"
by reworking the existing smp_vcpu_scheduled into
arch_vcpu_is_preempted. We can then also get rid of the
local cpu_is_preempted function by moving the
essaging [kernel.vmlinux] [k] system_call
2.69% sched-messaging [kernel.vmlinux] [k] wait_consider_task
Signed-off-by: Pan Xinhui
Acked-by: Christian Borntraeger
Acked-by: Paolo Bonzini
Tested-by: Juergen Gross
---
kernel/locking/mutex.c | 13 +++--
kernel/locking/rwsem-xad
.
Suggested-by: Peter Zijlstra (Intel)
Signed-off-by: Pan Xinhui
Acked-by: Christian Borntraeger
Acked-by: Paolo Bonzini
Tested-by: Juergen Gross
---
include/linux/sched.h | 12
1 file changed, 12 insertions(+)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index
->yield_count keeps zero on
PowerNV. So we can just skip the machine type check.
Suggested-by: Boqun Feng
Suggested-by: Peter Zijlstra (Intel)
Signed-off-by: Pan Xinhui
---
arch/powerpc/include/asm/spinlock.h | 8
1 file changed, 8 insertions(+)
diff --git a/arch/powerpc/include/asm/spinloc
call_common
2.83% sched-messaging [kernel.vmlinux] [k] copypage_power7
2.64% sched-messaging [kernel.vmlinux] [k] rwsem_spin_on_owner
2.00% sched-messaging [kernel.vmlinux] [k] osq_lock
Suggested-by: Boqun Feng
Signed-off-by: Pan Xinhui
Acked-by: Christian Borntraeger
Acked-by: Pa
concurrent) | 3531.4 lpm | 3211.9 lpm
System Call Overhead | 10385653.0 lps | 10419979.0 lps
Signed-off-by: Pan Xinhui
Acked-by: Paolo Bonzini
---
arch/x86/kernel/kvm.c | 12
1 file changed, 12 insertions(+)
diff --git a/arch/x86/kernel/kvm.c
the spin loops upon the retval of
vcpu_is_preempted.
As the kernel has used this interface, let's support it.
To deal with the kernel and kvm/xen, add vcpu_is_preempted into struct
pv_lock_ops.
Then kvm or xen can provide their own implementation to support
vcpu_is_preempted.
Signed-off-by: Pan Xinhui
.
A quick test (4 vcpus on 1 physical cpu doing a parallel build job
with "make -j 8") reduced system time by about 5% with this patch.
Signed-off-by: Juergen Gross
Signed-off-by: Pan Xinhui
---
arch/x86/xen/spinlock.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/ar
u has been preempted.
Signed-off-by: Pan Xinhui
Acked-by: Radim Krčmář
Acked-by: Paolo Bonzini
---
Documentation/virtual/kvm/msr.txt | 9 -
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/Documentation/virtual/kvm/msr.txt
b/Documentation/virtual/kvm/msr.txt
index 2a7
kvm_steal_time::preempted to indicate whether
one vcpu is running or not.
Signed-off-by: Pan Xinhui
Acked-by: Paolo Bonzini
---
arch/x86/include/uapi/asm/kvm_para.h | 4 +++-
arch/x86/kvm/x86.c | 16
2 files changed, 19 insertions(+), 1 deletion(-)
diff --git
Once xmon is triggered by sysrq-x, it always stays enabled afterwards even
if it was disabled during boot. This will cause a system reset interrupt to
fail to dump. So keep xmon in its original state after exit.
Signed-off-by: Pan Xinhui
---
arch/powerpc/xmon/xmon.c | 5 -
1 file changed, 4
If the prev node is not in running state or its vCPU is preempted, we can give
up our vCPU slices ASAP in pv_wait_node. After commit d9345c65eb79
("sched/core: Introduce the vcpu_is_preempted(cpu) interface") the kernel
has knowledge of whether one vCPU is running or not.
Signed-off-by: Pan Xinh
commands. Turn xmon off if 'z'
follows.
Signed-off-by: Pan Xinhui
---
arch/powerpc/xmon/xmon.c | 12 +---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
index 9c0e17c..2f4e7b1 100644
--- a/arch/powerpc/xmon/xmon.c
On 2017/2/16 18:57, Guilherme G. Piccoli wrote:
On 16/02/2017 03:09, Michael Ellerman wrote:
Pan Xinhui writes:
Once xmon is triggered by sysrq-x, it always stays enabled afterwards even
if it was disabled during boot. This will cause a system reset interrupt to
fail to dump. So keep xmon in its
On 2017/2/17 14:05, Michael Ellerman wrote:
Pan Xinhui writes:
diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
index 9c0e17c..f6e5c3d 100644
--- a/arch/powerpc/xmon/xmon.c
+++ b/arch/powerpc/xmon/xmon.c
@@ -76,6 +76,7 @@ static int xmon_gate;
#endif /* CONFIG_SMP */
static
--- ---
4 4053.3 Mop/s 4223.7 Mop/s +4.2%
8 3310.4 Mop/s 3406.0 Mop/s +2.9%
12 2576.4 Mop/s 2674.6 Mop/s +3.8%
Signed-off-by: Waiman Long
---
Works on my side :)
Reviewed-by: Pan Xinhui
v4->v5:
- Correct some grammatical iss
->yield_count keeps zero on
PowerNV. So we can just skip the machine type check.
Suggested-by: Boqun Feng
Suggested-by: Peter Zijlstra (Intel)
Signed-off-by: Pan Xinhui
---
arch/powerpc/include/asm/spinlock.h | 8
1 file changed, 8 insertions(+)
diff --git a/arch/powerpc/include/
.
Suggested-by: Peter Zijlstra (Intel)
Signed-off-by: Pan Xinhui
Acked-by: Christian Borntraeger
Tested-by: Juergen Gross
---
include/linux/sched.h | 12
1 file changed, 12 insertions(+)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 348f51b..44c1ce7 100644
ripts (8 concurrent) | 3531.4 lpm | 3211.9 lpm
System Call Overhead | 10385653.0 lps | 10419979.0 lps
Christian Borntraeger (1):
s390/spinlock: Provide vcpu_is_preempted
Juergen Gross (1):
x86, xen: support vcpu preempted check
Pan Xinhui (7):
kernel/sched: i
the spin loops upon the retval of
vcpu_is_preempted.
As the kernel has used this interface, let's support it.
To deal with the kernel and kvm/xen, add vcpu_is_preempted into struct
pv_lock_ops.
Then kvm or xen can provide their own implementation to support
vcpu_is_preempted.
Signed-off-by: Pan
essaging [kernel.vmlinux] [k] system_call
2.69% sched-messaging [kernel.vmlinux] [k] wait_consider_task
Signed-off-by: Pan Xinhui
Acked-by: Christian Borntraeger
Tested-by: Juergen Gross
---
kernel/locking/mutex.c | 15 +--
kernel/locking/rwsem-xadd.c | 16 +---
Call Overhead | 10385653.0 lps | 10419979.0 lps
Signed-off-by: Pan Xinhui
---
arch/x86/include/uapi/asm/kvm_para.h | 3 ++-
arch/x86/kernel/kvm.c| 12
arch/x86/kvm/x86.c | 18 ++
3 files changed, 32 insertions(+), 1
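On the guest side the check ends up as a read of the per-cpu steal-time page that the host keeps updated; a sketch close to the arch/x86/kernel/kvm.c side of this patch:

/* sketch of the KVM guest implementation */
__visible bool __kvm_vcpu_is_preempted(int cpu)
{
	struct kvm_steal_time *src = &per_cpu(steal_time, cpu);

	/* the host sets ->preempted while this vCPU is scheduled out */
	return !!src->preempted;
}

/* wired up during guest init (illustrative):
 *	pv_lock_ops.vcpu_is_preempted = __kvm_vcpu_is_preempted;
 */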
.
A quick test (4 vcpus on 1 physical cpu doing a parallel build job
with "make -j 8") reduced system time by about 5% with this patch.
Signed-off-by: Juergen Gross
Signed-off-by: Pan Xinhui
---
arch/x86/xen/spinlock.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/ar
n preempted.
Signed-off-by: Pan Xinhui
---
Documentation/virtual/kvm/msr.txt | 8 +++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/Documentation/virtual/kvm/msr.txt
b/Documentation/virtual/kvm/msr.txt
index 2a71c8f..3376f13 100644
--- a/Documentation/virtual/kvm/msr.txt
call_common
2.83% sched-messaging [kernel.vmlinux] [k] copypage_power7
2.64% sched-messaging [kernel.vmlinux] [k] rwsem_spin_on_owner
2.00% sched-messaging [kernel.vmlinux] [k] osq_lock
Suggested-by: Boqun Feng
Signed-off-by: Pan Xinhui
Acked-by: Christian Borntraeger
Tested-by: Ju
From: Christian Borntraeger
this implements the s390 backend for commit
"kernel/sched: introduce vcpu preempted check interface"
by reworking the existing smp_vcpu_scheduled into
arch_vcpu_is_preempted. We can then also get rid of the
local cpu_is_preempted function by moving the
call_common
2.83% sched-messaging [kernel.vmlinux] [k] copypage_power7
2.64% sched-messaging [kernel.vmlinux] [k] rwsem_spin_on_owner
2.00% sched-messaging [kernel.vmlinux] [k] osq_lock
Suggested-by: Boqun Feng
Signed-off-by: Pan Xinhui
---
kernel/locking/osq_lock.c | 10 +-
1 f
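For osq_lock() the change is one extra bail-out condition in the optimistic spin loop; a sketch of the loop, assuming the node_cpu() helper the patch introduces to map a queue node back to its cpu number:

	/* sketch: spin-wait loop in osq_lock() */
	while (!READ_ONCE(node->locked)) {
		/*
		 * Stop spinning if we need to reschedule, or if the vCPU
		 * that owns the previous node has been preempted: in that
		 * case node->locked cannot make progress anyway.
		 */
		if (need_resched() || vcpu_is_preempted(node_cpu(node->prev)))
			goto unqueue;

		cpu_relax();
	}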
.
Suggested-by: Peter Zijlstra (Intel)
Signed-off-by: Pan Xinhui
---
include/linux/sched.h | 12
1 file changed, 12 insertions(+)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 348f51b..44c1ce7 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
essaging [kernel.vmlinux] [k] system_call
2.69% sched-messaging [kernel.vmlinux] [k] wait_consider_task
Signed-off-by: Pan Xinhui
---
kernel/locking/mutex.c | 15 +--
kernel/locking/rwsem-xadd.c | 16 +---
2 files changed, 26 insertions(+), 5 deletions(-)
diff --git
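The mutex and rwsem changes gate the owner-spinning heuristic on the same information; the condition boils down to the sketch below (owner_on_cpu() is just an illustrative wrapper name):

/* sketch: is it still worth spinning on this lock owner? */
static inline bool owner_on_cpu(struct task_struct *owner)
{
	/*
	 * Spinning only helps while the owner is executing on a CPU and
	 * its vCPU has not been scheduled out by the hypervisor.
	 */
	return owner->on_cpu && !vcpu_is_preempted(task_cpu(owner));
}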
ncurrent) |23224.3 lpm |22607.4 lpm
Shell Scripts (8 concurrent) | 3531.4 lpm | 3211.9 lpm
System Call Overhead | 10385653.0 lps | 10419979.0 lps
Pan Xinhui (5):
kernel/sched: introduce vcpu preempted check interface
locking/osq: Drop the
Scripts (1 concurrent) |23224.3 lpm |22607.4 lpm
Shell Scripts (8 concurrent) | 3531.4 lpm | 3211.9 lpm
System Call Overhead | 10385653.0 lps | 10419979.0 lps
Signed-off-by: Pan Xinhui
---
arch/x86/include/asm/paravirt_types.h | 6
->yield_count keeps zero on
PowerNV. So we can just skip the machine type check.
Suggested-by: Boqun Feng
Suggested-by: Peter Zijlstra (Intel)
Signed-off-by: Pan Xinhui
---
arch/powerpc/include/asm/spinlock.h | 8
1 file changed, 8 insertions(+)
diff --git a/arch/powerpc/include/
On 2016/10/19 23:58, Juergen Gross wrote:
On 19/10/16 12:20, Pan Xinhui wrote:
change from v3:
add x86 vcpu preempted check patch
change from v2:
no code change, fix typos, update some comments
change from v1:
a simpler definition of the default vcpu_is_preempted
skip
On 2016/10/20 01:24, Radim Krčmář wrote:
2016-10-19 06:20-0400, Pan Xinhui:
This is to fix some lock holder preemption issues. Some other lock
implementations do a spin loop before acquiring the lock itself.
Currently the kernel has an interface of bool vcpu_is_preempted(int cpu). It
takes the cpu
orrect cpu number.
Signed-off-by: Pan Xinhui
---
tools/perf/bench/futex-hash.c | 2 +-
tools/perf/bench/futex-lock-pi.c | 2 +-
tools/perf/bench/futex-requeue.c | 2 +-
tools/perf/bench/futex-wake-parallel.c | 2 +-
tools/perf/bench/futex-wake.c | 2 +-
tools/
On 2017/2/8 14:09, Boqun Feng wrote:
On Wed, Feb 08, 2017 at 12:05:40PM +0800, Boqun Feng wrote:
On Wed, Feb 08, 2017 at 11:39:10AM +0800, Xinhui Pan wrote:
2016-12-26 4:26 GMT+08:00 Waiman Long :
A number of cmpxchg calls in qspinlock_paravirt.h were replaced by more
relaxed versions to
On 2016/12/2 12:35, yjin wrote:
On 2016-12-02 12:22, Balbir Singh wrote:
On Fri, Dec 2, 2016 at 3:15 PM, Michael Ellerman wrote:
yanjiang@windriver.com writes:
diff --git a/arch/powerpc/include/asm/cputime.h
b/arch/powerpc/include/asm/cputime.h
index 4f60db0..4423e97 100644
---
Avoid a function call under the native version of qspinlock. On PowerNV,
before applying this patch, every unlock is expensive. This small
optimization enhances the performance.
We use a static_key with jump_label, which removes unnecessary loads of
the lppaca and related fields.
Signed-off-by: Pan Xinhui
1134.2
=
System Benchmarks Index Score 1072.0 1108.9 1050.6
--------
Pan Xinhui (6):
powerpc/qspinlock: powerpc support qspinlock
powerpc
pSeries runs as a guest and might need pv-qspinlock.
Signed-off-by: Pan Xinhui
---
arch/powerpc/kernel/Makefile | 1 +
arch/powerpc/platforms/pseries/Kconfig | 8
2 files changed, 9 insertions(+)
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index
will introduce latency and a little overhead. And we
do NOT want to suffer any latency in some cases, e.g. in an interrupt handler.
The second parameter *confer* can indicate such a case.
__spin_wake_cpu is simpler; it will wake up one vcpu regardless of its
current state.
Signed-off-by: Pan
in the hash table might not be the correct lock holder, as for
performance reasons we do not take care of hash conflicts.
Also introduce spin_lock_holder, which tells who owns the lock now.
Currently the only user is spin_unlock_wait.
Signed-off-by: Pan Xinhui
---
arch/powerpc/include/asm
pSeries/PowerNV will use qspinlock from now on.
Signed-off-by: Pan Xinhui
---
arch/powerpc/platforms/pseries/Kconfig | 8
1 file changed, 8 insertions(+)
diff --git a/arch/powerpc/platforms/pseries/Kconfig
b/arch/powerpc/platforms/pseries/Kconfig
index bec90fb..8a87d06 100644
endianness
system.
We override some arch_spin_XXX functions as powerpc has io_sync handling which makes
sure the I/O operations are correctly protected by the lock.
There is another special case, see commit
2c610022711 ("locking/qspinlock: Fix spin_unlock_wait() some more")
Signed-off-by: Pan Xinhui
hi, jia
nice catch!
However I think we should fix it totally.
This is because do_proc_dointvec_conv() tries to get an int value from a bool *.
Something like below might help. Please ignore the code style; this is tested
:)
diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c
index
On 2016/12/11 23:36, Jia He wrote:
nsm_use_hostnames is a module parameter and it will be exported to sysctl
procfs. This is to let the user change it from userspace when needed. But the
minimal unit for a sysctl procfs read/write is sizeof(int).
In a big endian system, the conversion from/to bool to/from
On 2016/12/12 01:43, Pan Xinhui wrote:
hi, jia
nice catch!
However I think we should fix it totally.
This is because do_proc_dointvec_conv() tries to get an int value from a bool *.
Something like below might help. Please ignore the code style; this is tested
On 2016/9/29 23:51, Christian Borntraeger wrote:
this implements the s390 backend for commit
"kernel/sched: introduce vcpu preempted check interface"
by reworking the existing smp_vcpu_scheduled into
arch_vcpu_is_preempted. We can then also get rid of the
local cpu_is_preempted function by moving
On 2016/9/29 18:31, Peter Zijlstra wrote:
On Thu, Sep 29, 2016 at 12:23:19PM +0200, Christian Borntraeger wrote:
On 09/29/2016 12:10 PM, Peter Zijlstra wrote:
On Thu, Jul 21, 2016 at 07:45:10AM -0400, Pan Xinhui wrote:
change from v2:
no code change, fix typos, update some comments
On 2016/9/30 13:52, Boqun Feng wrote:
On Fri, Sep 30, 2016 at 12:49:52PM +0800, Pan Xinhui wrote:
On 2016/9/29 23:51, Christian Borntraeger wrote:
this implements the s390 backend for commit
"kernel/sched: introduce vcpu preempted check interface"
by reworking the existing smp_vcpu
hi, Paolo
thanks for your reply.
On 2016/9/30 14:58, Paolo Bonzini wrote:
Please consider s390 and (x86/arm) KVM. Once we have a few, more can
follow later, but I think its important to not only have PPC support for
this.
Actually the s390 preempted check via sigp sense running is
On 2016/9/30 17:08, Paolo Bonzini wrote:
On 30/09/2016 10:52, Pan Xinhui wrote:
x86 has no hypervisor support, and I'd like to understand the desired
semantics first, so I don't think it should block this series. In
Once a guest do a hypercall or something similar, IOW
From: Pan Xinhui
Implement xchg{u8,u16}{local,relaxed}, and
cmpxchg{u8,u16}{,local,acquire,relaxed}.
It works on all ppc.
Suggested-by: Peter Zijlstra (Intel)
Signed-off-by: Pan Xinhui
---
change from V1:
rework totally.
---
arch/powerpc/include/asm/cmpxchg.h | 83
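The technique, borrowed from commit 3226aad81aa6 ("sh: support 1 and 2 byte xchg"), emulates a 1- or 2-byte exchange with the machine's native word-sized compare-and-swap by shifting and masking inside the containing 32-bit word. A stand-alone user-space illustration of the same idea using GCC's __atomic builtins (little-endian byte offsets assumed; the kernel code also handles big-endian and uses lwarx/stwcx. on ppc):

#include <stdint.h>
#include <stdio.h>

/* emulate xchg on a single byte using a 32-bit compare-and-swap */
static uint8_t xchg_u8(uint8_t *p, uint8_t newval)
{
	uintptr_t addr = (uintptr_t)p;
	uint32_t *word = (uint32_t *)(addr & ~(uintptr_t)3);	/* containing word */
	unsigned int shift = (addr & 3) * 8;			/* LE byte offset */
	uint32_t mask = 0xffu << shift;
	uint32_t old, new;

	old = __atomic_load_n(word, __ATOMIC_RELAXED);
	do {
		new = (old & ~mask) | ((uint32_t)newval << shift);
	} while (!__atomic_compare_exchange_n(word, &old, new, 0,
					      __ATOMIC_SEQ_CST, __ATOMIC_RELAXED));

	return (old & mask) >> shift;
}

int main(void)
{
	uint32_t backing = 0;			/* word-aligned storage */
	uint8_t *bytes = (uint8_t *)&backing;

	bytes[1] = 0x5a;
	printf("old: 0x%x\n", xchg_u8(&bytes[1], 0xa5));	/* prints 0x5a */
	printf("new: 0x%x\n", bytes[1]);			/* prints 0xa5 */
	return 0;
}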
From: Pan Xinhui
Correct bitoff in big endian OS.
Fixes: 3226aad81aa6 ("sh: support 1 and 2 byte xchg")
Signed-off-by: Pan Xinhui
---
arch/sh/include/asm/cmpxchg-xchg.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/sh/include/asm/cmpxchg-xchg.h
b/arch/
Hello, boqun
On 2016-04-19 17:18, Boqun Feng wrote:
> Hi Xinhui,
>
> On Tue, Apr 19, 2016 at 02:29:34PM +0800, Pan Xinhui wrote:
>> From: Pan Xinhui
>>
>> Implement xchg{u8,u16}{local,relaxed}, and
>> cmpxchg{u8,u16}{,local,acquire,relaxed}.
>>
>
From: Pan Xinhui
Correct bitoff in big endian OS.
Current code works correctly for 1 byte but not for 2 bytes.
Fixes: 3226aad81aa6 ("sh: support 1 and 2 byte xchg")
Signed-off-by: Pan Xinhui
Acked-by: Michael S. Tsirkin
---
changes from V1:
just add some patch comment
From: Pan Xinhui
Implement xchg{u8,u16}{local,relaxed}, and
cmpxchg{u8,u16}{,local,acquire,relaxed}.
It works on all ppc.
The basic idea is from commit 3226aad81aa6 ("sh: support 1 and 2 byte xchg")
Suggested-by: Peter Zijlstra (Intel)
Signed-off-by: Pan Xinhui
---
chan