of patches works fine.
Feel free to add
Tested-by: Raghavendra K T #kvm pv
As far as performance is concerned (on my 16-core +HT machine running
16-vcpu guests [ both with and without the lfsr hash patchset ]), I have
nothing significant to report, though I understand that we
could see much
On 03/20/2015 02:38 AM, Waiman Long wrote:
On 03/19/2015 06:01 AM, Peter Zijlstra wrote:
[...]
You are probably right. The initial apply_paravirt() was done before the
SMP boot. Subsequent ones were at kernel module load time. I put a
counter in the __native_queue_spin_unlock() and it registere
On 02/24/2015 08:50 PM, Greg KH wrote:
On Tue, Feb 24, 2015 at 03:47:37PM +0100, Ingo Molnar wrote:
* Greg KH wrote:
On Tue, Feb 24, 2015 at 02:54:59PM +0530, Raghavendra K T wrote:
Paravirt spinlock clears slowpath flag after doing unlock.
As explained by Linus currently it does
On 02/24/2015 08:17 PM, Ingo Molnar wrote:
* Greg KH wrote:
On Tue, Feb 24, 2015 at 02:54:59PM +0530, Raghavendra K T wrote:
Paravirt spinlock clears slowpath flag after doing unlock.
As explained by Linus currently it does:
prev = *lock;
add_smp(&
0.02
dbench 1x -1.77
dbench 2x -0.63
[Jeremy: hinted missing TICKET_LOCK_INC for kick]
[Oleg: Moving slowpath flag to head, ticket_equals idea]
[PeterZ: Detailed changelog]
Reported-by: Sasha Levin
Suggested-by: Linus Torvalds
Signed-off-by: Raghavendra K T
Review
On 02/16/2015 10:17 PM, David Vrabel wrote:
On 15/02/15 17:30, Raghavendra K T wrote:
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -41,7 +41,7 @@ static u8 zero_stats;
static inline void check_zero(void)
{
u8 ret;
- u8 old = ACCESS_ONCE(zero_stats
On 02/15/2015 09:47 PM, Oleg Nesterov wrote:
Well, I regret I mentioned the lack of barrier after enter_slowpath ;)
On 02/15, Raghavendra K T wrote:
@@ -46,7 +46,8 @@ static __always_inline bool static_key_false(struct
static_key *key);
static inline void __ticket_enter_slowpath
* Raghavendra K T [2015-02-15 11:25:44]:
Resending the V5 with smp_mb__after_atomic() change without bumping up
revision
---8<---
From 0b9ecde30e3bf5b5b24009fd2ac5fc7ac4b81158 Mon Sep 17 00:00:00 2001
From: Raghavendra K T
Date: Fri, 6 Feb 2015 16:44:11 +0530
Subject: [PATCH RESEND V
On 02/15/2015 11:25 AM, Raghavendra K T wrote:
Paravirt spinlock clears slowpath flag after doing unlock.
As explained by Linus currently it does:
prev = *lock;
add_smp(&lock->tickets.head, TICKET_LOCK_INC);
/* add_smp() is a
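The racy unlock pattern being quoted can be modeled in plain C11. This is a userspace sketch only: the union layout and the TICKET_LOCK_INC/TICKET_SLOWPATH_FLAG values mirror the thread's discussion of arch_spinlock_t, and old_unlock_saw_slowpath() is a hypothetical name for the pattern Linus objected to (snapshot *lock, bump head, then consult the stale snapshot after the lock is already free):

```c
#include <assert.h>
#include <stdint.h>

#define TICKET_LOCK_INC      2   /* tickets advance by 2 when PV is on */
#define TICKET_SLOWPATH_FLAG 1   /* low bit marks slowpath */

typedef union {
    uint32_t head_tail;
    struct { uint16_t head, tail; } tickets;
} arch_spinlock_t;

/* Old (racy) order: read *lock, bump head (the add_smp() in the quote),
 * then inspect the stale snapshot -- the lock can be freed and even
 * reused between the add and the slowpath check. */
static int old_unlock_saw_slowpath(arch_spinlock_t *lock)
{
    arch_spinlock_t prev = *lock;                     /* snapshot */
    __atomic_fetch_add(&lock->tickets.head, (uint16_t)TICKET_LOCK_INC,
                       __ATOMIC_RELEASE);             /* unlock */
    return prev.tickets.tail & TICKET_SLOWPATH_FLAG;  /* read-after-unlock */
}
```

The write to head releases the lock, so any read of `prev` after it is a write-after-unlock-era access, which is exactly what the series removes.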
0.02
dbench 1x -1.77
dbench 2x -0.63
[Jeremy: hinted missing TICKET_LOCK_INC for kick]
[Oleg: Moving slowpath flag to head, ticket_equals idea]
[PeterZ: Detailed changelog]
Reported-by: Sasha Levin
Suggested-by: Linus Torvalds
Signed-off-by: Raghavendra K
On 02/13/2015 09:02 PM, Oleg Nesterov wrote:
On 02/13, Raghavendra K T wrote:
@@ -164,7 +161,7 @@ static inline int arch_spin_is_locked(arch_spinlock_t *lock)
{
struct __raw_tickets tmp = READ_ONCE(lock->tickets);
- return tmp.tail != tmp.head;
+ return tmp.t
0.02
dbench 1x -1.77
dbench 2x -0.63
[Jeremy: hinted missing TICKET_LOCK_INC for kick]
[Oleg: Moving slowpath flag to head, ticket_equals idea]
[PeterZ: Detailed changelog]
Reported-by: Sasha Levin
Suggested-by: Linus Torvalds
Signed-off-by: Raghavendra K
On 02/12/2015 08:30 PM, Peter Zijlstra wrote:
On Thu, Feb 12, 2015 at 05:17:27PM +0530, Raghavendra K T wrote:
[...]
Linus suggested that we should not do any writes to lock after unlock(),
and we can move slowpath clearing to fastpath lock.
So this patch implements the fix with:
1. Moving
On 02/12/2015 07:32 PM, Oleg Nesterov wrote:
Damn, sorry for noise, forgot to mention...
On 02/12, Raghavendra K T wrote:
+static inline void __ticket_check_and_clear_slowpath(arch_spinlock_t *lock,
+ __ticket_t head)
+{
+ if (head
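The truncated __ticket_check_and_clear_slowpath() above can be sketched in userspace C11. The structure follows the idea discussed in the thread (clear the flag from the lock fastpath with a single cmpxchg over the whole head_tail, succeeding only when there are no other contenders), but the layout and atomics here are assumptions for illustration, not the kernel code:

```c
#include <assert.h>
#include <stdint.h>

#define TICKET_LOCK_INC      2
#define TICKET_SLOWPATH_FLAG 1

typedef union {
    uint32_t head_tail;
    struct { uint16_t head, tail; } tickets;
} arch_spinlock_t;

/* Clear the slowpath flag from the lock fastpath, but only when we are
 * the sole holder (tail is exactly one ticket ahead of head). A failed
 * cmpxchg just means there is contention; someone else cleans up later. */
static void check_and_clear_slowpath(arch_spinlock_t *lock, uint16_t head)
{
    if (head & TICKET_SLOWPATH_FLAG) {
        arch_spinlock_t old, new;

        old.tickets.head = head;
        new.tickets.head = head & ~TICKET_SLOWPATH_FLAG;
        old.tickets.tail = new.tickets.head + TICKET_LOCK_INC;
        new.tickets.tail = old.tickets.tail;

        /* stand-in for the kernel's cmpxchg() on head_tail */
        __atomic_compare_exchange_n(&lock->head_tail, &old.head_tail,
                                    new.head_tail, 0,
                                    __ATOMIC_ACQ_REL, __ATOMIC_RELAXED);
    }
}
```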
On 02/12/2015 07:20 PM, Oleg Nesterov wrote:
On 02/12, Raghavendra K T wrote:
@@ -191,8 +189,7 @@ static inline void arch_spin_unlock_wait(arch_spinlock_t
*lock)
* We need to check "unlocked" in a loop, tmp.head == head
* can be false positive
On 02/12/2015 07:07 PM, Oleg Nesterov wrote:
On 02/12, Raghavendra K T wrote:
@@ -772,7 +773,8 @@ __visible void kvm_lock_spinning(struct arch_spinlock
*lock, __ticket_t want)
* check again make sure it didn't become free while
* we weren't looking.
*/
x -0.63
[Jeremy: hinted missing TICKET_LOCK_INC for kick]
[Oleg: Moving slowpath flag to head, ticket_equals idea]
Reported-by: Sasha Levin
Suggested-by: Linus Torvalds
Signed-off-by: Raghavendra K T
---
arch/x86/include/asm/spinlock.h | 87 -
ar
On 02/11/2015 11:08 PM, Oleg Nesterov wrote:
On 02/11, Raghavendra K T wrote:
On 02/10/2015 06:56 PM, Oleg Nesterov wrote:
In this case __ticket_check_and_clear_slowpath() really needs to cmpxchg
the whole .head_tail. Plus obviously more boring changes. This needs a
separate patch even _if_
On 02/10/2015 06:56 PM, Oleg Nesterov wrote:
On 02/10, Raghavendra K T wrote:
On 02/10/2015 06:23 AM, Linus Torvalds wrote:
add_smp(&lock->tickets.head, TICKET_LOCK_INC);
if (READ_ONCE(lock->tickets.tail) & TICKET_SLOWPATH_FLAG) ..
into something like
On 02/10/2015 06:23 AM, Linus Torvalds wrote:
On Mon, Feb 9, 2015 at 4:02 AM, Peter Zijlstra wrote:
On Mon, Feb 09, 2015 at 03:04:22PM +0530, Raghavendra K T wrote:
So we have 3 choices,
1. xadd
2. continue with current approach.
3. a read before unlock and also after that.
For the truly
Ccing Davidlohr (sorry, I got confused with a similar address in the cc
list).
On 02/09/2015 08:44 PM, Oleg Nesterov wrote:
On 02/09, Raghavendra K T wrote:
+static inline void __ticket_check_and_clear_slowpath(arch_spinlock_t *lock)
+{
+ arch_spinlock_t old, new;
+ __ticket_t diff
ll
could be set when somebody does arch_trylock. Handle that too by ignoring
slowpath flag during lock availability check.
[Jeremy: hinted missing TICKET_LOCK_INC for kick]
Reported-by: Sasha Levin
Suggested-by: Linus Torvalds
Signed-off-by: Raghavendra K T
---
ar
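The "ignore the slowpath flag during the lock availability check" idea from this changelog amounts to a masked comparison of the two tickets; a minimal sketch (the flag value and the ticket_equals name are taken from the thread, but this is only an illustration):

```c
#include <assert.h>
#include <stdint.h>

#define TICKET_SLOWPATH_FLAG ((uint16_t)1)

/* Compare two tickets while masking out TICKET_SLOWPATH_FLAG, so a lock
 * whose flag was left set by arch_trylock still reads as available. */
static int ticket_equals(uint16_t one, uint16_t two)
{
    return !((one ^ two) & ~TICKET_SLOWPATH_FLAG);
}
```

With this helper, "lock is free" becomes `ticket_equals(head, tail)` regardless of whether the flag bit happens to be set.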
On 02/09/2015 05:32 PM, Peter Zijlstra wrote:
On Mon, Feb 09, 2015 at 03:04:22PM +0530, Raghavendra K T wrote:
So we have 3 choices,
1. xadd
2. continue with current approach.
3. a read before unlock and also after that.
For the truly paranoid we have probe_kernel_address(), suppose the lock
On 02/09/2015 02:44 AM, Jeremy Fitzhardinge wrote:
On 02/06/2015 06:49 AM, Raghavendra K T wrote:
[...]
Linus suggested that we should not do any writes to lock after unlock(),
and we can move slowpath clearing to fastpath lock.
Yep, that seems like a sound approach.
Current approach
On 02/07/2015 12:27 AM, Sasha Levin wrote:
On 02/06/2015 09:49 AM, Raghavendra K T wrote:
Paravirt spinlock clears slowpath flag after doing unlock.
As explained by Linus currently it does:
prev = *lock;
add_smp(&lock->tickets.head, TICKET_L
On 02/06/2015 09:55 PM, Linus Torvalds wrote:
On Fri, Feb 6, 2015 at 6:49 AM, Raghavendra K T
wrote:
Paravirt spinlock clears slowpath flag after doing unlock.
[ fix edited out ]
So I'm not going to be applying this for 3.19, because it's much too
late and the patch is too scary
ll
could be set when somebody does arch_trylock. Handle that too by ignoring
slowpath flag during lock availability check.
Reported-by: Sasha Levin
Suggested-by: Linus Torvalds
Signed-off-by: Raghavendra K T
---
arch/x86/include/asm/spinlock.h | 70 -
1 file chang
On 01/21/2015 01:42 AM, Waiman Long wrote:
This patch renames the paravirt_ticketlocks_enabled static key to a
more generic paravirt_spinlocks_enabled name.
Signed-off-by: Waiman Long
Signed-off-by: Peter Zijlstra
---
Reviewed-by: Raghavendra K T
--
To unsubscribe from this list: send the
On 11/28/2014 04:28 PM, Christian Borntraeger wrote:
Am 28.11.2014 um 11:08 schrieb Raghavendra KT:
I was able to test the patch; here is the result. I have not tested with
bigger VMs though. The results make it difficult to talk about any side
effect of
the patch, if any.
Thanks a lot.
If our assumptio
On 08/21/2014 09:38 PM, Radim Krčmář wrote:
ple_window is preserved in the VMCS, so we only need to write it after a change.
Do this by keeping a dirty bit.
Signed-off-by: Radim Krčmář
Reviewed-by: Raghavendra KT
On 08/21/2014 09:38 PM, Radim Krčmář wrote:
ple_window is updated on every vmentry, so there is no reason to have it
read-only anymore.
ple_window* weren't writable, to prevent runtime overflow races;
these are now prevented by a seqlock.
Signed-off-by: Radim Krčmář
---
arch/x86/kvm/vmx.c | 46
On 08/21/2014 09:38 PM, Radim Krčmář wrote:
Window is increased on every PLE exit and decreased on every sched_in.
The idea is that we don't want to PLE exit if there is no preemption
going on.
We do this with sched_in() because it does not hold rq lock.
There are two new kernel parameters for c
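The grow-on-PLE-exit / shrink-on-sched_in policy described above can be sketched as follows. Parameter names and defaults are assumptions modeled on the ple_window* module parameters mentioned in the series, not the actual vmx.c code:

```c
#include <assert.h>

/* Illustrative stand-ins for the series' module parameters. */
static int ple_window        = 4096;        /* seed / default window   */
static int ple_window_grow   = 2;           /* multiplier on PLE exit  */
static int ple_window_shrink = 0;           /* <2 => snap to default   */
static int ple_window_max    = 512 * 4096;  /* overflow guard          */

/* Grow the per-VCPU window on every PLE exit: spinning while others
 * run means exits are not helping, so tolerate longer spins. */
static int grow_ple_window(int w)
{
    long grown = (long)w * ple_window_grow;
    return grown > ple_window_max ? ple_window_max : (int)grown;
}

/* Shrink on sched_in: preemption just happened, so exit eagerly again. */
static int shrink_ple_window(int w)
{
    if (ple_window_shrink < 2)
        return ple_window;          /* reset to the default */
    return w / ple_window_shrink;
}
```

The clamp against ple_window_max is what makes the runtime parameters safe to expose as writable.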
On 08/21/2014 09:38 PM, Radim Krčmář wrote:
Change PLE window into per-VCPU variable, seeded from module parameter,
to allow greater flexibility.
Brings in a small overhead on every vmentry.
Signed-off-by: Radim Krčmář
with intelligent update in patch 7
Reviewed-by: Raghavendra KT
On 08/21/2014 09:38 PM, Radim Krčmář wrote:
The sched_in preempt notifier is available for x86; allow its use in
specific virtualization technologies as well.
Signed-off-by: Radim Krčmář
Reviewed-by: Raghavendra KT
On 08/21/2014 09:38 PM, Radim Krčmář wrote:
Introduce preempt notifiers for architecture specific code.
Advantage over creating a new notifier in every arch is slightly simpler
code and guaranteed call order with respect to kvm_sched_in.
Signed-off-by: Radim Krčmář
---
Reviewed-by: Raghavendr
On 08/21/2014 09:38 PM, Radim Krčmář wrote:
v2 -> v3:
* copy&paste frenzy [v3 4/7] (split modify_ple_window)
* commented update_ple_window_actual_max [v3 4/7]
* renamed shrinker to modifier [v3 4/7]
* removed an extraneous max(new, ple_window) [v3 4/7] (should have been in v2)
* changed
On 08/21/2014 10:00 PM, Paolo Bonzini wrote:
Il 21/08/2014 18:08, Radim Krčmář ha scritto:
v2 -> v3:
* copy&paste frenzy [v3 4/7] (split modify_ple_window)
* commented update_ple_window_actual_max [v3 4/7]
* renamed shrinker to modifier [v3 4/7]
* removed an extraneous max(new, ple_windo
K T
guest cpus 32 host cpus.
Signed-off-by: Christian Borntraeger
CC: Rik van Riel
CC: Raghavendra K T
CC: Michael Mueller
---
Please feel free to add
Reviewed-by: Raghavendra K T
I could see a very small improvement while testing a 32-vcpu guest booting
on x86 (16-pcpu host +HT).
I was just
For baremetal we continue to have 'fully fair ticketlock' with this patch
series.
But but but, we're looking at removing ticket locks. So why do we want
to invest in them now?
I have nothing against qspinlock. I am happy to test it or contribute to
it where I can.
With this patch we get exc
On 07/01/2014 01:35 PM, Peter Zijlstra wrote:
On Sat, Jun 28, 2014 at 02:47:04PM +0530, Raghavendra K T wrote:
In a virtualized environment there are mainly three problems
related to spinlocks that affect performance.
1. LHP (lock holder preemption)
2. Lock Waiter Preemption (LWP)
3. Starvation
property of
fair locks.
Baremetal:
No significant performance difference even with CONFIG_PARAVIRT_SPINLOCK enabled
on baremetal
Signed-off-by: Raghavendra K T
---
arch/x86/include/asm/spinlock.h | 71 +--
arch/x86/include/asm/spinlock_types.h | 46
On 06/15/2014 06:17 PM, Peter Zijlstra wrote:
Signed-off-by: Peter Zijlstra
---
[...]
+
+void kvm_wait(int *ptr, int val)
+{
+ unsigned long flags;
+
+ if (in_nmi())
+ return;
+
+ /*
+* Make sure an interrupt handler can't upset things in a
+* par
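The quoted kvm_wait() is cut off, but the key idiom it is building up to, re-checking the lock word before halting so a concurrent kick is not lost, can be modeled in userspace C11 (`halted` is a stand-in for safe_halt(); the real code does this with interrupts disabled so the window is actually closed):

```c
#include <assert.h>
#include <stdatomic.h>

static int halted;   /* stands in for the vcpu actually halting */

/* Only halt if the lock word still holds the value we expect to wait
 * on; if it has changed, a kicker already ran and we must not sleep. */
static void kvm_wait_sketch(atomic_int *ptr, int val)
{
    if (atomic_load(ptr) != val)
        return;              /* the kick beat us: return immediately */
    halted = 1;              /* safe_halt() in the real code */
}
```

Without the re-check, a kick delivered between queueing and halting would leave the vcpu asleep with no one left to wake it.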
On 05/30/2014 04:15 AM, Waiman Long wrote:
On 05/28/2014 08:16 AM, Raghavendra K T wrote:
- we need an intelligent way to nullify the effect of batching for
baremetal
(because extra cmpxchg is not required).
To do this, you will need to have 2 slightly different algorithms
depending on the
On 05/29/2014 12:16 PM, Peter Zijlstra wrote:
On Wed, May 28, 2014 at 05:46:39PM +0530, Raghavendra K T wrote:
In a virtualized environment there are mainly three problems
related to spinlocks that affect performance.
1. LHP (lock holder preemption)
2. Lock Waiter Preemption (LWP)
3. Starvation
On 05/29/2014 03:25 AM, Rik van Riel wrote:
On 05/28/2014 08:16 AM, Raghavendra K T wrote:
This patch looks very promising.
Thank you Rik.
[...]
- My kernbench/ebizzy test on baremetal (32-cpu +HT Sandybridge) did not seem to
show the impact of the extra cmpxchg. But there should be an effect
better)
base48.9 sec
patched 48.8 sec
Signed-off-by: Raghavendra K T
---
arch/x86/include/asm/spinlock.h | 35 +--
arch/x86/include/asm/spinlock_types.h | 14 ++
arch/x86/kernel/kvm.c | 6 --
3 files changed, 39 insertions
ity ones. The
main purpose is to make the lock contention problems more tolerable
until someone can spend the time and effort to fix them.
For kvm part feel free to add:
Tested-by: Raghavendra K T
V9 testing has shown no hangs.
I was able to do some performance testing. Here are the results:
On 04/17/2014 10:53 PM, Konrad Rzeszutek Wilk wrote:
On Thu, Apr 17, 2014 at 11:03:52AM -0400, Waiman Long wrote:
v8->v9:
- Integrate PeterZ's version of the queue spinlock patch with some
modification:
http://lkml.kernel.org/r/20140310154236.038181...@infradead.org
- Break the m
On 04/09/2014 12:45 AM, Waiman Long wrote:
Yes, I am able to reproduce the hang problem with ebizzy. BTW, could you
try to apply the attached patch file on top of the v8 patch series to
see if it can fix the hang problem?
I ran the benchmarks with the fix and I am not seeing the hang so far.
ebizzy i
On 04/07/2014 10:08 PM, Waiman Long wrote:
On 04/07/2014 02:14 AM, Raghavendra K T wrote:
[...]
But I am seeing a hang in overcommit cases. Gdb showed that many vcpus
are halted and there was no progress. Suspecting a problem/race with
halting, I removed the halt() part of kvm_hibernate(). I
On 04/02/2014 06:57 PM, Waiman Long wrote:
N.B. Sorry for the duplicate. This patch series was resent as the
original was rejected by the vger.kernel.org list server
due to a long header. There is no change in content.
v7->v8:
- Remove one unneeded atomic operation from the slo
On 10/30/2013 07:53 PM, Greg KH wrote:
[...]
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index a0e2a8a..e475fdb 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -622,7 +622,7 @@ static int __init kvm_spinlock_debugfs(void)
d_kvm = kvm_init_debugfs();
On 10/30/2013 01:03 AM, Linus Torvalds wrote:
On Tue, Oct 29, 2013 at 12:27 PM, Raghavendra K T
wrote:
Could one solution be to propagate the actual error
that is lost in fs/debugfs/inode.c:__create_file(), so that we could
take the correct action when debugfs_create_dir() fails?
(ugly side
On 10/30/2013 01:30 AM, Greg KH wrote:
[...]
debugfs_create_dir() currently returns NULL dentry on both
EEXIST, ENOMEM ... cases.
Could one solution be to propagate the actual error
that is lost in fs/debugfs/inode.c:__create_file(), so that we could
take the correct action in case of failure of debugfs_cr
Adding Greg/Al too since we touch debugfs code.
[...]
sudo modprobe kvm_amd
modprobe: ERROR: could not insert 'kvm_amd': Bad address
"Bad address"? Christ people, are you guys making up error numbers
with some kind of dice-roll? I can just see it now, somebody sitting
there with a D20, playi
Since the paravirt spinlock optimizations are in the 3.12 kernel, we get a
very good performance benefit for paravirtualized KVM/Xen kernels.
Also, we no longer suffer from the 5% side effect on native kernels.
Signed-off-by: Raghavendra K T
---
Would like to thank Sander for spotting and suggesting this
On 10/09/2013 02:33 PM, Raghavendra K T wrote:
We use jump label to enable pv-spinlock. With the changes in
(442e0973e927 Merge branch 'x86/jumplabel'), the jump label behaviour has
changed in a way that would result in an eventual hang of the VM, since we
would end up in a situation where slow
boost other vcpus, and
dramatically reduce the overhead.
Branch available at:
git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git
kvm-arm64/wfe-trap
Changes from v1:
- Added CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT, as it seems to give
slightly better results (Thanks to Raghavendr
g and also make jump label enabling after jump_label_init().
Signed-off-by: Raghavendra K T
---
Thanks to Andrew Theurer, who reported the weird behaviour of pvspinlock
in 3.12-rc that led to my git bisection and investigation, and to Konrad
for his jump label findings for Xen.
arch/x86/kernel/
On 10/08/2013 08:36 PM, Marc Zyngier wrote:
Just gave it a go, and the results are slightly (but consistently)
worse. Over 10 runs:
Without RELAX_INTERCEPT: Average run 3.3623s
With RELAX_INTERCEPT: Average run 3.4226s
Not massive, but still noticeable. Any clue?
Is it a 4x overcommit? Proba
[...]
+ kvm_vcpu_on_spin(vcpu);
Could you also enable CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT for arm and
check if ple handler logic helps further?
We would ideally get one more optimization folded into the ple handler if
you enable that.
Just gave it a go, and the results are slightly
On 09/12/2013 01:58 PM, Michael S. Tsirkin wrote:
On Thu, Sep 12, 2013 at 01:00:11PM +0530, Raghavendra K T wrote:
Thanks to Michael S. Tsirkin for rewriting the description and for the suggestions.
Signed-off-by: Raghavendra K T
Acked-by: Michael S. Tsirkin
Gleb, Paolo,
Does it look good for merge
Thanks to Michael S. Tsirkin for rewriting the description and for the suggestions.
Signed-off-by: Raghavendra K T
---
Changes in V3:
Keep msr specific info only as suggested by Michael.
Documentation/virtual/kvm/cpuid.txt | 7 +++
1 file changed, 7 insertions(+)
diff --git a/Documentation
On 09/12/2013 11:14 AM, Michael S. Tsirkin wrote:
On Wed, Sep 04, 2013 at 02:18:46PM +0530, Raghavendra K T wrote:
[...]
--
+KVM_FEATURE_STEAL_TIME || 5 || Steal time available at msr
On 09/04/2013 02:18 PM, Raghavendra K T wrote:
Signed-off-by: Raghavendra K T
---
Changes in V2:
Correction in the description of steal time and added msr info (Michael S
Tsirkin)
Documentation/virtual/kvm/cpuid.txt | 10 ++
1 file changed, 10 insertions(+)
diff --git a
Signed-off-by: Raghavendra K T
---
Changes in V2:
Correction in the description of steal time and added msr info (Michael S
Tsirkin)
Documentation/virtual/kvm/cpuid.txt | 10 ++
1 file changed, 10 insertions(+)
diff --git a/Documentation/virtual/kvm/cpuid.txt
b/Documentation
On 08/26/2013 12:37 PM, Michael S. Tsirkin wrote:
I would change the description to merely say what the CPUID bits
mean, and what they mean is exactly that an MSR is valid.
Use KVM_FEATURE_ASYNC_PF as a template.
Thank you for the review.
Changing the doc accordingly by adding msr info. Please
On 08/26/2013 03:34 PM, Gleb Natapov wrote:
On Mon, Aug 26, 2013 at 02:18:32PM +0530, Raghavendra K T wrote:
This series forms the kvm host part of paravirtual spinlocks,
based against the kvm tree.
Please refer to https://lkml.org/lkml/2013/8/9/265 for
kvm guest and Xen, x86 part merged to
Note that we are using APIC_DM_REMRD, which has reserved usage.
In the future, if APIC_DM_REMRD usage is standardized, we should
find some other way or go back to the old method.
Suggested-by: Gleb Natapov
Signed-off-by: Raghavendra K T
Acked-by: Gleb Natapov
Acked-by: Ingo Molnar
---
arch/x86
both guest and host.
Changes since V12:
fold the patch 3 into patch 2 for bisection. (Eric Northup)
Raghavendra K T (3):
kvm uapi: Add KICK_CPU and PV_UNHALT definition to uapi
kvm hypervisor: Simplify kvm_for_each_vcpu with
kvm_irq_delivery_to_apic
Documentation/kvm : Add
: Raghavendra K T
Acked-by: Gleb Natapov
Acked-by: Ingo Molnar
---
Documentation/virtual/kvm/cpuid.txt | 4
Documentation/virtual/kvm/hypercalls.txt | 14 ++
2 files changed, 18 insertions(+)
diff --git a/Documentation/virtual/kvm/cpuid.txt
b/Documentation/virtual/kvm
this is needed by both guest and host.
Originally-from: Srivatsa Vaddagiri
Signed-off-by: Raghavendra K T
Acked-by: Gleb Natapov
Acked-by: Ingo Molnar
---
arch/x86/include/uapi/asm/kvm_para.h | 1 +
include/uapi/linux/kvm_para.h| 1 +
2 files changed, 2 insertions(+)
diff --git a
suggested by Eric Northup]
Signed-off-by: Raghavendra K T
Acked-by: Gleb Natapov
Acked-by: Ingo Molnar
---
arch/x86/include/asm/kvm_host.h | 5 +
arch/x86/kvm/cpuid.c| 3 ++-
arch/x86/kvm/x86.c | 44 -
3 files changed, 50
Signed-off-by: Raghavendra K T
---
While adding documentation for pvspinlock, I found that these two should
be updated. I have based this on top of pvspinlock kvm host patchset (V12)
Documentation/virtual/kvm/cpuid.txt | 9 +
1 file changed, 9 insertions(+)
diff --git a/Documentation
On 08/14/2013 01:32 AM, Raghavendra K T wrote:
Ingo, below delta patch should fix it, IIRC, I hope you will be folding this
back to patch 14/14 itself. Else please let me.
I have already run allnoconfig, allyesconfig, randconfig with below patch. But
will
test again.
I did 2 more runs of
On 08/14/2013 01:30 AM, Jeremy Fitzhardinge wrote:
On 08/13/2013 01:02 PM, Raghavendra K T wrote:
[...]
Ingo, below delta patch should fix it, IIRC, I hope you will be folding this
back to patch 14/14 itself. Else please let me.
it was.. s/Please let me know/
[...]
-static DEFINE_PER_CPU
tch. But
will
test again. This should apply on top of tip:x86/spinlocks.
---8<---
From: Raghavendra K T
Fix Namespace collision for lock_waiting
Signed-off-by: Raghavendra K T
---
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index d442471..b8ef630 100644
--- a/arch/x86/kerne
* Raghavendra K T [2013-08-09 19:52:02]:
From 10e92f7911a8aed5b8574f53607ffc5d094d4de1 Mon Sep 17 00:00:00 2001
From: Srivatsa Vaddagiri
Date: Tue, 6 Aug 2013 14:55:41 +0530
Subject: [PATCH V13 RESEND 14/14] kvm : Paravirtual ticketlocks support for
linux
guests running on KVM hypervi
* Raghavendra K T [2013-08-09 19:52:02]:
Resending because x86_cpu_to_apicid is defined only for SMP systems,
so fold the kvm_kick_vcpu function back into CONFIG_PARAVIRT_SPINLOCK, which
depends on SMP. (This was taken out for pv-flushtlb usage.)
---8<---
51)
Changes in V6 posting: (Raghavendra K T)
- Rebased to linux-3.3-rc6.
- used function+enum in place of macro (better type checking)
- use cmpxchg while resetting zero status for possible race
[suggested by Dave Hansen for KVM patches ]
KVM patch Change history:
Changes in V6:
-
.
Signed-off-by: Srivatsa Vaddagiri
Signed-off-by: Suzuki Poulose
[Raghu: check_zero race fix, enum for kvm_contention_stat, jumplabel related
changes,
addition of safe_halt for irq enabled case, bailout spinning in nmi case(Gleb)]
Signed-off-by: Raghavendra K T
Acked-by: Gleb Natapov
Acked-by
remy Fitzhardinge
Signed-off-by: Srivatsa Vaddagiri
Reviewed-by: Konrad Rzeszutek Wilk
Cc: Stephan Diestelhorst
Signed-off-by: Raghavendra K T
Acked-by: Ingo Molnar
---
arch/x86/include/asm/paravirt.h | 2 +-
arch/x86/include/asm/spinlock.h | 86 +--
arch/
The code size expands somewhat, and it's better to just call
a function rather than inline it.
Thanks Jeremy for original version of ARCH_NOINLINE_SPIN_UNLOCK config patch,
which is simplified.
Suggested-by: Linus Torvalds
Reviewed-by: Konrad Rzeszutek Wilk
Signed-off-by: Raghavendra K T
Acked
Jeremy Fitzhardinge
Reviewed-by: Konrad Rzeszutek Wilk
Signed-off-by: Raghavendra K T
Acked-by: Ingo Molnar
---
arch/x86/xen/spinlock.c | 46 --
1 file changed, 40 insertions(+), 6 deletions(-)
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spin
callee-save
calling convention, which defers all the save/restores until the actual
function is called, keeping the fastpath clean.
Signed-off-by: Jeremy Fitzhardinge
Reviewed-by: Konrad Rzeszutek Wilk
Tested-by: Attilio Rao
Signed-off-by: Raghavendra K T
Acked-by: Ingo Molnar
---
arch/x86
ms are probably
specially built for the hardware rather than a generic distro
kernel.
Signed-off-by: Jeremy Fitzhardinge
Reviewed-by: Konrad Rzeszutek Wilk
Tested-by: Attilio Rao
Signed-off-by: Raghavendra K T
Acked-by: Ingo Molnar
---
arch/x86/include/asm/spinlock.h | 10 +-
arch/
From: Jeremy Fitzhardinge
Signed-off-by: Jeremy Fitzhardinge
Reviewed-by: Konrad Rzeszutek Wilk
Signed-off-by: Raghavendra K T
Acked-by: Ingo Molnar
---
arch/x86/xen/spinlock.c | 14 ++
1 file changed, 14 insertions(+)
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen
From: Jeremy Fitzhardinge
There's no need to do it at very early init, and doing it there
makes it impossible to use the jump_label machinery.
Signed-off-by: Jeremy Fitzhardinge
Reviewed-by: Konrad Rzeszutek Wilk
Signed-off-by: Raghavendra K T
Acked-by: Ingo Molnar
---
arch/x86/xen/
ff-by: Raghavendra K T
Acked-by: Ingo Molnar
---
arch/x86/include/asm/spinlock.h | 35 +--
1 file changed, 5 insertions(+), 30 deletions(-)
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 4d54244..7442410 100644
--- a/arch/x86/includ
this is needed by both guest and host.
Originally-from: Srivatsa Vaddagiri
Signed-off-by: Raghavendra K T
Acked-by: Gleb Natapov
Acked-by: Ingo Molnar
---
arch/x86/include/uapi/asm/kvm_para.h | 1 +
include/uapi/linux/kvm_para.h| 1 +
2 files changed, 2 insertions(+)
diff --git a
PLE enabled cases,
and undercommits results are flat
Signed-off-by: Jeremy Fitzhardinge
Reviewed-by: Konrad Rzeszutek Wilk
Tested-by: Attilio Rao
[ Raghavendra: Changed SPIN_THRESHOLD, fixed redefinition of arch_spinlock_t]
Signed-off-by: Raghavendra K T
Acked-by: Ingo Molnar
---
arch/x86/incl
From: Srivatsa Vaddagiri
Signed-off-by: Srivatsa Vaddagiri
Signed-off-by: Suzuki Poulose
Signed-off-by: Raghavendra K T
Acked-by: Gleb Natapov
Acked-by: Ingo Molnar
---
arch/x86/Kconfig | 9 +
1 file changed, 9 insertions(+)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index
g for zero status
reset
Reintroduce break since we know the exact vCPU to send IPI as suggested by
Konrad.]
Signed-off-by: Raghavendra K T
Acked-by: Ingo Molnar
---
arch/x86/xen/spinlock.c | 348 +++-
1 file changed, 79 insertions(+), 269 deletions(-)
is patch splits out the rate limiting related
changes from jump_label.h into a new file, jump_label_ratelimit.h, to
resolve the issue.
Signed-off-by: Andrew Jones
Reviewed-by: Konrad Rzeszutek Wilk
Signed-off-by: Raghavendra K T
Acked-by: Ingo Molnar
---
include/linux/jump_label.h
On 08/09/2013 06:34 AM, H. Peter Anvin wrote:
The kbuild test bot is reporting some pretty serious errors for this
patchset. I think these are serious enough that the patchset will need
to be respun.
Sent V13, there were 3 patches in total that changed due to dependency.
On 08/09/2013 06:34 AM, H. Peter Anvin wrote:
The kbuild test bot is reporting some pretty serious errors for this
patchset. I think these are serious enough that the patchset will need
to be respun.
There were two problems:
(1) we were including spinlock_types.h in
arch/x86/include/asm/para
On 08/09/2013 06:30 PM, Konrad Rzeszutek Wilk wrote:
My bad. I'll send out in uniform digit form next time.
If you use 'git format-patch --subject-prefix "PATCH V14" v3.11-rc4..'
and 'git send-email --subject "[PATCH V14] bla blah" ..'
that should be automatically taken care of?
Thanks Kon
On 08/09/2013 04:34 AM, H. Peter Anvin wrote:
Okay, I figured it out.
One of several problems with the formatting of this patchset is that it
has one- and two-digit patch numbers in the headers, which meant that my
scripts tried to apply patch 10 first.
My bad. I'll send out in uniform digi
On 08/09/2013 06:34 AM, H. Peter Anvin wrote:
The kbuild test bot is reporting some pretty serious errors for this
patchset. I think these are serious enough that the patchset will need
to be respun.
I am working on that.