Re: Preemptable Ticket Spinlock

2013-04-21 Thread Rik van Riel

On 04/20/2013 06:12 PM, Jiannan Ouyang wrote:

Hello Everyone,

I recently came up with a spinlock algorithm that can adapt to
preemption, which you may be interested in. The intuition is to
downgrade a fair lock to an unfair lock automatically upon preemption,
and preserve the fairness otherwise. It is a guest side optimization,
and can be used as a complementary technique to host side optimizations
like co-scheduling and Pause-Loop Exiting.

In my experiments, it improves VM performance by 5.32X on average, when
running on a non-paravirtual VMM, and by 7.91X when running on a VMM
that supports a paravirtual locking interface (using a pv preemptable
ticket spinlock), when executing a set of microbenchmarks as well as a
realistic e-commerce benchmark.

A detailed algorithm description can be found in my VEE 2013 paper,
Preemptable Ticket Spinlocks: Improving Consolidated Performance in the
Cloud
Jiannan Ouyang, John R. Lange
ouyang,jackla...@cs.pitt.edu 
University of Pittsburgh
http://people.cs.pitt.edu/~ouyang/files/publication/preemptable_lock-ouyang-vee13.pdf


Your algorithm is very clever, and very promising.

However, it does increase the size of the struct spinlock, and adds
an additional atomic operation to spin_unlock, neither of which I
suspect are necessary.

If we always incremented the ticket number by 2 (instead of 1), then
we could use the lower bit of the ticket number as the spinlock.

If we do NOT run virtualized, we simply increment the ticket by 2
in spin_unlock, and the code can remain otherwise the same.

If we do run virtualized, we take that spinlock after acquiring
the ticket (or timing out), just like in your code. In the
virtualized spin_unlock, we can then release the spinlock and
increment the ticket in one operation: by simply increasing the
ticket by 1.

In other words, we should be able to keep the overhead of this
to an absolute minimum, and keep spin_unlock to be always the
same cost it is today.
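To make the encoding concrete, here is a minimal userspace sketch of the idea (names like `sketch_lock`, `TICKET_STEP` and `virt_unlock` are mine, purely illustrative, not kernel identifiers): tickets advance in steps of 2, so bit 0 of head carries no ticket information and can serve as the extra lock bit, and the virtualized unlock clears that bit and bumps the ticket with a single `+1`:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative only -- not the kernel's arch_spinlock_t. */
struct sketch_lock {
	uint16_t head;	/* next ticket served; bit 0 = extra lock bit */
	uint16_t tail;	/* next ticket handed out */
};

#define TICKET_STEP	2	/* tickets live in bits 15:1, bit 0 is free */
#define LOCK_BIT	1

/* Native unlock: bump the ticket by 2, bit 0 stays clear. */
static void native_unlock(struct sketch_lock *l)
{
	l->head += TICKET_STEP;
}

/* Virtualized path: after winning its ticket (or timing out), the
 * holder sets LOCK_BIT, making head odd.  Unlock then adds 1: that
 * clears bit 0 *and* advances head to the next even ticket in one
 * store, so no extra atomic operation is needed. */
static void virt_unlock(struct sketch_lock *l)
{
	l->head += 1;
}
```

E.g. with head = 4, the virtualized holder sets the bit (head = 5) and virt_unlock yields head = 6, exactly the value native_unlock would have produced from 4.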

--
All rights reversed
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Preemptable Ticket Spinlock

2013-04-21 Thread Jiannan Ouyang
On Sun, Apr 21, 2013 at 5:12 PM, Rik van Riel  wrote:
> Your algorithm is very clever, and very promising.
>
> However, it does increase the size of the struct spinlock, and adds
> an additional atomic operation to spin_unlock, neither of which I
> suspect are necessary.
>
> If we always incremented the ticket number by 2 (instead of 1), then
> we could use the lower bit of the ticket number as the spinlock.
>
> If we do NOT run virtualized, we simply increment the ticket by 2
> in spin_unlock, and the code can remain otherwise the same.
>
> If we do run virtualized, we take that spinlock after acquiring
> the ticket (or timing out), just like in your code. In the
> virtualized spin_unlock, we can then release the spinlock and
> increment the ticket in one operation: by simply increasing the
> ticket by 1.
>
> In other words, we should be able to keep the overhead of this
> to an absolute minimum, and keep spin_unlock to be always the
> same cost it is today.
>
> --
> All rights reversed

Hi Rik,

Thanks for your feedback.

Yes, I agree with you:
- increasing the size of struct spinlock is unnecessary
- your idea of utilizing the lower bit to save one atomic operation
in unlock is cool!

I can come up with an updated patch soon.

--Jiannan


Re: Preemptable Ticket Spinlock

2013-04-21 Thread Raghavendra K T

On 04/21/2013 03:42 AM, Jiannan Ouyang wrote:

Hello Everyone,

I recently came up with a spinlock algorithm that can adapt to
preemption, which you may be interested in.


It is overall a great and clever idea as Rik mentioned already.

 The intuition is to

downgrade a fair lock to an unfair lock automatically upon preemption,
and preserve the fairness otherwise.


I also hope that being a little unfair does not affect the original
intention of introducing ticket spinlocks.
Some discussions on this were in this thread long back:
https://lkml.org/lkml/2010/6/3/331

It is a guest side optimization,

and can be used as a complementary technique to host side optimizations
like co-scheduling and Pause-Loop Exiting.

In my experiments, it improves VM performance by 5.32X on average, when
running on a non-paravirtual VMM, and by 7.91X when running on a VMM
that supports a paravirtual locking interface (using a pv preemptable
ticket spinlock), when executing a set of microbenchmarks as well as a
realistic e-commerce benchmark.


AFAIU, the experiments are on non-PLE machines; it would be worth
experimenting on PLE machines too, and also on bigger machines
(we may get some surprises there otherwise).
I'll wait for your next iteration of the patches with the "using lower
bit" changes.




A detailed algorithm description can be found in my VEE 2013 paper,
Preemptable Ticket Spinlocks: Improving Consolidated Performance in the
Cloud
Jiannan Ouyang, John R. Lange
ouyang,jackla...@cs.pitt.edu 
University of Pittsburgh
http://people.cs.pitt.edu/~ouyang/files/publication/preemptable_lock-ouyang-vee13.pdf

The patch is based on stock Linux kernel 3.5.0, and tested on kernel
3.4.41 as well.
http://www.cs.pitt.edu/~ouyang/files/preemptable_lock.tar.gz

Thanks
--Jiannan

I'm not familiar with sending patches over email, so I just pasted it
below; sorry for the inconvenience.
==
diff --git a/arch/x86/include/asm/spinlock.h
b/arch/x86/include/asm/spinlock.h
index b315a33..895d3b3 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -48,18 +48,35 @@
   * in the high part, because a wide xadd increment of the low part
   * would carry up and contaminate the high part.
   */
+#define TIMEOUT_UNIT (1<<14)


This value seems to be at the higher end, but I hope you have
experimented enough to come up with it. It would be better to test all
these tunables on PLE machines too.



  static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
  {
 register struct __raw_tickets inc = { .tail = 1 };
+   unsigned int timeout = 0;
+   __ticket_t current_head;

 inc = xadd(&lock->tickets, inc);
-
+   if (likely(inc.head == inc.tail))
+   goto spin;
+
+   timeout =  TIMEOUT_UNIT * (inc.tail - inc.head);
+   do {
+   current_head = ACCESS_ONCE(lock->tickets.head);
+   if (inc.tail <= current_head) {
+   goto spin;
+   } else if (inc.head != current_head) {
+   inc.head = current_head;
+   timeout =  TIMEOUT_UNIT * (inc.tail - inc.head);


Good idea indeed to base the loop on the head and tail difference. But
for virtualization I believe this "directly proportional" notion is a
little tricky too.
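To make the proportional-timeout behaviour being discussed concrete, here is a hedged userspace model of the wait loop from the patch (the trace-driven shape and function names are mine for testability; the real code polls lock->tickets.head with ACCESS_ONCE inside the spin loop):

```c
#include <assert.h>
#include <stdint.h>

#define TIMEOUT_UNIT (1 << 14)

/* The spin budget is proportional to the caller's distance from the
 * head of the queue, exactly as in the patch. */
static unsigned int spin_budget(uint16_t my_ticket, uint16_t head)
{
	return TIMEOUT_UNIT * (uint16_t)(my_ticket - head);
}

/* Replay the loop against a trace of observed head values, one entry
 * per poll.  Returns 1 if our turn arrived within budget (fair path),
 * 0 if the budget expired, i.e. the waiter ahead of us is presumed
 * preempted and we would fall through to unfair acquisition. */
static int waited_fairly(uint16_t my_ticket, const uint16_t *trace, int n)
{
	uint16_t head = trace[0];
	unsigned int budget = spin_budget(my_ticket, head);

	for (int i = 1; i < n; i++) {
		uint16_t cur = trace[i];
		if (my_ticket <= cur)
			return 1;		/* our turn came */
		if (cur != head) {
			head = cur;		/* head moved: recompute budget */
			budget = spin_budget(my_ticket, head);
		} else if (budget-- == 0) {
			return 0;		/* timed out: suspect preemption */
		}
	}
	return 0;
}
```

The key property the model shows is that every time the head advances, the budget is recomputed from the new distance, so a waiter only times out if the queue in front of it actually stalls.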




Re: Preemptable Ticket Spinlock

2013-04-21 Thread Raghavendra K T

On 04/22/2013 04:37 AM, Jiannan Ouyang wrote:

On Sun, Apr 21, 2013 at 5:12 PM, Rik van Riel  wrote:

Your algorithm is very clever, and very promising.

However, it does increase the size of the struct spinlock, and adds
an additional atomic operation to spin_unlock, neither of which I
suspect are necessary.

If we always incremented the ticket number by 2 (instead of 1), then
we could use the lower bit of the ticket number as the spinlock.

If we do NOT run virtualized, we simply increment the ticket by 2
in spin_unlock, and the code can remain otherwise the same.

If we do run virtualized, we take that spinlock after acquiring
the ticket (or timing out), just like in your code. In the
virtualized spin_unlock, we can then release the spinlock and
increment the ticket in one operation: by simply increasing the
ticket by 1.

In other words, we should be able to keep the overhead of this
to an absolute minimum, and keep spin_unlock to be always the
same cost it is today.

--
All rights reversed


Hi Rik,

Thanks for your feedback.

Yes, I agree with you:
- increasing the size of struct spinlock is unnecessary
- your idea of utilizing the lower bit to save one atomic operation
in unlock is cool!



Yes, +1, it is indeed a cool idea. Thanks to Jeremy.. and as Rik
already mentioned, it would also prevent the side effects of increasing
the lock size. (It reminds me of my thought of encoding the vcpuid in
the lock for pv spinlocks.)



I can come up with an updated patch soon.

--Jiannan






Re: Preemptable Ticket Spinlock

2013-04-22 Thread Peter Zijlstra
On Sun, 2013-04-21 at 17:12 -0400, Rik van Riel wrote:
> 
> If we always incremented the ticket number by 2 (instead of 1), then
> we could use the lower bit of the ticket number as the spinlock.

ISTR that paravirt ticket locks already do that and use the lsb to
indicate the unlock needs to perform wakeups.

Also, since all of this is virt nonsense, shouldn't it live in the
paravirt ticket lock code and leave the native code as is?



Re: Preemptable Ticket Spinlock

2013-04-22 Thread Rik van Riel

On 04/22/2013 07:51 AM, Peter Zijlstra wrote:

On Sun, 2013-04-21 at 17:12 -0400, Rik van Riel wrote:


If we always incremented the ticket number by 2 (instead of 1), then
we could use the lower bit of the ticket number as the spinlock.


ISTR that paravirt ticket locks already do that and use the lsb to
indicate the unlock needs to perform wakeups.

Also, since all of this is virt nonsense, shouldn't it live in the
paravirt ticket lock code and leave the native code as is?


Sure, but that is still no reason not to have the virt
implementation be as fast as possible, and share the same
data type as the non-virt implementation.

Also, is it guaranteed that the native spin_lock code has
not been called yet before we switch over to the paravirt
functions?

If the native spin_lock code has been called already at
that time, the native code would still need to be modified
to increment the ticket number by 2, so we end up with a
compatible value in each spin lock's .tickets field, and
prevent a deadlock after we switch over to the paravirt
variant.

--
All rights reversed


Re: Preemptable Ticket Spinlock

2013-04-22 Thread Jiannan Ouyang
On Mon, Apr 22, 2013 at 1:58 AM, Raghavendra K T
 wrote:

>  The intuition is to
>>
>> downgrade a fair lock to an unfair lock automatically upon preemption,
>> and preserve the fairness otherwise.
>
>
> I also hope being little unfair, does not affect the original intention
> of introducing ticket spinlocks too.
> Some discussions were here long back in this thead,
> https://lkml.org/lkml/2010/6/3/331
>

Good point. I also have the question of why not use an unfair lock in a
virtualized environment, and whether fairness is really a big issue.
However, given that the current kernel uses ticket locks, I assume
fairness is a necessary spinlock feature.

Regarding the fairness of preemptable-lock, I did a user-space
experiment using 8 pCPUs competing on one spinlock, counting lock
acquisitions per thread. Results show that lock acquisition counts are
*almost* evenly distributed between threads with preemptable-lock.

>
> AFAIU, the experiments are on non PLE machines and it would be worth
> experimenting on PLE machines too. and also bigger machines.
> (we may get some surprises there otherwise).
> 'll wait for your next iteration of the patches with "using lower bit"
> changes.
>

Yes, they are on PLE machines. The current implementation and
evaluation are still at the proof-of-concept stage. More experiments
(with PLE, bigger machines, etc.) are needed to better understand the
lock behavior.

>> +#define TIMEOUT_UNIT (1<<14)
>
>
> This value seem to be at the higher end. But I hope you have experimented
> enough to come up with this. Better again to test all these tunables?? on
> PLE machines too.
>
>
I actually didn't tune this parameter at all... But yes, finding a
better value would be necessary for production code.

>>   static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
>>   {
>>  register struct __raw_tickets inc = { .tail = 1 };
>> +   unsigned int timeout = 0;
>> +   __ticket_t current_head;
>>
>>  inc = xadd(&lock->tickets, inc);
>> -
>> +   if (likely(inc.head == inc.tail))
>> +   goto spin;
>> +
>> +   timeout =  TIMEOUT_UNIT * (inc.tail - inc.head);
>> +   do {
>> +   current_head = ACCESS_ONCE(lock->tickets.head);
>> +   if (inc.tail <= current_head) {
>> +   goto spin;
>> +   } else if (inc.head != current_head) {
>> +   inc.head = current_head;
>> +   timeout =  TIMEOUT_UNIT * (inc.tail - inc.head);
>
>
> Good idea indeed to base the loop on head and tail difference.. But for
> virtualization I believe this "directly proportional notion" is little
> tricky too.
>

Could you explain your concern a little bit more?

Thanks
--Jiannan


Re: Preemptable Ticket Spinlock

2013-04-22 Thread Peter Zijlstra
On Mon, 2013-04-22 at 08:52 -0400, Rik van Riel wrote:
> On 04/22/2013 07:51 AM, Peter Zijlstra wrote:
> > On Sun, 2013-04-21 at 17:12 -0400, Rik van Riel wrote:
> >>
> >> If we always incremented the ticket number by 2 (instead of 1), then
> >> we could use the lower bit of the ticket number as the spinlock.
> >
> > ISTR that paravirt ticket locks already do that and use the lsb to
> > indicate the unlock needs to perform wakeups.
> >
> > Also, since all of this is virt nonsense, shouldn't it live in the
> > paravirt ticket lock code and leave the native code as is?
> 
> Sure, but that is still no reason not to have the virt
> implementation be as fast as possible, and share the same
> data type as the non-virt implementation.

It has to share the same data-type..

> Also, is it guaranteed that the native spin_lock code has
> not been called yet before we switch over to the paravirt
> functions?
> 
> If the native spin_lock code has been called already at
> that time, the native code would still need to be modified
> to increment the ticket number by 2, so we end up with a
> compatible value in each spin lock's .tickets field, and
> prevent a deadlock after we switch over to the paravirt
> variant.

I thought the stuff already made it upstream, but apparently not; the
latest posting I'm aware of is here:

  https://lkml.org/lkml/2012/5/2/105

That stuff changes the normal ticket increment as well.. 



Re: Preemptable Ticket Spinlock

2013-04-22 Thread Rik van Riel

On 04/22/2013 03:49 PM, Peter Zijlstra wrote:

On Mon, 2013-04-22 at 08:52 -0400, Rik van Riel wrote:



If the native spin_lock code has been called already at
that time, the native code would still need to be modified
to increment the ticket number by 2, so we end up with a
compatible value in each spin lock's .tickets field, and
prevent a deadlock after we switch over to the paravirt
variant.


I thought the stuff already made it upstream, but apparently not; the
latest posting I'm aware of is here:

   https://lkml.org/lkml/2012/5/2/105

That stuff changes the normal ticket increment as well..


Jiannan,

It looks like the patch above could make a good patch
1 (or 2) in your patch series :)

--
All rights reversed


Re: Preemptable Ticket Spinlock

2013-04-22 Thread Jiannan Ouyang
On Mon, Apr 22, 2013 at 3:56 PM, Rik van Riel  wrote:
> On 04/22/2013 03:49 PM, Peter Zijlstra wrote:
>>
>> On Mon, 2013-04-22 at 08:52 -0400, Rik van Riel wrote:
>
>
>>> If the native spin_lock code has been called already at
>>> that time, the native code would still need to be modified
>>> to increment the ticket number by 2, so we end up with a
>>> compatible value in each spin lock's .tickets field, and
>>> prevent a deadlock after we switch over to the paravirt
>>> variant.
>>
>>
>> I thought the stuff already made it upstream, but apparently not; the
>> latest posting I'm aware of is here:
>>
>>https://lkml.org/lkml/2012/5/2/105
>>
>> That stuff changes the normal ticket increment as well..
>
>
> Jiannan,
>
> It looks like the patch above could make a good patch
> 1 (or 2) in your patch series :)
>
> --
> All rights reversed

Yes.
I'm going to move my code, updated with Rik's suggestions, to paravirt
ops based on Jeremy's patch.
I'll post a new patch series soon.

Thanks to everyone for the great feedback!
--Jiannan


Re: Preemptable Ticket Spinlock

2013-04-22 Thread Peter Zijlstra
On Mon, 2013-04-22 at 15:56 -0400, Rik van Riel wrote:
> On 04/22/2013 03:49 PM, Peter Zijlstra wrote:
> > On Mon, 2013-04-22 at 08:52 -0400, Rik van Riel wrote:
> 
> >> If the native spin_lock code has been called already at
> >> that time, the native code would still need to be modified
> >> to increment the ticket number by 2, so we end up with a
> >> compatible value in each spin lock's .tickets field, and
> >> prevent a deadlock after we switch over to the paravirt
> >> variant.
> >
> > I thought the stuff already made it upstream, but apparently not; the
> > latest posting I'm aware of is here:
> >
> >https://lkml.org/lkml/2012/5/2/105
> >
> > That stuff changes the normal ticket increment as well..
> 
> Jiannan,
> 
> It looks like the patch above could make a good patch
> 1 (or 2) in your patch series :)

I much prefer the entire series from Jeremy since it maintains the
ticket semantics and doesn't degrade the lock to unfair under
contention.

Now I suppose there's a reason it's not been merged yet, and I suspect
it's the !paravirt hotpath impact, which wasn't rightly justified or
somesuch, so maybe someone can work on that or so.. dunno.




Re: Preemptable Ticket Spinlock

2013-04-22 Thread Rik van Riel

On 04/22/2013 04:08 PM, Peter Zijlstra wrote:

On Mon, 2013-04-22 at 15:56 -0400, Rik van Riel wrote:

On 04/22/2013 03:49 PM, Peter Zijlstra wrote:

On Mon, 2013-04-22 at 08:52 -0400, Rik van Riel wrote:



If the native spin_lock code has been called already at
that time, the native code would still need to be modified
to increment the ticket number by 2, so we end up with a
compatible value in each spin lock's .tickets field, and
prevent a deadlock after we switch over to the paravirt
variant.


I thought the stuff already made it upstream, but apparently not; the
latest posting I'm aware of is here:

https://lkml.org/lkml/2012/5/2/105

That stuff changes the normal ticket increment as well..


Jiannan,

It looks like the patch above could make a good patch
1 (or 2) in your patch series :)


I much prefer the entire series from Jeremy since it maintains the
ticket semantics and doesn't degrade the lock to unfair under
contention.

Now I suppose there's a reason its not been merged yet and I suspect
its !paravirt hotpath impact which wasn't rightly justified or somesuch
so maybe someone can work on that or so.. dunno.


IIRC one of the reasons was that the performance improvement wasn't
as obvious.  Rescheduling VCPUs takes a fair amount of time, quite
probably more than the typical hold time of a spinlock.

--
All rights reversed


Re: Preemptable Ticket Spinlock

2013-04-22 Thread Peter Zijlstra
On Mon, 2013-04-22 at 16:32 -0400, Rik van Riel wrote:
> 
> IIRC one of the reasons was that the performance improvement wasn't
> as obvious.  Rescheduling VCPUs takes a fair amount of time, quite
> probably more than the typical hold time of a spinlock.

IIRC it would spin for a while before blocking..

/me goes re-read some of that thread...

Ah, it's because PLE is curing most of it.. on !PLE it had huge gains,
but apparently nobody cares about !PLE hardware anymore :-)



Re: Preemptable Ticket Spinlock

2013-04-22 Thread Jiannan Ouyang
On Mon, Apr 22, 2013 at 4:08 PM, Peter Zijlstra  wrote:

>
> I much prefer the entire series from Jeremy since it maintains the
> ticket semantics and doesn't degrade the lock to unfair under
> contention.
>
> Now I suppose there's a reason its not been merged yet and I suspect
> its !paravirt hotpath impact which wasn't rightly justified or somesuch
> so maybe someone can work on that or so.. dunno.
>
>

In my paper, I compared preemptable-lock and pv_lock on KVM from Raghu
and Jeremy.
Results show that:
- preemptable-lock improves performance significantly without paravirt support
- preemptable-lock can also be paravirtualized, which outperforms
pv_lock, especially when overcommitted by 3 or more
- pv-preemptable-lock has much less performance variance compared to
pv_lock, because it adapts to preemption within the VM,
  rather than using rescheduling, which increases VM interference

It would still be very interesting to conduct more experiments to
compare these two, to see if the fairness enforced by pv_lock is
mandatory, whether preemptable-lock outperforms pv_lock in most cases,
and how they work with PLE.

--Jiannan


Re: Preemptable Ticket Spinlock

2013-04-22 Thread Peter Zijlstra
On Mon, 2013-04-22 at 22:44 +0200, Peter Zijlstra wrote:
> On Mon, 2013-04-22 at 16:32 -0400, Rik van Riel wrote:
> > 
> > IIRC one of the reasons was that the performance improvement wasn't
> > as obvious.  Rescheduling VCPUs takes a fair amount of time, quite
> > probably more than the typical hold time of a spinlock.
> 
> IIRC it would spin for a while before blocking..
> 
> /me goes re-read some of that thread...
> 
> Ah, its because PLE is curing most of it.. !PLE it had huge gains but
> apparently nobody cares about !PLE hardware anymore :-)

Hmm.. it looked like under light overcommit the paravirt ticket lock
still had some gain (~10%) and of course it brings the fairness thing
which is always good.

I can only imagine the mess unfair + vcpu preemption can bring to guest
tasks.



Re: Preemptable Ticket Spinlock

2013-04-22 Thread Rik van Riel

On 04/22/2013 04:46 PM, Jiannan Ouyang wrote:


It would still be very interesting to conduct more experiments to
compare these two, to see if the fairness enforced by pv_lock is
mandatory, and if preemptable-lock outperforms pv_lock in most cases,
and how do they work with PLE.


Given the fairly high cost of rescheduling a VCPU (which is likely
to include an IPI), versus the short hold time of most spinlocks,
I have the strong suspicion that your approach would win.

The fairness is only compromised in a limited way and in certain
circumstances, so I am not too worried about that.

--
All rights reversed


Re: Preemptable Ticket Spinlock

2013-04-22 Thread Jiannan Ouyang
On Mon, Apr 22, 2013 at 4:44 PM, Peter Zijlstra  wrote:
> On Mon, 2013-04-22 at 16:32 -0400, Rik van Riel wrote:
>>
>> IIRC one of the reasons was that the performance improvement wasn't
>> as obvious.  Rescheduling VCPUs takes a fair amount of time, quite
>> probably more than the typical hold time of a spinlock.
>
> IIRC it would spin for a while before blocking..
>
> /me goes re-read some of that thread...
>
> Ah, its because PLE is curing most of it.. !PLE it had huge gains but
> apparently nobody cares about !PLE hardware anymore :-)
>

For now, I don't know how well it can work with PLE. But I think it
should save the cost of VMEXITs on PLE machines.


Re: Preemptable Ticket Spinlock

2013-04-22 Thread Rik van Riel

On 04/22/2013 04:48 PM, Peter Zijlstra wrote:


Hmm.. it looked like under light overcommit the paravirt ticket lock
still had some gain (~10%) and of course it brings the fairness thing
which is always good.

I can only imagine the mess unfair + vcpu preemption can bring to guest
tasks.


If you think unfairness + vcpu preemption is bad, you haven't
tried full fairness + vcpu preemption :)

--
All rights reversed


Re: Preemptable Ticket Spinlock

2013-04-22 Thread Chegu Vinod

On 4/22/2013 1:50 PM, Jiannan Ouyang wrote:

On Mon, Apr 22, 2013 at 4:44 PM, Peter Zijlstra  wrote:

On Mon, 2013-04-22 at 16:32 -0400, Rik van Riel wrote:

IIRC one of the reasons was that the performance improvement wasn't
as obvious.  Rescheduling VCPUs takes a fair amount of time, quite
probably more than the typical hold time of a spinlock.

IIRC it would spin for a while before blocking..

/me goes re-read some of that thread...

Ah, its because PLE is curing most of it.. !PLE it had huge gains but
apparently nobody cares about !PLE hardware anymore :-)


For now, I don't know how good it can work with PLE. But I think it
should save the time of VMEXIT on PLE machine.

Thanks for sharing your patch. I am waiting for your v2 patch(es) and
will then send you my review feedback. Hoping to verify your changes on
a large box (PLE enabled) and get back to you with some data...


Thanks
Vinod


Re: Preemptable Ticket Spinlock

2013-04-22 Thread Peter Zijlstra
On Mon, 2013-04-22 at 16:46 -0400, Jiannan Ouyang wrote:
> On Mon, Apr 22, 2013 at 4:08 PM, Peter Zijlstra  wrote:
> 
> >
> > I much prefer the entire series from Jeremy since it maintains the
> > ticket semantics and doesn't degrade the lock to unfair under
> > contention.
> >
> > Now I suppose there's a reason its not been merged yet and I suspect
> > its !paravirt hotpath impact which wasn't rightly justified or somesuch
> > so maybe someone can work on that or so.. dunno.
> >
> >
> 
> In my paper, I compared preemptable-lock and pv_lock on KVM from
> Raghu and Jeremy.

Which pv_lock? The current pv spinlock mess is basically the old unfair
thing. The later patch series I referred to earlier implemented a
paravirt ticket lock, that should perform much better under overcommit.

> Results show that:
> - preemptable-lock improves performance significantly without paravirt support

But completely wrecks our native spinlock implementation so that's not
going to happen of course ;-)

> - preemptable-lock can also be paravirtualized, which outperforms
> pv_lock, especially when overcommited by 3 or more

See above.. 

> - pv-preemptable-lock has much less performance variance compare to
> pv_lock, because it adapts to preemption within  VM,
>   other than using rescheduling that increase VM interference

I would say it has a _much_ worse worst case (and thus worse variance)
than the paravirt ticket implementation from Jeremy. While full
paravirt ticket lock results in vcpu scheduling it does maintain
fairness.

If you drop strict fairness you can end up in unbounded starvation
cases and those are very ugly indeed.

> It would still be very interesting to conduct more experiments to
> compare these two, to see if the fairness enforced by pv_lock is
> mandatory, and if preemptable-lock outperforms pv_lock in most cases,
> and how do they work with PLE.

Be more specific: pv_lock as currently upstream is a trainwreck, mostly
done because pure ticket spinning plus vcpu preemption is even worse.



Re: Preemptable Ticket Spinlock

2013-04-22 Thread Peter Zijlstra
On Mon, 2013-04-22 at 16:49 -0400, Rik van Riel wrote:
> Given the fairly high cost of rescheduling a VCPU (which is likely
> to include an IPI), versus the short hold time of most spinlocks,
> I have the strong suspicion that your approach would win.

  https://lkml.org/lkml/2012/5/2/101

If you schedule too often, your SPIN_THRESHOLD is far too low.

Anyway.. performance can't be that bad, otherwise Jeremy wouldn't have
spent as much time on it as he did.



Re: Preemptable Ticket Spinlock

2013-04-22 Thread Jiannan Ouyang
On Mon, Apr 22, 2013 at 4:55 PM, Peter Zijlstra  wrote:

>
> Which pv_lock? The current pv spinlock mess is basically the old unfair
> thing. The later patch series I referred to earlier implemented a
> paravirt ticket lock, that should perform much better under overcommit.
>

Yes, it is a paravirt *ticket* spinlock. I got the patch from
Raghavendra K T through email:
http://lwn.net/Articles/495597/


Re: Preemptable Ticket Spinlock

2013-04-22 Thread Andi Kleen
Rik van Riel  writes:
>
> If we always incremented the ticket number by 2 (instead of 1), then
> we could use the lower bit of the ticket number as the spinlock.

Spinning on a single bit is very inefficient, as you need to do a
try-lock in a loop, which is very unfriendly to the MESI state protocol.
It's much better to have at least three states and allow
spinning-while-reading-only.

This is typically very visible on systems with >2 sockets.

-Andi

-- 
a...@linux.intel.com -- Speaking for myself only


Re: Preemptable Ticket Spinlock

2013-04-22 Thread Rik van Riel

On 04/22/2013 04:55 PM, Peter Zijlstra wrote:

On Mon, 2013-04-22 at 16:46 -0400, Jiannan Ouyang wrote:



- pv-preemptable-lock has much less performance variance compared to
pv_lock, because it adapts to preemption within the VM,
   rather than using rescheduling, which increases VM interference


I would say it has a _much_ worse worst case (and thus worse variance)
than the paravirt ticket implementation from Jeremy. While full
paravirt ticket lock results in vcpu scheduling it does maintain
fairness.

If you drop strict fairness you can end up in unbounded starvation
cases and those are very ugly indeed.


If needed, Jiannan's scheme could easily be bounded to prevent
infinite starvation. For example, we could allow only the first
8 CPUs in line to jump the queue.

However, given the way that virtual CPUs get scheduled in and
out all the time, I suspect starvation is not a worry, and we
will not need the additional complexity to deal with it.
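If it helps, the bound could be as simple as comparing a waiter's queue
distance against a small constant before allowing the unfair path. A
hypothetical sketch (the function name and the constant 8 come from Rik's
example above, not from anything in Jiannan's patch):

```c
#include <stdbool.h>

#define MAX_JUMPERS 8u  /* only the first 8 waiters may bypass the queue */

/* head and my_ticket are the lock's ticket counters; the subtraction is
 * done in unsigned arithmetic, so ticket wraparound is handled naturally. */
static inline bool may_jump_queue(unsigned head, unsigned my_ticket)
{
    return (my_ticket - head) <= MAX_JUMPERS;
}
```

A waiter that fails this check would keep spinning fairly, so at most the
first few CPUs in line can ever acquire the lock out of order.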

You may want to play around with virtualization a bit, to get
a feel for how things work in virt land.

--
All rights reversed


Re: Preemptable Ticket Spinlock

2013-04-22 Thread Rik van Riel

On 04/22/2013 05:56 PM, Andi Kleen wrote:

Rik van Riel  writes:


If we always incremented the ticket number by 2 (instead of 1), then
we could use the lower bit of the ticket number as the spinlock.


Spinning on a single bit is very inefficient, as you need to do a
try-lock in a loop, which is very unfriendly to the MESI cache coherence
protocol. It's much better to have at least three states and allow
spinning-while-reading-only.

This is typically very visible on systems with more than 2 sockets.


Absolutely, the spinning should be read-only, until the CPU
sees that the desired bit is clear.  MESI-friendly spinning
is essential.
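To make the contrast concrete, here is a rough userspace sketch of such
MESI-friendly spinning using a plain ticket lock (C11 atomics and
pthreads; this is purely illustrative, not the kernel code, and the
`ticketlock_t` naming is invented):

```c
#include <stdatomic.h>
#include <pthread.h>

typedef struct {
    atomic_uint head;   /* ticket currently being served */
    atomic_uint tail;   /* next ticket to hand out */
} ticketlock_t;

static void ticket_lock(ticketlock_t *l)
{
    unsigned me = atomic_fetch_add_explicit(&l->tail, 1,
                                            memory_order_relaxed);
    /* Read-only spin: every waiter keeps the cache line in Shared state
     * until the owner's store invalidates it, instead of bouncing the
     * line between CPUs with failed atomic read-modify-writes. */
    while (atomic_load_explicit(&l->head, memory_order_acquire) != me)
        ;
}

static void ticket_unlock(ticketlock_t *l)
{
    atomic_fetch_add_explicit(&l->head, 1, memory_order_release);
}

/* Smoke test: 4 threads each bump a shared counter 100000 times. */
static ticketlock_t demo_lock;
static unsigned long demo_counter;

static void *demo_worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        ticket_lock(&demo_lock);
        demo_counter++;
        ticket_unlock(&demo_lock);
    }
    return NULL;
}

unsigned long run_ticket_demo(void)
{
    pthread_t t[4];
    demo_counter = 0;
    atomic_store(&demo_lock.head, 0);
    atomic_store(&demo_lock.tail, 0);
    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, demo_worker, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    return demo_counter;   /* 400000 if mutual exclusion held */
}
```

The waiting loop issues only loads; the single atomic RMW happens once,
when the ticket is taken, which is what keeps the cache traffic bounded.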

--
All rights reversed


Re: Preemptable Ticket Spinlock

2013-04-22 Thread Raghavendra K T

On 04/23/2013 01:19 AM, Peter Zijlstra wrote:

On Mon, 2013-04-22 at 08:52 -0400, Rik van Riel wrote:

On 04/22/2013 07:51 AM, Peter Zijlstra wrote:

On Sun, 2013-04-21 at 17:12 -0400, Rik van Riel wrote:


If we always incremented the ticket number by 2 (instead of 1), then
we could use the lower bit of the ticket number as the spinlock.


ISTR that paravirt ticket locks already do that and use the lsb to
indicate the unlock needs to perform wakeups.

Also, since all of this is virt nonsense, shouldn't it live in the
paravirt ticket lock code and leave the native code as is?


Sure, but that is still no reason not to have the virt
implementation be as fast as possible, and share the same
data type as the non-virt implementation.


It has to share the same data-type..


Also, is it guaranteed that the native spin_lock code has
not been called yet before we switch over to the paravirt
functions?

If the native spin_lock code has been called already at
that time, the native code would still need to be modified
to increment the ticket number by 2, so we end up with a
compatible value in each spin lock's .tickets field, and
prevent a deadlock after we switch over to the paravirt
variant.


I thought the stuff already made it upstream, but apparently not; the
latest posting I'm aware of is here:

   https://lkml.org/lkml/2012/5/2/105

That stuff changes the normal ticket increment as well..



The pv-ticket spinlock went into a hold state after Avi acked it, because:

though we get a huge advantage on non-PLE machines, on PLE machines the
benefit was not as impressive (~10%, as you stated in the email chain)
compared to the complexity of the patches.

So Avi suggested trying PLE improvements first, and those are going upstream.

https://lkml.org/lkml/2012/7/18/247
https://lkml.org/lkml/2013/1/22/104
https://lkml.org/lkml/2013/2/6/345 (on the way in kvm tree)

Current status of PV spinlock:
I have the rebased patches of pv spinlocks and am experimenting with the
latest kernel. I have Gleb's irq delivery incorporated into the patch
series, but I am thinking about whether I can improve some guest side
logic in unlock.
I will probably set up a github and post the link soon.



Re: Preemptable Ticket Spinlock

2013-04-22 Thread Raghavendra K T

On 04/22/2013 10:12 PM, Jiannan Ouyang wrote:

On Mon, Apr 22, 2013 at 1:58 AM, Raghavendra K T
 wrote:


[...]


   static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
   {
  register struct __raw_tickets inc = { .tail = 1 };
+   unsigned int timeout = 0;
+   __ticket_t current_head;

  inc = xadd(&lock->tickets, inc);
-
+   if (likely(inc.head == inc.tail))
+   goto spin;
+
+   timeout =  TIMEOUT_UNIT * (inc.tail - inc.head);


Forgot to mention this: for the immediate-wait case, you can busyloop
instead of timing out. I mean:

timeout =  TIMEOUT_UNIT * (inc.tail - inc.head - 1);

This idea was used by Rik in his spinlock backoff patches.


+   do {
+   current_head = ACCESS_ONCE(lock->tickets.head);
+   if (inc.tail <= current_head) {
+   goto spin;
+   } else if (inc.head != current_head) {
+   inc.head = current_head;
+   timeout =  TIMEOUT_UNIT * (inc.tail - inc.head);
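Restated outside the patch context, the loop in the quoted diff amounts to
the step function below. This is a hand-written paraphrase for
illustration, not the actual patch code; the enum names and the
pass-by-pointer plumbing are invented:

```c
#define TIMEOUT_UNIT 1000u

enum wait_step { ACQUIRE_FAIR, KEEP_WAITING, TIMED_OUT };

/* One iteration of the wait loop.  `tail` is our ticket, `*head` is the
 * last head value we observed, `now_head` is a fresh read of
 * lock->tickets.head, and `*budget` is the remaining spin budget.  When
 * the queue advances, the budget is re-proportioned to the new distance,
 * as the diff does with TIMEOUT_UNIT * (inc.tail - inc.head). */
enum wait_step ticket_wait_step(unsigned tail, unsigned *head,
                                unsigned now_head, unsigned *budget)
{
    if ((int)(now_head - tail) >= 0)      /* our turn has come: fair path */
        return ACQUIRE_FAIR;
    if (now_head != *head) {              /* queue advanced under us */
        *head = now_head;
        *budget = TIMEOUT_UNIT * (tail - now_head);
        return KEEP_WAITING;
    }
    if (--(*budget) == 0)                 /* budget exhausted: suspect a
                                           * preempted lock holder, fall
                                           * back to the unfair path */
        return TIMED_OUT;
    return KEEP_WAITING;
}
```

On TIMED_OUT the real lock would try to grab the auxiliary lock bit
unfairly; on ACQUIRE_FAIR it proceeds as an ordinary ticket holder.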



Good idea indeed to base the loop on the head and tail difference. But for
virtualization I believe this "directly proportional notion" is a little
tricky too.



Could you explain your concern a little bit more?



Consider a big machine with 2 VMs running.
If the nth vcpu of, say, VM1 is waiting in the queue, the question is:

Do we have to have all n VCPUs busylooping, together burning on the
order of (n*(n+1)/2) * TIMEOUT_UNIT cycles?

OR

Is it worth having a far-off vcpu in the queue give its time back, so
that some other vcpu of VM1 doing useful work, or a vcpu of VM2, can
benefit from it?


I mean: the farther back a vcpu is in the queue, the more it should
yield voluntarily (an inversely proportional notion, just because it is
a vcpu). And of course, for some n < THRESHOLD we can still use the
directly proportional wait idea.
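A hypothetical shape for that policy (THRESHOLD and all names here are
made up for illustration): near waiters keep the directly proportional
spin, far waiters yield the vcpu instead of burning their budget.

```c
#define QUEUE_THRESHOLD 8u

enum wait_policy { SPIN_PROPORTIONAL, YIELD_VCPU };

/* queue_distance = my_ticket - head: how many waiters are ahead of us. */
static inline enum wait_policy pick_wait_policy(unsigned queue_distance)
{
    return queue_distance < QUEUE_THRESHOLD ? SPIN_PROPORTIONAL
                                            : YIELD_VCPU;
}
```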


Does this idea sound good ?



Re: Preemptable Ticket Spinlock

2013-04-22 Thread Raghavendra K T

On 04/23/2013 02:31 AM, Peter Zijlstra wrote:

On Mon, 2013-04-22 at 16:49 -0400, Rik van Riel wrote:

Given the fairly high cost of rescheduling a VCPU (which is likely
to include an IPI), versus the short hold time of most spinlocks,
I have the strong suspicion that your approach would win.


   https://lkml.org/lkml/2012/5/2/101

If you schedule too often your SPIN_THRESHOLD is far too low.

Anyway.. performance can't be that bad, otherwise Jeremey would have
spend as much time on it as he did.


When I experimented last time, the ideal SPIN_THRESHOLD for a PLE machine
was around 4k-8k. Jeremy's experiment was on a non-PLE machine AFAIK,
where a 2k threshold complemented the PLE feature in a nice way.



Re: Preemptable Ticket Spinlock

2013-04-22 Thread Gleb Natapov
On Mon, Apr 22, 2013 at 07:08:06PM -0400, Rik van Riel wrote:
> On 04/22/2013 04:55 PM, Peter Zijlstra wrote:
> >On Mon, 2013-04-22 at 16:46 -0400, Jiannan Ouyang wrote:
> 
> >>- pv-preemptable-lock has much less performance variance compared to
> >>pv_lock, because it adapts to preemption within the VM,
> >>   rather than using rescheduling, which increases VM interference
> >
> >I would say it has a _much_ worse worst case (and thus worse variance)
> >than the paravirt ticket implementation from Jeremy. While full
> >paravirt ticket lock results in vcpu scheduling it does maintain
> >fairness.
> >
> >If you drop strict fairness you can end up in unbounded starvation
> >cases and those are very ugly indeed.
> 
> If needed, Jiannan's scheme could easily be bounded to prevent
> infinite starvation. For example, we could allow only the first
> 8 CPUs in line to jump the queue.
> 
> However, given the way that virtual CPUs get scheduled in and
> out all the time, I suspect starvation is not a worry, and we
> will not need the additional complexity to deal with it.
> 
FWIW RHEL6 uses an unfair spinlock when it runs as a guest. We never got
reports of problems due to this at any scale.

> You may want to play around with virtualization a bit, to get
> a feel for how things work in virt land.
> 
> -- 
> All rights reversed

--
Gleb.


Re: Preemptable Ticket Spinlock

2013-04-26 Thread Andrew Theurer
On Sat, 2013-04-20 at 18:12 -0400, Jiannan Ouyang wrote:
> Hello Everyone,
> 
> 
> I recently came up with a spinlock algorithm that can adapt to
> preemption, which you may be interested in. The intuition is to
> downgrade a fair lock to an unfair lock automatically upon preemption,
> and preserve the fairness otherwise. It is a guest side optimization,
> and can be used as a complementary technique to host side
> optimizations like co-scheduling and Pause-Loop Exiting.
> 
> 
> In my experiments, it improves VM performance by 5:32X on average,
> when running on a non paravirtual VMM, and by 7:91X when running on a
> VMM that supports a paravirtual locking interface (using a pv
> preemptable ticket spinlock), when executing a set of microbenchmarks
> as well as a realistic e-commerce benchmark.
> 
> 
> A detailed algorithm description can be found in my VEE 2013 paper,
> Preemptable Ticket Spinlocks: Improving Consolidated Performance in
> the Cloud
> Jiannan Ouyang, John R. Lange
> ouyang,jackla...@cs.pitt.edu
> University of Pittsburgh
> http://people.cs.pitt.edu/~ouyang/files/publication/preemptable_lock-ouyang-vee13.pdf
> 
> 
> The patch is based on stock Linux kernel 3.5.0, and tested on kernel
> 3.4.41 as well.
> http://www.cs.pitt.edu/~ouyang/files/preemptable_lock.tar.gz

Very nice paper.  I wanted to see how this would work on larger VMs on
the dbench workload.  Unfortunately, when I tried your patch on 3.9-rc8+,
I got a lot of CPU soft lockup messages from the VMs, to the point where
the test could not complete in a reasonable amount of time:


> [ 2144.672812] BUG: soft lockup - CPU#16 stuck for 23s! [dbench:8618]
> [ 2144.672888] Modules linked in: bridge stp llc target_core_mod configfs 
> autofs4 sunrpc af_packet ipv6 binfmt_misc dm_mirror dm_region_hash dm_log 
> dm_mod uinput rtc_cmos button crc32c_intel microcode pcspkr virtio_net 
> i2c_piix4 i2c_core intel_agp intel_gtt ext4 mbcache jbd2 crc16 virtio_blk 
> floppy aesni_intel ablk_helper cryptd lrw aes_x86_64 xts gf128mul virtio_pci 
> virtio_ring virtio uhci_hcd usbcore usb_common pata_acpi ata_generic piix 
> ide_core ata_piix libata scsi_mod [last unloaded: mperf]
> [ 2144.672892] CPU 16
> [ 2144.672892] Pid: 8618, comm: dbench Not tainted 
> 3.9.0-rc8-soft-ticket-0.27-default+ #3 Bochs Bochs
> [ 2144.672898] RIP: 0010:[]  [] 
> _raw_spin_unlock_irqrestore+0x13/0x20
> [ 2144.672899] RSP: 0018:8807c0203d68  EFLAGS: 0202
> [ 2144.672901] RAX:  RBX: 8807c0203ce8 RCX: 
> 8807bfe13244
> [ 2144.672902] RDX: 0001 RSI: 0202 RDI: 
> 0202
> [ 2144.672903] RBP: 8807c0203d68 R08: 1774 R09: 
> 1777
> [ 2144.672904] R10: 0001 R11: 00ef2400 R12: 
> 8807c0203cd8
> [ 2144.672906] R13: 814aa09d R14: 8807c0203d68 R15: 
> 
> [ 2144.672907] FS:  7f42edc22700() GS:8807c020() 
> knlGS:
> [ 2144.672908] CS:  0010 DS:  ES:  CR0: 80050033
> [ 2144.672908] CR2: 7f021f6bdc30 CR3: 00079c63b000 CR4: 
> 06e0
> [ 2144.673131] DR0:  DR1:  DR2: 
> 
> [ 2144.673182] DR3:  DR6: 0ff0 DR7: 
> 0400
> [ 2144.673184] Process dbench (pid: 8618, threadinfo 8807a0394000, task 
> 880790c22340)
> [ 2144.673185] Stack:
> [ 2144.673189]  8807c0203e68 81086713 0092 
> 00013240
> [ 2144.673295]  00013240 00100092 0025 
> 8807c0203ea4
> [ 2144.673300]  8807bf80a280 00018108265b 8807a0c03c00 
> 0001
> [ 2144.673300] Call Trace:
> [ 2144.673302]  
> [ 2144.673305]  [] load_balance+0x543/0x630
> [ 2144.673309]  [] rebalance_domains+0x9d/0x180
> [ 2144.673311]  [] run_rebalance_domains+0x44/0x60
> [ 2144.673315]  [] __do_softirq+0xd6/0x250
> [ 2144.673318]  [] irq_exit+0xb5/0xc0
> [ 2144.673322]  [] smp_apic_timer_interrupt+0x69/0xa0
> [ 2144.673325]  [] apic_timer_interrupt+0x6d/0x80
> [ 2144.673327]  
> [ 2144.673329]  [] ? _raw_spin_lock+0x66/0x80
> [ 2144.673331]  [] path_get+0x26/0x40
> [ 2144.673334]  [] unlazy_walk+0x10a/0x230
> [ 2144.673337]  [] lookup_fast+0x229/0x2d0
> [ 2144.673340]  [] path_lookupat+0x123/0x720
> [ 2144.673342]  [] ? inode_permission+0x13/0x50
> [ 2144.673344]  [] ? link_path_walk+0x78/0x450
> [ 2144.673434]  [] filename_lookup+0x2f/0xc0
> [ 2144.673438]  [] user_path_at_empty+0x54/0xa0
> [ 2144.673441]  [] ? group_send_sig_info+0x21/0x60
> [ 2144.673444]  [] ? kill_pid_info+0x3a/0x60
> [ 2144.673523]  [] user_path_at+0xc/0x10
> [ 2144.673529]  [] vfs_fstatat+0x51/0xb0
> [ 2144.673532]  [] vfs_stat+0x16/0x20
> [ 2144.673534]  [] sys_newstat+0x1f/0x50
> [ 2144.673538]  [] ? __audit_syscall_exit+0x246/0x2f0
> [ 2144.673541]  [] ? __audit_syscall_entry+0x8c/0xf0
> [ 2144.673543]  [] system_call_fastpath+0x16/0x1b

This is on a 40 core / 80 thread Westmere-EX with 16 V

Re: Preemptable Ticket Spinlock

2013-05-30 Thread Raghavendra K T

On 04/23/2013 07:12 AM, Raghavendra K T wrote:

On 04/23/2013 01:19 AM, Peter Zijlstra wrote:

On Mon, 2013-04-22 at 08:52 -0400, Rik van Riel wrote:

On 04/22/2013 07:51 AM, Peter Zijlstra wrote:

On Sun, 2013-04-21 at 17:12 -0400, Rik van Riel wrote:


If we always incremented the ticket number by 2 (instead of 1), then
we could use the lower bit of the ticket number as the spinlock.


ISTR that paravirt ticket locks already do that and use the lsb to
indicate the unlock needs to perform wakeups.

Also, since all of this is virt nonsense, shouldn't it live in the
paravirt ticket lock code and leave the native code as is?


Sure, but that is still no reason not to have the virt
implementation be as fast as possible, and share the same
data type as the non-virt implementation.


It has to share the same data-type..


Also, is it guaranteed that the native spin_lock code has
not been called yet before we switch over to the paravirt
functions?

If the native spin_lock code has been called already at
that time, the native code would still need to be modified
to increment the ticket number by 2, so we end up with a
compatible value in each spin lock's .tickets field, and
prevent a deadlock after we switch over to the paravirt
variant.


I thought the stuff already made it upstream, but apparently not; the
latest posting I'm aware of is here:

   https://lkml.org/lkml/2012/5/2/105

That stuff changes the normal ticket increment as well..



The pv-ticket spinlock went into a hold state after Avi acked it, because:

though we get a huge advantage on non-PLE machines, on PLE machines the
benefit was not as impressive (~10%, as you stated in the email chain)
compared to the complexity of the patches.
So Avi suggested trying PLE improvements first, and those are going upstream.

https://lkml.org/lkml/2012/7/18/247
https://lkml.org/lkml/2013/1/22/104
https://lkml.org/lkml/2013/2/6/345 (on the way in kvm tree)

Current status of PV spinlock:
I have the rebased patches of pv spinlocks and am experimenting with the
latest kernel. I have Gleb's irq delivery incorporated into the patch
series, but I am thinking about whether I can improve some guest side
logic in unlock.
I will probably set up a github and post the link soon.


Sorry for late reply.

Here is the branch with the pvspinlock V9 version on github, rebased to 3.10-rc:

https://github.com/ktraghavendra/linux/tree/pvspinlock_v9

planning to post a formal email in a separate thread with a link to this
branch (instead of spamming with 19 patches)

Main changes w.r.t v8 are
- Changed spin_threshold to 32k to avoid excess halt exits that are 
causing undercommit degradation (after PLE handler improvement).

- Added  kvm_irq_delivery_to_apic (suggested by Gleb)
- optimized halt exit path to use PLE handler



Re: Preemptable Ticket Spinlock

2013-05-30 Thread Thomas Gleixner
On Thu, 30 May 2013, Raghavendra K T wrote:
> Here is the branch with the pvspinlock V9 version on github, rebased to 3.10-rc:
> 
> https://github.com/ktraghavendra/linux/tree/pvspinlock_v9
> 
> planning to post a formal email in a separate thread with a link to this
> branch (instead of spamming with 19 patches)

19 patches is not really spam if you compare it to the total number of
mails per day on LKML. 

The git tree is nice for people who want to test stuff easily, but if
you want people to review and comment patches, then please use mail.

Thanks,

tglx