Re: [PATCH v10 03/19] qspinlock: Add pending bit

2014-05-19 Thread Waiman Long

On 05/14/2014 03:13 PM, Radim Krčmář wrote:

2014-05-14 19:00+0200, Peter Zijlstra:

On Wed, May 14, 2014 at 06:51:24PM +0200, Radim Krčmář wrote:

Ok.
I've seen merit in pvqspinlock even with slightly slower first-waiter,
so I would have happily sacrificed those horrible branches.
(I prefer elegant to optimized code, but I can see why we want to be
  strictly better than ticketlock.)
Peter mentioned that we are focusing on bare-metal patches, so I'll
withhold my other paravirt rants until they are polished.

(It was an ambiguous sentence; I have comments for later patches.)


Well, paravirt must happen too, but it comes later in this series; patch 3,
which we're replying to, is still very much in the bare-metal part of the
series.

(I think that bare metal spans the first 7 patches.)


I've not had time yet to decode all that Waiman has done to make
paravirt work.

But as a general rule I like patches that start with something simple
and working and then optimize it; this series doesn't seem to quite
grasp that.


And to forcefully bring this thread a little bit on-topic:

Pending-bit is effectively a lock in a lock, so I was wondering why
don't we use more pending bits; advantages are the same, just diminished
by the probability of having an ideally contended lock:
  - waiter won't be blocked on RAM access if critical section (or more)
ends sooner
  - some unlucky cacheline is not forgotten
  - faster unlock (no need for tail operations)
(- ?)
disadvantages are magnified:
  - increased complexity
  - intense cacheline sharing
(I thought that this is the main disadvantage of ticketlock.)
(- ?)

One bit still improved performance; is it the best we got?

So, the advantage of one bit is that if we use a whole byte for 1 bit we
can avoid some atomic ops.

The entire reason for this in-word spinner is to amortize the cost of
hitting the external node cacheline.

Every pending CPU removes one critical-section length from the delay
caused by cacheline propagation, and a really cold cache costs
hundreds(?) of cycles, so we could burn some of them to ensure correctness
and still be waiting when the first pending CPU unlocks.


Assuming that taking a spinlock is fairly frequent in the kernel, the 
node structure cacheline won't be so cold after all.



So traditional locks like test-and-test and the ticket lock only ever
access the spinlock word itself; this MCS-style queueing lock has a
second (and, see my other rants in this thread, when done wrong more
than 2) cacheline to touch.

That said, all our benchmarking is pretty much for the cache-hot case,
so I'm not entirely convinced yet that the one pending bit makes up for
it; it does in the cache-hot case.

Yeah, we probably use the faster pre-lock quite a lot.
The cover letter states that queue depth 1-3 is a bit slower than the ticket
spinlock, so we might not be losing if we implemented a faster
in-word lock of this capacity.  (Not that I'm a fan of the hybrid lock.)


I had tried an experimental patch with 2 pending bits. However, the
benchmark test that I used showed that the performance was even worse than
without any pending bit. I probably need to revisit later why this is the
case. For now, I will focus on just having one pending bit. If we can find
a way to get better performance out of more than one pending bit later on,
we can always submit another patch to do that.



But... writing cache-cold benchmarks is _hard_ :/

Wouldn't clflush of the second cacheline before trying for the lock give
us a rough estimate?


clflush is a very expensive operation and I doubt that it will be
indicative of real-life performance at all. BTW, there is no way to
write a cache-cold benchmark for that 2nd cacheline, as any call to
spin_lock will likely access it if there is enough contention.


-Longman



Re: [PATCH v10 03/19] qspinlock: Add pending bit

2014-05-14 Thread Radim Krčmář
2014-05-13 15:47-0400, Waiman Long:
 On 05/12/2014 11:22 AM, Radim Krčmář wrote:
 I think there is an unwanted scenario on virtual machines:
 1) VCPU sets the pending bit and starts spinning.
 2) Pending VCPU gets descheduled.
  - we have PLE and the lock holder isn't running [1]
  - the hypervisor randomly preempts us
 3) Lock holder unlocks while the pending VCPU is waiting in the queue.
 4) Subsequent lockers will see a free lock with the pending bit set and will
 loop in trylock's 'for (;;)'
  - the worst case is lock starvation [2]
  - PLE can save us from wasting the whole timeslice
 
 Retry threshold is the easiest solution, regardless of its ugliness [4].
 
 Yes, that can be a real issue. Some sort of retry threshold, as you said,
 should be able to handle it.
 
 BTW, the relevant patch should be 16/19 where the PV spinlock stuff should
 be discussed. This patch is perfectly fine.

Ouch, my apology to Peter didn't make it ... Agreed, I should have split
the comment under patches
 [06/19] (part quoted above; does not depend on PV),
 [16/19] (part quoted below) and
 [17/19] (general doubts).

 Another minor design flaw is that the formerly-first VCPU gets appended to
 the tail when it decides to queue;
 is the performance gain worth it?
 
 Thanks.
 
 Yes, the performance gain is worth it. The primary goal is to be no worse
 than the ticket spinlock in the light-load situation, which is the most
 common case. This feature is needed to achieve that.

Ok.
I've seen merit in pvqspinlock even with slightly slower first-waiter,
so I would have happily sacrificed those horrible branches.
(I prefer elegant to optimized code, but I can see why we want to be
 strictly better than ticketlock.)
Peter mentioned that we are focusing on bare-metal patches, so I'll
withhold my other paravirt rants until they are polished.

And to forcefully bring this thread a little bit on-topic:

Pending-bit is effectively a lock in a lock, so I was wondering why
don't we use more pending bits; advantages are the same, just diminished
by the probability of having an ideally contended lock:
 - waiter won't be blocked on RAM access if critical section (or more)
   ends sooner
 - some unlucky cacheline is not forgotten
 - faster unlock (no need for tail operations)
(- ?)
disadvantages are magnified:
 - increased complexity
 - intense cacheline sharing
   (I thought that this is the main disadvantage of ticketlock.)
(- ?)

One bit still improved performance, is it the best we got?

Thanks.


Re: [PATCH v10 03/19] qspinlock: Add pending bit

2014-05-14 Thread Peter Zijlstra
On Wed, May 14, 2014 at 06:51:24PM +0200, Radim Krčmář wrote:
 Ok.
 I've seen merit in pvqspinlock even with slightly slower first-waiter,
 so I would have happily sacrificed those horrible branches.
 (I prefer elegant to optimized code, but I can see why we want to be
  strictly better than ticketlock.)
 Peter mentioned that we are focusing on bare-metal patches, so I'll
 withhold my other paravirt rants until they are polished.

Well, paravirt must happen too, but it comes later in this series; patch 3,
which we're replying to, is still very much in the bare-metal part of the
series.

I've not had time yet to decode all that Waiman has done to make
paravirt work.

But as a general rule I like patches that start with something simple
and working and then optimize it; this series doesn't seem to quite
grasp that.

 And to forcefully bring this thread a little bit on-topic:
 
 Pending-bit is effectively a lock in a lock, so I was wondering why
 don't we use more pending bits; advantages are the same, just diminished
 by the probability of having an ideally contended lock:
  - waiter won't be blocked on RAM access if critical section (or more)
ends sooner
  - some unlucky cacheline is not forgotten
  - faster unlock (no need for tail operations)
 (- ?)
 disadvantages are magnified:
  - increased complexity
  - intense cacheline sharing
(I thought that this is the main disadvantage of ticketlock.)
 (- ?)
 
 One bit still improved performance; is it the best we got?

So, the advantage of one bit is that if we use a whole byte for 1 bit we
can avoid some atomic ops.
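
(For illustration only -- a rough sketch of the byte-wide variant, not code
from this series, and ignoring the big-endian layout: with a full byte for
pending, locked and pending form a 16-bit half-word and the *,1,0 -> *,0,1
handover becomes a plain store instead of a cmpxchg loop.)

	struct __qspinlock {
		union {
			atomic_t val;
			struct {
				u8	locked;		/* bits  0- 7 */
				u8	pending;	/* bits  8-15 */
			};
			u16	locked_pending;		/* bits  0-15 */
		};
	};

	static inline void clear_pending_set_locked(struct qspinlock *lock)
	{
		struct __qspinlock *l = (void *)lock;

		/* take ownership and clear the pending bit in one store */
		ACCESS_ONCE(l->locked_pending) = _Q_LOCKED_VAL;
	}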

The entire reason for this in-word spinner is to amortize the cost of
hitting the external node cacheline.

So traditional locks like test-and-test and the ticket lock only ever
access the spinlock word itself; this MCS-style queueing lock has a
second (and, see my other rants in this thread, when done wrong more
than 2) cacheline to touch.
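
(For reference, the two cachelines in question correspond roughly to these
definitions elsewhere in the series -- the lock word that every lock type
touches, and the per-cpu node array that only the queueing slowpath touches:)

	typedef struct qspinlock {
		atomic_t	val;
	} arch_spinlock_t;

	/* kernel/locking/qspinlock.c */
	static DEFINE_PER_CPU_ALIGNED(struct mcs_spinlock, mcs_nodes[4]);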

That said, all our benchmarking is pretty much for the cache-hot case,
so I'm not entirely convinced yet that the one pending bit makes up for
it; it does in the cache-hot case.

But... writing cache-cold benchmarks is _hard_ :/




Re: [PATCH v10 03/19] qspinlock: Add pending bit

2014-05-14 Thread Radim Krčmář
2014-05-14 19:00+0200, Peter Zijlstra:
 On Wed, May 14, 2014 at 06:51:24PM +0200, Radim Krčmář wrote:
  Ok.
  I've seen merit in pvqspinlock even with slightly slower first-waiter,
  so I would have happily sacrificed those horrible branches.
  (I prefer elegant to optimized code, but I can see why we want to be
   strictly better than ticketlock.)
  Peter mentioned that we are focusing on bare-metal patches, so I'll
  withhold my other paravirt rants until they are polished.

(It was an ambiguous sentence; I have comments for later patches.)

 Well, paravirt must happen too, but it comes later in this series; patch 3,
 which we're replying to, is still very much in the bare-metal part of the
 series.

(I think that bare metal spans the first 7 patches.)

 I've not had time yet to decode all that Waiman has done to make
 paravirt work.
 
 But as a general rule I like patches that start with something simple
 and working and then optimize it; this series doesn't seem to quite
 grasp that.
 
  And to forcefully bring this thread a little bit on-topic:
  
  Pending-bit is effectively a lock in a lock, so I was wondering why
  don't we use more pending bits; advantages are the same, just diminished
  by the probability of having an ideally contended lock:
   - waiter won't be blocked on RAM access if critical section (or more)
 ends sooner
   - some unlucky cacheline is not forgotten
   - faster unlock (no need for tail operations)
  (- ?)
  disadvantages are magnified:
   - increased complexity
   - intense cacheline sharing
 (I thought that this is the main disadvantage of ticketlock.)
  (- ?)
  
  One bit still improved performance; is it the best we got?
 
 So, the advantage of one bit is that if we use a whole byte for 1 bit we
 can avoid some atomic ops.
 
 The entire reason for this in-word spinner is to amortize the cost of
 hitting the external node cacheline.

Every pending CPU removes one critical-section length from the delay
caused by cacheline propagation, and a really cold cache costs
hundreds(?) of cycles, so we could burn some of them to ensure correctness
and still be waiting when the first pending CPU unlocks.

 So traditional locks like test-and-test and the ticket lock only ever
 access the spinlock word itself; this MCS-style queueing lock has a
 second (and, see my other rants in this thread, when done wrong more
 than 2) cacheline to touch.
 
 That said, all our benchmarking is pretty much for the cache-hot case,
 so I'm not entirely convinced yet that the one pending bit makes up for
 it; it does in the cache-hot case.

Yeah, we probably use the faster pre-lock quite a lot.
The cover letter states that queue depth 1-3 is a bit slower than the ticket
spinlock, so we might not be losing if we implemented a faster
in-word lock of this capacity.  (Not that I'm a fan of the hybrid lock.)

 But... writing cache-cold benchmarks is _hard_ :/

Wouldn't clflush of the second cacheline before trying for the lock give
us a rough estimate?
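
(An illustrative, untested, x86-only sketch of such a measurement; it would
have to live next to mcs_nodes in qspinlock.c and uses the series'
queue_spin_lock()/queue_spin_unlock():)

	static noinline void cold_node_lock(struct qspinlock *lock)
	{
		struct mcs_spinlock *node = this_cpu_ptr(&mcs_nodes[0]);

		clflush(node);		/* evict the node cacheline */
		mb();			/* finish the flush before taking the lock */

		queue_spin_lock(lock);	/* slowpath's node access is now cold */
		queue_spin_unlock(lock);
	}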


Re: [PATCH v10 03/19] qspinlock: Add pending bit

2014-05-13 Thread Waiman Long

On 05/12/2014 11:22 AM, Radim Krčmář wrote:

2014-05-07 11:01-0400, Waiman Long:

From: Peter Zijlstra pet...@infradead.org

Because the qspinlock needs to touch a second cacheline; add a pending
bit and allow a single in-word spinner before we punt to the second
cacheline.

I think there is an unwanted scenario on virtual machines:
1) VCPU sets the pending bit and starts spinning.
2) Pending VCPU gets descheduled.
 - we have PLE and the lock holder isn't running [1]
 - the hypervisor randomly preempts us
3) Lock holder unlocks while the pending VCPU is waiting in the queue.
4) Subsequent lockers will see a free lock with the pending bit set and will
loop in trylock's 'for (;;)'
 - the worst case is lock starvation [2]
 - PLE can save us from wasting the whole timeslice

Retry threshold is the easiest solution, regardless of its ugliness [4].


Yes, that can be a real issue. Some sort of retry threshold, as you 
said, should be able to handle it.


BTW, the relevant patch should be 16/19 where the PV spinlock stuff 
should be discussed. This patch is perfectly fine.



Another minor design flaw is that the formerly-first VCPU gets appended to
the tail when it decides to queue;
is the performance gain worth it?

Thanks.


Yes, the performance gain is worth it. The primary goal is to be no worse
than the ticket spinlock in the light-load situation, which is the most
common case. This feature is needed to achieve that.


-Longman


Re: [PATCH v10 03/19] qspinlock: Add pending bit

2014-05-12 Thread Radim Krčmář
2014-05-07 11:01-0400, Waiman Long:
 From: Peter Zijlstra pet...@infradead.org
 
 Because the qspinlock needs to touch a second cacheline; add a pending
 bit and allow a single in-word spinner before we punt to the second
 cacheline.

I think there is an unwanted scenario on virtual machines:
1) VCPU sets the pending bit and starts spinning.
2) Pending VCPU gets descheduled.
- we have PLE and the lock holder isn't running [1]
- the hypervisor randomly preempts us
3) Lock holder unlocks while the pending VCPU is waiting in the queue.
4) Subsequent lockers will see a free lock with the pending bit set and will
   loop in trylock's 'for (;;)'
- the worst case is lock starvation [2]
- PLE can save us from wasting the whole timeslice

Retry threshold is the easiest solution, regardless of its ugliness [4].

Another minor design flaw is that the formerly-first VCPU gets appended to
the tail when it decides to queue;
is the performance gain worth it?

Thanks.


---
1: Pause Loop Exiting is almost certain to vmexit in that case: we
   default to 4096 TSC cycles on KVM, and the pending loop is longer than 4
   (4096/PSPIN_THRESHOLD).
   We would also vmexit if the critical section was longer than 4k cycles.

2: In this example, vcpus 1 and 2 use the lock while 3 never gets there.
   VCPU:         1           2           3
               lock()                           // we are the holder
                           pend()               // we have pending bit
                           vmexit               // while in PSPIN_THRESHOLD loop
               unlock()
                                       vmentry
                                       SPINNING // for (;;) loop
                                       vmexit
                           vmentry
                           lock()
               pend()
               vmexit
                           unlock()
                                       vmentry
                                       SPINNING
                                       vmexit
               vmentry
               --- loop ---

   The window is (should be) too small to happen in bare-metal.

3: Pending VCPU was first in line, but when it decides to queue, it must
   go to the tail.

4:
The idea is to prevent unfairness by queueing after a while of useless
looping.  The magic value should be set a bit above the time it takes an
active pending-bit holder to go through the loop; 4 looks like enough.
We can use either pv_qspinlock_enabled() or cpu_has_hypervisor.
I presume that we never want this to happen in a VM and that we won't
have pv_qspinlock_enabled() without cpu_has_hypervisor.

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 37b5c7f..cd45c27 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -573,7 +573,7 @@ static __always_inline int get_qlock(struct qspinlock *lock)
 static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
 {
u32 old, new, val = *pval;
-   int retry = 1;
+   int retry = 0;
 
/*
 * trylock || pending
@@ -595,9 +595,9 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
 * a while to see if that either bit will be cleared.
 * If that is no change, we return and be queued.
 */
-   if (!retry)
+   if (retry)
return 0;
-   retry--;
+   retry++;
cpu_relax();
cpu_relax();
*pval = val = atomic_read(&lock->val);
@@ -608,7 +608,11 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
 * Assuming that the pending bit holder is going to
 * set the lock bit and clear the pending bit soon,
 * it is better to wait than to exit at this point.
+* Our assumption does not hold on hypervisors, where
+* the pending bit holder doesn't have to be running.
 */
+   if (cpu_has_hypervisor && ++retry > MAGIC)
+   return 0;
cpu_relax();
*pval = val = atomic_read(&lock->val);
continue;


Re: [PATCH v10 03/19] qspinlock: Add pending bit

2014-05-12 Thread Peter Zijlstra
On Mon, May 12, 2014 at 05:22:08PM +0200, Radim Krčmář wrote:
 2014-05-07 11:01-0400, Waiman Long:
  From: Peter Zijlstra pet...@infradead.org
  
  Because the qspinlock needs to touch a second cacheline; add a pending
  bit and allow a single in-word spinner before we punt to the second
  cacheline.
 
 I think there is an unwanted scenario on virtual machines:
 1) VCPU sets the pending bit and starts spinning.
 2) Pending VCPU gets descheduled.
 - we have PLE and the lock holder isn't running [1]
 - the hypervisor randomly preempts us
 3) Lock holder unlocks while the pending VCPU is waiting in the queue.
 4) Subsequent lockers will see a free lock with the pending bit set and will
loop in trylock's 'for (;;)'
 - the worst case is lock starvation [2]
 - PLE can save us from wasting the whole timeslice
 
 Retry threshold is the easiest solution, regardless of its ugliness [4].
 
 Another minor design flaw is that the formerly-first VCPU gets appended to
 the tail when it decides to queue;
 is the performance gain worth it?

This is all for real hardware; I've not yet stared at the (para)virt
crap.

My primary concern is that native hardware runs well and that the
(para)virt support doesn't rape the code -- so far it's failing hard on
the second.




Re: [PATCH v10 03/19] qspinlock: Add pending bit

2014-05-09 Thread Waiman Long

On 05/08/2014 02:57 PM, Peter Zijlstra wrote:

On Wed, May 07, 2014 at 11:01:31AM -0400, Waiman Long wrote:

+/**
+ * trylock_pending - try to acquire queue spinlock using the pending bit
+ * @lock : Pointer to queue spinlock structure
+ * @pval : Pointer to value of the queue spinlock 32-bit word
+ * Return: 1 if lock acquired, 0 otherwise
+ */
+static inline int trylock_pending(struct qspinlock *lock, u32 *pval)

Still don't like you put it in a separate function, but you don't need
the pointer thing. Note how after you fail the trylock_pending() you
touch the second (node) cacheline.


@@ -110,6 +184,9 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)

BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));

+   if (trylock_pending(lock, &val))
+   return; /* Lock acquired */
+
node = this_cpu_ptr(&mcs_nodes[0]);
idx = node->count++;
tail = encode_tail(smp_processor_id(), idx);
@@ -119,15 +196,18 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
node->next = NULL;

/*
+* we already touched the queueing cacheline; don't bother with pending
+* stuff.
+*
 * trylock || xchg(lock, node)
 *
-* 0,0 -> 0,1 ; trylock
-* p,x -> n,x ; prev = xchg(lock, node)
+* 0,0,0 -> 0,0,1 ; trylock
+* p,y,x -> n,y,x ; prev = xchg(lock, node)
 */

And any value of @val we might have had here is completely out-dated.
The only thing that makes sense is to set:

val = 0;

Which makes us start with a trylock; alternatively we can re-read val.


That is true. I will make the change to get rid of the pointer thing.

As for the separate trylock_pending function, my original goal was to 
have a better delineation of different portions of the code.  Given the 
fact that I broke up the slowpath function into 2 in a later patch, I 
may not really need to separate it out. I will pull it back in the next 
version.


-Longman


Re: [PATCH v10 03/19] qspinlock: Add pending bit

2014-05-08 Thread Peter Zijlstra
On Wed, May 07, 2014 at 11:01:31AM -0400, Waiman Long wrote:
 +/**
 + * trylock_pending - try to acquire queue spinlock using the pending bit
 + * @lock : Pointer to queue spinlock structure
 + * @pval : Pointer to value of the queue spinlock 32-bit word
 + * Return: 1 if lock acquired, 0 otherwise
 + */
 +static inline int trylock_pending(struct qspinlock *lock, u32 *pval)

Still don't like you put it in a separate function, but you don't need
the pointer thing. Note how after you fail the trylock_pending() you
touch the second (node) cacheline.

 @@ -110,6 +184,9 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
  
   BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
  
 + if (trylock_pending(lock, &val))
 + return; /* Lock acquired */
 +
   node = this_cpu_ptr(&mcs_nodes[0]);
   idx = node->count++;
   tail = encode_tail(smp_processor_id(), idx);
 @@ -119,15 +196,18 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
   node->next = NULL;
  
   /*
 +  * we already touched the queueing cacheline; don't bother with pending
 +  * stuff.
 +  *
* trylock || xchg(lock, node)
*
 -  * 0,0 -> 0,1 ; trylock
 -  * p,x -> n,x ; prev = xchg(lock, node)
 +  * 0,0,0 -> 0,0,1 ; trylock
 +  * p,y,x -> n,y,x ; prev = xchg(lock, node)
*/

And any value of @val we might have had here is completely out-dated.
The only thing that makes sense is to set:

val = 0;

Which makes us start with a trylock; alternatively we can re-read val.

   for (;;) {
   new = _Q_LOCKED_VAL;
   if (val)
 - new = tail | (val & _Q_LOCKED_MASK);
 + new = tail | (val & _Q_LOCKED_PENDING_MASK);
  
   old = atomic_cmpxchg(&lock->val, val, new);
   if (old == val)
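
(For illustration, a sketch of that suggestion -- either variant, not the
actual respin:)

	/*
	 * Any @val read before trylock_pending() is stale by now.  Either
	 *
	 *	val = 0;		(start with a plain trylock)
	 *
	 * or re-read the lock word before entering the loop above:
	 */
	val = atomic_read(&lock->val);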


[PATCH v10 03/19] qspinlock: Add pending bit

2014-05-07 Thread Waiman Long
From: Peter Zijlstra pet...@infradead.org

Because the qspinlock needs to touch a second cacheline; add a pending
bit and allow a single in-word spinner before we punt to the second
cacheline.

Signed-off-by: Peter Zijlstra pet...@infradead.org
Signed-off-by: Waiman Long waiman.l...@hp.com
---
 include/asm-generic/qspinlock_types.h |   12 +++-
 kernel/locking/qspinlock.c|  121 +++--
 2 files changed, 110 insertions(+), 23 deletions(-)

diff --git a/include/asm-generic/qspinlock_types.h b/include/asm-generic/qspinlock_types.h
index f66f845..bd25081 100644
--- a/include/asm-generic/qspinlock_types.h
+++ b/include/asm-generic/qspinlock_types.h
@@ -39,8 +39,9 @@ typedef struct qspinlock {
  * Bitfields in the atomic value:
  *
  *  0- 7: locked byte
- *  8- 9: tail index
- * 10-31: tail cpu (+1)
+ * 8: pending
+ *  9-10: tail index
+ * 11-31: tail cpu (+1)
  */
 #define _Q_SET_MASK(type)	(((1U << _Q_ ## type ## _BITS) - 1)\
 				      << _Q_ ## type ## _OFFSET)
@@ -48,7 +49,11 @@ typedef struct qspinlock {
 #define _Q_LOCKED_BITS 8
 #define _Q_LOCKED_MASK _Q_SET_MASK(LOCKED)
 
-#define _Q_TAIL_IDX_OFFSET (_Q_LOCKED_OFFSET + _Q_LOCKED_BITS)
+#define _Q_PENDING_OFFSET  (_Q_LOCKED_OFFSET + _Q_LOCKED_BITS)
+#define _Q_PENDING_BITS1
+#define _Q_PENDING_MASK_Q_SET_MASK(PENDING)
+
+#define _Q_TAIL_IDX_OFFSET (_Q_PENDING_OFFSET + _Q_PENDING_BITS)
 #define _Q_TAIL_IDX_BITS   2
 #define _Q_TAIL_IDX_MASK   _Q_SET_MASK(TAIL_IDX)
 
@@ -57,5 +62,6 @@ typedef struct qspinlock {
 #define _Q_TAIL_CPU_MASK   _Q_SET_MASK(TAIL_CPU)
 
 #define _Q_LOCKED_VAL  (1U << _Q_LOCKED_OFFSET)
+#define _Q_PENDING_VAL (1U << _Q_PENDING_OFFSET)
 
 #endif /* __ASM_GENERIC_QSPINLOCK_TYPES_H */
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index b97a1ad..6467bfc 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -83,23 +83,97 @@ static inline struct mcs_spinlock *decode_tail(u32 tail)
return per_cpu_ptr(&mcs_nodes[idx], cpu);
 }
 
+#define _Q_LOCKED_PENDING_MASK (_Q_LOCKED_MASK | _Q_PENDING_MASK)
+
+/**
+ * trylock_pending - try to acquire queue spinlock using the pending bit
+ * @lock : Pointer to queue spinlock structure
+ * @pval : Pointer to value of the queue spinlock 32-bit word
+ * Return: 1 if lock acquired, 0 otherwise
+ */
+static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
+{
+   u32 old, new, val = *pval;
+
+   /*
+* trylock || pending
+*
+* 0,0,0 -> 0,0,1 ; trylock
+* 0,0,1 -> 0,1,1 ; pending
+*/
+   for (;;) {
+   /*
+* If we observe any contention; queue.
+*/
+   if (val & ~_Q_LOCKED_MASK)
+   return 0;
+
+   new = _Q_LOCKED_VAL;
+   if (val == new)
+   new |= _Q_PENDING_VAL;
+
+   old = atomic_cmpxchg(&lock->val, val, new);
+   if (old == val)
+   break;
+
+   *pval = val = old;
+   }
+
+   /*
+* we won the trylock
+*/
+   if (new == _Q_LOCKED_VAL)
+   return 1;
+
+   /*
+* we're pending, wait for the owner to go away.
+*
+* *,1,1 -> *,1,0
+*/
+   while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK)
+   arch_mutex_cpu_relax();
+
+   /*
+* take ownership and clear the pending bit.
+*
+* *,1,0 -> *,0,1
+*/
+   for (;;) {
+   new = (val & ~_Q_PENDING_MASK) | _Q_LOCKED_VAL;
+
+   old = atomic_cmpxchg(&lock->val, val, new);
+   if (old == val)
+   break;
+
+   val = old;
+   }
+   return 1;
+}
+
 /**
  * queue_spin_lock_slowpath - acquire the queue spinlock
  * @lock: Pointer to queue spinlock structure
  * @val: Current value of the queue spinlock 32-bit word
  *
- * (queue tail, lock bit)
+ * (queue tail, pending bit, lock bit)
+ *
+ *              fast     :    slow                                  :    unlock
+ *                       :                                          :
+ * uncontended  (0,0,0) -:--> (0,0,1) ------------------------------:--> (*,*,0)
+ *                       :       | ^--------.------.             /  :
+ *                       :       v           \      \            |  :
+ * pending               :    (0,1,1) +--> (0,1,0)   \           |  :
+ *                       :       | ^--'              |           |  :
+ *                       :       v                   |           |  :
+ * uncontended           :    (n,x,y) +--> (n,0,0) --'            |  :
+ *   queue               :       | ^--'                           |  :
+ *                       :       v                                |  :
+ * contended             :    (*,x,y) +--> (*,0,0) ---> (*,0,1) --'  :