There is a problem in the current trylock_pending() function.  When the
lock is free but the pending bit holder hasn't grabbed the lock and
cleared the pending bit yet, trylock_pending() will fail.  As a result,
the regular queuing code path is taken most of the time even when only
2 tasks are contending for the lock.  Since the pending bit holder is
expected to get the lock and clear the pending bit soon, it is better
to wait for that than to go through the higher-overhead queuing path.

This patch modifies trylock_pending() to wait until the pending bit
holder gets the lock and clears the pending bit.  When both the lock
and pending bits are set, the new code also waits briefly to see if
either one is cleared.  If neither is cleared, it gives up and falls
back to queuing.

The following tables show the before-patch execution times (in ms)
of a micro-benchmark in which 5M iterations of the lock/unlock cycle
were run on a 10-core Westmere-EX x86-64 CPU with 2 different types of
loads - standalone (lock and protected data in different cachelines)
and embedded (lock and protected data in the same cacheline).
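
The benchmark itself is not part of this patch.  For reference, a minimal
user-space sketch of the kind of loop being timed is shown below; it is
hypothetical (NR_ITERS, test_data and lock_loop are made-up names) and uses
a pthread spinlock as a stand-in for the in-kernel lock, with the "embedded"
layout where lock and data share a cacheline:

#include <pthread.h>
#include <stdio.h>

#define NR_ITERS	5000000		/* 5M lock/unlock cycles per task */

/* "Embedded" load: lock and protected data share a cacheline. */
struct test_data {
	pthread_spinlock_t lock;	/* stand-in for the kernel spinlock */
	unsigned long counter;		/* protected data */
};

static struct test_data d;

static void *lock_loop(void *arg)
{
	int i;

	for (i = 0; i < NR_ITERS; i++) {
		pthread_spin_lock(&d.lock);
		d.counter++;			/* critical section */
		pthread_spin_unlock(&d.lock);
	}
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_spin_init(&d.lock, PTHREAD_PROCESS_PRIVATE);
	pthread_create(&t1, NULL, lock_loop, NULL);
	pthread_create(&t2, NULL, lock_loop, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	printf("counter = %lu\n", d.counter);
	return 0;
}

For the "standalone" load, the counter would simply live in a different
cacheline from the lock; the actual measurements were of course taken
against the kernel's ticket and queue spinlocks, not a pthread lock.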

                  [Standalone/Embedded - same node]
  # of tasks    Ticket lock     Queue lock       %Change
  ----------    -----------     ----------       -------
       1          135/ 111       135/ 101          0%/  -9%
       2          890/ 779      1885/1990       +112%/+156%
       3         1932/1859      2333/2341        +21%/ +26%
       4         2829/2726      2900/2923         +3%/  +7%
       5         3834/3761      3655/3648         -5%/  -3%
       6         4963/4976      4336/4326        -13%/ -13%
       7         6299/6269      5057/5064        -20%/ -19%
       8         7691/7569      5786/5798        -25%/ -23%

With 1 task per NUMA node, the execution times are:

                [Standalone - different nodes]
  # of nodes    Ticket lock     Queue lock      %Change
  ----------    -----------     ----------      -------
       1           135            135             0%
       2          4604           5087           +10%
       3         10940          12224           +12%
       4         21555          10555           -51%

It can be seen that the queue spinlock is slower than the ticket
spinlock when there are 2 or 3 contending tasks. In all the other cases,
the queue spinlock is equal to or faster than the ticket spinlock.

With this patch, the performance data for 2 contending tasks are:

                  [Standalone/Embedded]
  # of tasks    Ticket lock     Queue lock      %Change
  ----------    -----------     ----------      -------
       2          890/779        984/871        +11%/+12%

                [Standalone - different nodes]
  # of nodes    Ticket lock     Queue lock      %Change
  ----------    -----------     ----------      -------
       2          4604           1364           -70%

It can be seen that the queue spinlock performance for 2 contending
tasks is now comparable to the ticket spinlock on the same node, but
much faster when the tasks are on different nodes. With 3 contending
tasks, however, the ticket spinlock is still quite a bit faster.

Signed-off-by: Waiman Long <waiman.l...@hp.com>
---
 kernel/locking/qspinlock.c |   32 ++++++++++++++++++++++++++++++--
 1 files changed, 30 insertions(+), 2 deletions(-)

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 55601b4..497da24 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -216,6 +216,7 @@ xchg_tail(struct qspinlock *lock, u32 tail, u32 *pval)
 static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
 {
        u32 old, new, val = *pval;
+       int retry = 1;
 
        /*
         * trylock || pending
@@ -225,11 +226,38 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
         */
        for (;;) {
                /*
-                * If we observe any contention; queue.
+                * If we observe that the queue is not empty,
+                * return and be queued.
                 */
-               if (val & ~_Q_LOCKED_MASK)
+               if (val & _Q_TAIL_MASK)
                        return 0;
 
+               if ((val & _Q_LOCKED_PENDING_MASK) ==
+                   (_Q_LOCKED_VAL|_Q_PENDING_VAL)) {
+                       /*
+                        * If both the lock and pending bits are set, we wait
+                        * a while to see if either bit will be cleared.
+                        * If there is no change, we return and get queued.
+                        */
+                       if (!retry)
+                               return 0;
+                       retry--;
+                       cpu_relax();
+                       cpu_relax();
+                       *pval = val = atomic_read(&lock->val);
+                       continue;
+               } else if ((val & _Q_LOCKED_PENDING_MASK) == _Q_PENDING_VAL) {
+                       /*
+                        * Pending bit is set, but not the lock bit.
+                        * Assuming that the pending bit holder is going to
+                        * set the lock bit and clear the pending bit soon,
+                        * it is better to wait than to exit at this point.
+                        */
+                       cpu_relax();
+                       *pval = val = atomic_read(&lock->val);
+                       continue;
+               }
+
                new = _Q_LOCKED_VAL;
                if (val == new)
                        new |= _Q_PENDING_VAL;
-- 
1.7.1
