On 10/13/2015 02:23 PM, Peter Zijlstra wrote:
On Tue, Sep 22, 2015 at 04:50:43PM -0400, Waiman Long wrote:
        for (;; waitcnt++) {
+               loop = SPIN_THRESHOLD;
+               while (loop) {
+                       /*
+                        * Spin until the lock is free
+                        */
+                       for (; loop && READ_ONCE(l->locked); loop--)
+                               cpu_relax();
+                       /*
+                        * Seeing the lock is free, this queue head vCPU is
+                        * the rightful next owner of the lock. However, the
+                        * lock may have just been stolen by another task which
+                        * has entered the slowpath. So we need to use atomic
+                        * operation to make sure that we really get the lock.
+                        * Otherwise, we have to wait again.
+                        */
+                       if (cmpxchg(&l->locked, 0, _Q_LOCKED_VAL) == 0)
+                               goto gotlock;
                }
                for (loop = SPIN_THRESHOLD; loop; --loop) {
                        if (!READ_ONCE(l->locked) &&
                            cmpxchg(&l->locked, 0, _Q_LOCKED_VAL) == 0)
                                goto gotlock;

                        cpu_relax();
                }


This was the code that I used in my original patch, but it seemed to give you the impression that too much lock stealing was going on. So I separated it out to make my intention more explicit. I will change it back to the old code.
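
For completeness, here is a minimal user-space sketch of what that combined loop does, using C11 atomics in place of the kernel's READ_ONCE()/cmpxchg(). The spin_word structure, LOCKED_VAL, the SPIN_THRESHOLD value and the bounded_trylock() helper below are made up purely for illustration; they are not the actual qspinlock code.

#include <stdatomic.h>
#include <stdbool.h>

#define SPIN_THRESHOLD  (1 << 15)       /* arbitrary bound for this sketch */
#define LOCKED_VAL      1

/* Hypothetical stand-in for the qspinlock lock word. */
struct spin_word {
        atomic_int locked;
};

/*
 * Bounded trylock: only attempt the atomic cmpxchg when the lock word
 * looks free, so the common spinning case stays read-only.  Returns true
 * if the lock was taken within SPIN_THRESHOLD iterations; otherwise the
 * caller is expected to fall back to waiting (e.g. halting the vCPU).
 */
static bool bounded_trylock(struct spin_word *l)
{
        for (int loop = SPIN_THRESHOLD; loop; --loop) {
                int expected = 0;

                if (atomic_load_explicit(&l->locked, memory_order_relaxed) == 0 &&
                    atomic_compare_exchange_strong(&l->locked, &expected,
                                                   LOCKED_VAL))
                        return true;    /* got the lock */

                /* cpu_relax() equivalent */
#if defined(__i386__) || defined(__x86_64__)
                __builtin_ia32_pause();
#endif
        }
        return false;
}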

Cheers,
Longman

