On Mon, Sep 14, 2015 at 03:15:20PM -0400, Waiman Long wrote:
> On 09/14/2015 10:00 AM, Peter Zijlstra wrote:
> >On Fri, Sep 11, 2015 at 02:37:37PM -0400, Waiman Long wrote:
> >>This patch allows one attempt for the lock waiter to steal the lock
                      ^^^
> >>when entering the PV slowpath. This helps to reduce the performance
> >>penalty caused by lock waiter preemption while not having much of
> >>the downsides of a real unfair lock.

> >>@@ -415,8 +458,12 @@ static void pv_wait_head(struct qspinlock *lock, struct mcs_spinlock *node)
> >>
> >> 	for (;; waitcnt++) {
> >> 		for (loop = SPIN_THRESHOLD; loop; loop--) {
> >>-			if (!READ_ONCE(l->locked))
> >>-				return;
> >>+			/*
> >>+			 * Try to acquire the lock when it is free.
> >>+			 */
> >>+			if (!READ_ONCE(l->locked) &&
> >>+			    (cmpxchg(&l->locked, 0, _Q_LOCKED_VAL) == 0))
> >>+				goto gotlock;
> >> 			cpu_relax();
> >> 		}
> >>
> >This isn't _once_, this is once per 'wakeup'. And note that interrupts
> >unrelated to the kick can equally wake the vCPU up.

> > void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> > {
> > 	:
> > 	/*
> > 	 * We touched a (possibly) cold cacheline in the per-cpu queue node;
> > 	 * attempt the trylock once more in the hope someone let go while we
> > 	 * weren't watching.
> > 	 */
> > 	if (queued_spin_trylock(lock))
> > 		goto release;
>
> This is the only place where I consider lock stealing happens. Again, I
> should have a comment in pv_queued_spin_trylock_unfair() to say where it
> will be called.

But you're not adding that... What you did add is a steal in
pv_wait_head(), and it's not even once per pv_wait_head(), it's inside
the spin loop (I read it wrong yesterday).

So that makes the entire Changelog complete crap.

There isn't _one_ attempt, and there is absolutely no fairness left.
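To make the disputed point concrete, here is a minimal, compilable
user-space sketch of the two behaviours being argued about. It uses C11
atomics rather than the kernel's qspinlock internals; the function names
(steal_once(), wait_head_attempts()) and the SPIN_THRESHOLD value are
illustrative assumptions, not kernel code. steal_once() models the single
trylock on slowpath entry that Waiman points to; wait_head_attempts()
models the posted pv_wait_head() hunk, where the cmpxchg sits inside the
spin loop.

/*
 * Sketch only: C11-atomics model of "one steal attempt" vs. the
 * in-loop cmpxchg from the posted hunk. Names and SPIN_THRESHOLD
 * are assumptions for illustration, not kernel code.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define SPIN_THRESHOLD	(1 << 15)
#define _Q_LOCKED_VAL	1

/*
 * What the changelog describes: a single acquisition attempt on
 * slowpath entry, analogous to the queued_spin_trylock() quoted above.
 */
static bool steal_once(atomic_int *locked)
{
	int old = 0;

	return atomic_compare_exchange_strong(locked, &old, _Q_LOCKED_VAL);
}

/*
 * What the posted hunk does: the cmpxchg is inside the spin loop, so
 * one "wakeup" yields up to SPIN_THRESHOLD acquisition attempts, and
 * any unrelated interrupt that wakes the vCPU restarts the outer loop
 * and adds more.
 */
static bool wait_head_attempts(atomic_int *locked, unsigned long *attempts)
{
	for (int loop = SPIN_THRESHOLD; loop; loop--) {
		int old = 0;

		(*attempts)++;
		if (!atomic_load_explicit(locked, memory_order_relaxed) &&
		    atomic_compare_exchange_strong(locked, &old, _Q_LOCKED_VAL))
			return true;	/* the "goto gotlock" case */
		/* cpu_relax() in the kernel; plain spin here */
	}
	return false;	/* kernel would fall through to pv_wait() */
}

int main(void)
{
	atomic_int locked = _Q_LOCKED_VAL;	/* held by someone else */
	unsigned long attempts = 0;

	wait_head_attempts(&locked, &attempts);
	printf("one pass through the loop made %lu attempts, not 1\n",
	       attempts);

	atomic_store(&locked, 0);
	printf("entry-time steal succeeded: %d\n", steal_once(&locked));
	return 0;
}

Compiled with any C11 compiler, the first line of output reports
SPIN_THRESHOLD attempts for a single pass through the inner loop while
the lock stays held; that is the gap Peter is pointing at between the
changelog's "one attempt" and the code.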