On Wed, 2014-06-04 at 13:28 -0700, Davidlohr Bueso wrote:
> On Wed, 2014-06-04 at 12:08 -0700, Jason Low wrote:
> > In __mutex_trylock_slowpath(), we acquire the wait_lock spinlock,
> > xchg() lock->count with -1, then set lock->count back to 0 if there
> > are no waiters, and return true if the prev lock count was 1.
> >
> > However, if we the mutex is already locked, then there may not be
>               ^^ leave that out.
> > much point in attempting the above operations.
>
> Isn't this redundant? I mean, if we enter the slowpath it's because
> __mutex_fastpath_trylock() already failed, so we already know that
> the lock is taken.
This function is really just used as an alternative method of trylock
for !__HAVE_ARCH_CMPXCHG. In that case, the fastpath can call directly
into the slowpath function without checking whether the lock is taken.

> What kind of testing has this change been put through? Any advantages?
> (ie: how many cycles are we saving here?), the trylock mechanism is
> already pretty darn fast.

While I did run tests with this patch, this particular patch shouldn't
show benefits on my machine, since it should be using the more efficient
atomic_cmpxchg trylock. The advantage is in the !__HAVE_ARCH_CMPXCHG
case, where we would avoid taking a spinlock and two atomic operations
when the mutex is already taken.

Thanks.
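For readers without kernel/locking/mutex.c open, here is a rough,
self-contained sketch of the logic being discussed. This is not the
kernel code or the patch itself: the type and function names are made
up for illustration, and C11 atomics plus a pthread spinlock stand in
for the kernel primitives.

/*
 * Simplified userspace sketch of a trylock slowpath: take the wait_lock,
 * xchg() the count with -1, restore it to 0 if nobody is waiting, and
 * report success only if the previous count was 1 (unlocked).
 *
 * The early "already locked" check at the top is the idea under
 * discussion: it only matters when the fastpath cannot use cmpxchg
 * (!__HAVE_ARCH_CMPXCHG) and therefore calls straight into this
 * function without filtering out the locked case first.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

struct sketch_mutex {
	atomic_int count;		/* 1: unlocked, 0: locked, -1: locked, waiters may exist */
	pthread_spinlock_t wait_lock;
	int nr_waiters;			/* stand-in for the kernel's wait_list */
};

static void sketch_mutex_init(struct sketch_mutex *lock)
{
	atomic_init(&lock->count, 1);	/* start out unlocked */
	pthread_spin_init(&lock->wait_lock, PTHREAD_PROCESS_PRIVATE);
	lock->nr_waiters = 0;
}

static bool sketch_trylock_slowpath(struct sketch_mutex *lock)
{
	int prev;

	/*
	 * Proposed early exit: if the mutex is already taken, skip the
	 * spinlock and the two atomic operations below.
	 */
	if (atomic_load(&lock->count) != 1)
		return false;

	pthread_spin_lock(&lock->wait_lock);

	prev = atomic_exchange(&lock->count, -1);

	/*
	 * Set the count back to 0 if there are no waiters, so the lock
	 * doesn't look contended when it isn't.
	 */
	if (lock->nr_waiters == 0)
		atomic_store(&lock->count, 0);

	pthread_spin_unlock(&lock->wait_lock);

	/* The trylock succeeded only if the mutex was unlocked (count was 1). */
	return prev == 1;
}

The saving being claimed is exactly what the early return skips: the
wait_lock round trip plus the xchg()/store pair, in the common case
where a trylock is attempted on an already-held mutex.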