On Wed, Jan 15, 2014 at 08:48:44AM +0100, Peter Zijlstra wrote:
> On Wed, Jan 15, 2014 at 08:44:20AM +0100, Peter Zijlstra wrote:
> > On Tue, Jan 14, 2014 at 04:33:08PM -0800, Jason Low wrote:
> > > The mutex_can_spin_on_owner() function should also return false if the
> > > task needs to be rescheduled.
> > >
> >
> > While I was staring at mutex_can_spin_on_owner(); don't we need this?
> >
> >  kernel/locking/mutex.c | 4 +++-
> >  1 file changed, 3 insertions(+), 1 deletion(-)
> >
> > diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
> > index 4dd6e4c219de..480d2f437964 100644
> > --- a/kernel/locking/mutex.c
> > +++ b/kernel/locking/mutex.c
> > @@ -214,8 +214,10 @@ static inline int mutex_can_spin_on_owner(struct mutex *lock)
> >
> >  	rcu_read_lock();
> >  	owner = ACCESS_ONCE(lock->owner);
> > -	if (owner)
> > +	if (owner) {
> That is, it's an unmatched barrier, as mutex_set_owner() doesn't include
> a barrier, and I don't think it needs to; but on Alpha we still need this
> read barrier to ensure we do not mess up the related load, afaik.
>
> Paul? can you explain an unpaired read_barrier_depends?
That is an impressive one! ;-)  My rationale for the code without
smp_read_barrier_depends() is that (1) the task struct was already
exposed to readers and (2) the check is heuristic in nature -- if we
miss the assignment to ->on_cpu due to memory ordering (or for any
other reason), we just sleep unnecessarily.

If we did need full ordering (which I do -not- believe that we do at
the moment), then the above ACCESS_ONCE() could become rcu_dereference()
and mutex_set_owner() would need an smp_store_release().  So if we need
a barrier here (which, again, I believe we do not), then there needs to
be a paired barrier in mutex_set_owner().

							Thanx, Paul

> > +		smp_read_barrier_depends();
> >  		retval = owner->on_cpu;
> > +	}
> >  	rcu_read_unlock();
> >  	/*
> >  	 * if lock->owner is not set, the mutex owner may have just acquired
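
To make the pairing Paul describes concrete, here is a minimal sketch of
what the fully ordered variant might look like. It is not the mainline
code: the struct layout is simplified, the __rcu annotation is omitted,
and the my_*() helper names are invented for illustration. The reader's
ACCESS_ONCE() becomes rcu_dereference(), and the writer publishes ->owner
with smp_store_release() so the two sides pair up.

#include <linux/sched.h>	/* struct task_struct, current, ->on_cpu */
#include <linux/rcupdate.h>	/* rcu_read_lock(), rcu_dereference() */
#include <asm/barrier.h>	/* smp_store_release() */

/* Sketch only: simplified stand-in for struct mutex. */
struct my_lock {
	struct task_struct *owner;
};

/*
 * Writer side: publish ->owner with release semantics, so every store
 * that initialized the owner task (including ->on_cpu) is ordered
 * before the store that makes the pointer visible to readers.
 */
static inline void my_set_owner(struct my_lock *lock)
{
	smp_store_release(&lock->owner, current);
}

/*
 * Reader side: rcu_dereference() supplies the dependency ordering (an
 * actual barrier only on Alpha), so the owner->on_cpu load cannot be
 * satisfied before the ->owner load it depends on.
 */
static inline int my_can_spin_on_owner(struct my_lock *lock)
{
	struct task_struct *owner;
	int retval = 1;

	rcu_read_lock();
	owner = rcu_dereference(lock->owner);
	if (owner)
		retval = owner->on_cpu;
	rcu_read_unlock();

	return retval;
}

On most architectures rcu_dereference() costs nothing beyond the plain
load, so the price of the pairing is the release barrier in the
set-owner path; Paul's point is that the heuristic nature of the check
makes even that unnecessary here, since a stale ->on_cpu only causes an
unneeded sleep.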