On 08/14/2014 01:30 PM, Davidlohr Bueso wrote:
> On Thu, 2014-08-14 at 13:17 -0400, Waiman Long wrote:
>> I still think it is better to do that after spin_lock_mutex().
> As mentioned, this causes all sorts of hung tasks when another task
> enters the slowpath when locking. There's a big fat comment above.
>> In addition, the atomic_set() is racy. It is better to do something like:
> Why is it racy? Atomically setting the lock to -1, given that the lock
> was stolen, should be safe. The alternative we discussed with Jason was
> to set the counter to -1 in the spinning path. But given that we need to
> serialize the counter check with the list_empty() check, that would
> require the wait_lock. This is very messy and unnecessarily complicates
> things.
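
To make that alternative concrete, here is a rough sketch (hypothetical
code, not from any posted patch; the exact placement in the spin/steal
path and the helpers used are assumptions) of what setting the count
from the spinning path would involve. The wait_lock section is the
messy part:

        /*
         * Hypothetical: mark a stolen lock as contended from the
         * optimistic-spin path.  The steal itself is the usual
         * 1 -> 0 cmpxchg; the wait_lock section only exists to
         * serialize the count update with the list_empty() check.
         */
        if (atomic_cmpxchg(&lock->count, 1, 0) == 1) {
                unsigned long flags;

                spin_lock_mutex(&lock->wait_lock, flags);
                if (!list_empty(&lock->wait_list))
                        atomic_set(&lock->count, -1);
                spin_unlock_mutex(&lock->wait_lock, flags);

                /* lock acquired */
        }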
Let's consider the following scenario:
  Task 1                                Task 2
  ------                                ------
                                        steal the lock
  if (mutex_has_owner) {                      :
        :  <---- a long interrupt       mutex_unlock()  [cnt = 1]
        atomic_set(cnt, -1);
        return;
  }
Now the lock is no longer available and all the tasks that are trying
to get it will hang. IOW, you cannot set the count to -1 unless you
are sure it is 0 to begin with.
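
In code form, the window is here (a simplified sketch of the
unlock-slowpath change being discussed, not the actual patch;
mutex_has_owner() is the patch's check, everything else is
illustrative):

        if (mutex_has_owner(lock)) {
                /*
                 * <-- a long interrupt here lets the task that stole
                 *     the lock finish and call mutex_unlock(), leaving
                 *     count == 1 (unlocked).
                 */
                atomic_set(&lock->count, -1);   /* re-locks a free mutex */
                return;                         /* no one left to unlock it */
        }
        /* ... otherwise fall through to the normal wakeup path ... */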
>> if (atomic_cmpxchg(&lock->count, 0, -1) <= 0)
>>         return;
> Not really because some archs leave the lock at 1 after the unlock
> fastpath.
Yes, I know that. I am saying x86 won't get any benefit from this patch.
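
For reference, the existing unlock slowpath handles the counter roughly
like this (simplified sketch, not verbatim mutex.c), which is where the
per-arch difference above comes from:

        /*
         * Simplified: some architectures release the mutex in the
         * unlock fastpath, so the slowpath may already see count == 1;
         * others enter the slowpath with count still 0 or negative and
         * release the lock here.
         */
        if (__mutex_slowpath_needs_to_unlock())
                atomic_set(&lock->count, 1);

        /*
         * On the former kind, an atomic_cmpxchg(&lock->count, 0, -1)
         * in the slowpath can find 1 instead of 0 and do nothing.
         */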
-Longman