On 2016/1/22 21:59, Waiman Long wrote:
> On 01/22/2016 06:06 AM, Peter Zijlstra wrote:
>> On Fri, Jan 22, 2016 at 11:56:52AM +0100, Peter Zijlstra wrote:
>>> On Fri, Jan 22, 2016 at 11:53:12AM +0100, Peter Zijlstra wrote:
>>>
>>>> There might be other details, but this is the one that stood out.
>>> I think this also does the wrong thing for use_ww_ctx.
>> Something like so?
> 
> I think that should work. My only minor concern is that putting the waiter 
> spinner at the end of the OSQ will take it longer to get the lock, but that 
> shouldn't be a big issue.
> 
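For context on the OSQ point above: the tail queueing comes from osq_lock(), which appends the caller to an MCS-style queue of optimistic spinners, so a woken waiter that re-enters the spinning path lines up behind every spinner that queued earlier. A simplified sketch of that path as it looks in this kernel (not the literal source; the ww_mutex and need_resched() handling is omitted):

    /*
     * Simplified sketch of mutex_optimistic_spin().  osq_lock() adds this
     * CPU to the tail of the optimistic-spin queue, which is why a woken
     * waiter that calls this ends up behind the existing spinners.
     */
    static bool mutex_optimistic_spin(struct mutex *lock,
                                      struct ww_acquire_ctx *ww_ctx,
                                      const bool use_ww_ctx)
    {
            if (!mutex_can_spin_on_owner(lock))
                    return false;

            if (!osq_lock(&lock->osq))              /* queue at the OSQ tail */
                    return false;

            for (;;) {
                    struct task_struct *owner = READ_ONCE(lock->owner);

                    /* Spin while the owner is still running on a CPU. */
                    if (owner && !mutex_spin_on_owner(lock, owner))
                            break;

                    /* Owner is gone; try to take the lock. */
                    if (mutex_try_to_acquire(lock)) {
                            osq_unlock(&lock->osq);
                            return true;
                    }

                    cpu_relax_lowlatency();
            }

            osq_unlock(&lock->osq);
            return false;
    }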
>> ---
>>   kernel/locking/mutex.c | 15 +++++++++++++++
>>   1 file changed, 15 insertions(+)
>>
>> diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
>> index 0551c219c40e..070a0ac34aa7 100644
>> --- a/kernel/locking/mutex.c
>> +++ b/kernel/locking/mutex.c
>> @@ -512,6 +512,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
>>       struct task_struct *task = current;
>>       struct mutex_waiter waiter;
>>       unsigned long flags;
>> +    bool acquired;
>>       int ret;
>>
>>       preempt_disable();
>> @@ -543,6 +544,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
>>       lock_contended(&lock->dep_map, ip);
>>
>>       for (;;) {
>> +        acquired = false;
>>           /*
>>            * Lets try to take the lock again - this is needed even if
>>            * we get here for the first time (shortly after failing to
>> @@ -577,7 +579,16 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
>>           /* didn't get the lock, go to sleep: */
>>           spin_unlock_mutex(&lock->wait_lock, flags);
>>           schedule_preempt_disabled();
>> +
>> +        if (mutex_is_locked(lock))
>> +            acquired = mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx);
>> +
>>           spin_lock_mutex(&lock->wait_lock, flags);
>> +
>> +        if (acquired) {
>> +            atomic_set(&lock->count, -1);
>> +            break;
>> +        }
>>       }
>>       __set_task_state(task, TASK_RUNNING);
>>
>> @@ -587,6 +598,9 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
>>           atomic_set(&lock->count, 0);
>>       debug_mutex_free_waiter(&waiter);
>>
>> +    if (acquired)
>> +        goto unlock;
>> +
>>   skip_wait:
>>       /* got the lock - cleanup and rejoice! */
>>       lock_acquired(&lock->dep_map, ip);
>> @@ -597,6 +611,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
>>           ww_mutex_set_context_slowpath(ww, ww_ctx);
>>       }
>>
>> +unlock:
>>       spin_unlock_mutex(&lock->wait_lock, flags);
>>       preempt_enable();
>>       return 0;
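For reference, the net effect on the wait loop reads roughly as below (a condensed sketch of the patched slowpath, not the literal code; the signal check and ww bits are elided). The atomic_set(&lock->count, -1) keeps the count in the "locked, waiters may be present" state, since this task is still on the wait list and other waiters may exist, so the eventual unlock still takes the slowpath and wakes them:

    for (;;) {
            acquired = false;

            /* Retry under wait_lock first (handles a racing unlock). */
            if (atomic_read(&lock->count) >= 0 &&
                atomic_xchg(&lock->count, -1) == 1)
                    break;

            __set_task_state(task, state);
            spin_unlock_mutex(&lock->wait_lock, flags);
            schedule_preempt_disabled();

            /*
             * New in this patch: once woken, rejoin the optimistic-spin
             * queue instead of looping straight back to sleep behind the
             * spinners.
             */
            if (mutex_is_locked(lock))
                    acquired = mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx);

            spin_lock_mutex(&lock->wait_lock, flags);

            if (acquired) {
                    /* Still on the wait list; keep "waiters present". */
                    atomic_set(&lock->count, -1);
                    break;
            }
    }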
> 
> Cheers,
> Longman
> 

Looks good to me. I will try this solution and report the result. Thanks, everyone.

Ding


