On Sat, 4 Mar 2017, Peter Zijlstra wrote:
> @@ -2166,36 +2252,43 @@ static int fixup_pi_state_owner(u32 __us
>       /*
> -      * To handle the page fault we need to drop the hash bucket
> -      * lock here. That gives the other task (either the highest priority
> -      * waiter itself or the task which stole the rtmutex) the
> -      * chance to try the fixup of the pi_state. So once we are
> -      * back from handling the fault we need to check the pi_state
> -      * after reacquiring the hash bucket lock and before trying to
> -      * do another fixup. When the fixup has been done already we
> -      * simply return.
> +      * To handle the page fault we need to drop the locks here. That gives
> +      * the other task (either the highest priority waiter itself or the
> +      * task which stole the rtmutex) the chance to try the fixup of the
> +      * pi_state. So once we are back from handling the fault we need to
> +      * check the pi_state after reacquiring the locks and before trying to
> +      * do another fixup. When the fixup has been done already we simply
> +      * return.
> +      *
> +      * Note: we hold both hb->lock and pi_mutex->wait_lock. We can safely
> +      * drop hb->lock since the caller owns the hb -> futex_q relation.
> +      * Dropping the pi_mutex->wait_lock requires the state revalidate.
>        */
>  handle_fault:
> +     raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
>       spin_unlock(q->lock_ptr);
>  
>       ret = fault_in_user_writeable(uaddr);
>  
>       spin_lock(q->lock_ptr);
> +     raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
>  
>       /*
>        * Check if someone else fixed it for us:

Adding context:

         */
        if (pi_state->owner != oldowner)
                return 0;

        if (ret)
                return ret;

        goto retry;

Both 'return' statements leak &pi_state->pi_mutex.wait_lock: the lock is
reacquired after the fault has been handled, but neither early-return path
drops it again.

Thanks,

        tglx
