On Mon, 09 Jun 2014 20:28:08 -0000
Thomas Gleixner <t...@linutronix.de> wrote:

> Add commentary to document the chain walk and the protection mechanisms
> and their scope.
> 
> Signed-off-by: Thomas Gleixner <t...@linutronix.de>
> ---
>  kernel/locking/rtmutex.c |   52 +++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 52 insertions(+)
> 
> Index: tip/kernel/locking/rtmutex.c
> ===================================================================
> --- tip.orig/kernel/locking/rtmutex.c
> +++ tip/kernel/locking/rtmutex.c
> @@ -285,6 +285,47 @@ static inline struct rt_mutex *task_bloc
>   * @top_task:        the current top waiter
>   *
>   * Returns 0 or -EDEADLK.
> + *
> + * Chain walk basics and protection scope
> + *
> + * [A] refcount on task
> + * [B] task->pi_lock held
> + * [C] rtmutex->lock held

[A], [B] and [C] are rather meaningless, and require constantly looking
back up at the key. Perhaps use [R], [P] and [L] instead:

 [R] refcount on task (get_task_struct)
 [P] task->pi_lock held
 [L] rtmutex->lock held


That way we can associate R with the refcount, P with pi_lock and L with
the rtmutex lock. Easier to remember.
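
For example (purely illustrative, just re-lettering the lines from the
patch above), the try-lock step would then read:

   lock(task->pi_lock);                    [R] acquire [P]
   waiter = task->pi_blocked_on;           [P]
   if (!try_lock(lock->wait_lock)) {       [P] try to acquire [L]
           unlock(task->pi_lock);          drop [P]
           goto retry;
   }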


> + *
> + * call()                                    Protected by

"call()"?

> + *   @task                                   [A]
> + *   @orig_lock if != NULL                   @top_task is blocked on it
> + *   @next_lock                              Unprotected. Cannot be
> + *                                           dereferenced. Only used for
> + *                                           comparison.
> + *   @orig_waiter if != NULL                 @top_task is blocked on it
> + *   @top_task                               current, or in case of proxy
> + *                                           locking protected by calling
> + *                                           code
> + * again:
> + *   loop_sanity_check();
> + * retry:
> + *   lock(task->pi_lock);                    [A] acquire [B]
> + *   waiter = task->pi_blocked_on;           [B]
> + *   check_exit_conditions();                [B]
> + *   lock = waiter->lock;                    [B]
> + *   if (!try_lock(lock->wait_lock)) {       [B] try to acquire [C]
> + *           unlock(task->pi_lock);          drop [B]
> + *           goto retry;
> + *   }
> + *   check_exit_conditions();                [B] + [C]
> + *   requeue_lock_waiter(lock, waiter);      [B] + [C]
> + *   unlock(task->pi_lock);                  drop [B]
> + *   drop_task_ref(task);                    drop [A]

Maybe just state "put_task_struct()"; fewer abstractions.

> + *   check_exit_conditions();                [C]
> + *   task = owner(lock);                     [C]
> + *   get_task_ref(task);                     [C] acquire [A]

get_task_struct()
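
With the letters suggested above, those two lines would then read
something like:

   put_task_struct(task);                  drop [R]
   ...
   get_task_struct(task);                  [L] acquire [R]

which also points straight at the actual calls in the function.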

-- Steve

> + *   lock(task->pi_lock);                    [C] acquire [B]
> + *   requeue_pi_waiter(task, waiters(lock)); [B] + [C]
> + *   check_exit_conditions();                [B] + [C]
> + *   unlock(task->pi_lock);                  drop [B]
> + *   unlock(lock->wait_lock);                drop [C]
> + *   goto again;
>   */
>  static int rt_mutex_adjust_prio_chain(struct task_struct *task,
>                                     int deadlock_detect,
> @@ -326,6 +367,12 @@ static int rt_mutex_adjust_prio_chain(st
>  
>               return -EDEADLK;
>       }
> +
> +     /*
> +      * We are fully preemptible here and only hold the refcount on
> +      * @task. So everything can have changed under us since the
> +      * caller or our own code below (goto retry) dropped all locks.
> +      */
>   retry:
>       /*
>        * Task can not go away as we did a get_task() before !
> @@ -383,6 +430,11 @@ static int rt_mutex_adjust_prio_chain(st
>       if (!detect_deadlock && waiter->prio == task->prio)
>               goto out_unlock_pi;
>  
> +     /*
> +      * We need to trylock here as we are holding task->pi_lock,
> +      * which is the reverse lock order versus the other rtmutex
> +      * operations.
> +      */
>       lock = waiter->lock;
>       if (!raw_spin_trylock(&lock->wait_lock)) {
>               raw_spin_unlock_irqrestore(&task->pi_lock, flags);
> 
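As a side note for anyone reading along, here is roughly why the trylock
is needed, as a simplified fragment (not a verbatim copy of the kernel
code): the other rtmutex operations take lock->wait_lock first and
task->pi_lock second, while the chain walk already holds task->pi_lock at
this point, so taking lock->wait_lock unconditionally here would be the
reverse (ABBA) order.

 retry:
	raw_spin_lock_irqsave(&task->pi_lock, flags);
	/* ... re-read waiter and lock state, now protected by [P] ... */
	lock = waiter->lock;
	/*
	 * pi_lock is already held, so an unconditional acquisition of
	 * wait_lock here would invert the usual wait_lock -> pi_lock
	 * order.  Trylock, and on failure drop pi_lock and start over.
	 */
	if (!raw_spin_trylock(&lock->wait_lock)) {
		raw_spin_unlock_irqrestore(&task->pi_lock, flags);
		cpu_relax();
		goto retry;
	}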
