On Fri, Feb 03, 2017 at 01:38:16AM +0900, J. R. Okajima wrote:
> A simple consolidation. The behaviour should not change.
> 
> Signed-off-by: J. R. Okajima <hooanon...@gmail.com>
> ---
>  kernel/locking/lockdep.c | 39 +++++++++++++++++++++------------------
>  1 file changed, 21 insertions(+), 18 deletions(-)
> 
> diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> index b7a2001..7dc8f8e 100644
> --- a/kernel/locking/lockdep.c
> +++ b/kernel/locking/lockdep.c
> @@ -3464,6 +3464,23 @@ static struct held_lock *find_held_lock(struct task_struct *curr,
>       return ret;
>  }
>  
> +static int validate_held_lock(struct task_struct *curr, unsigned int depth,
> +                           int idx)
> +{
> +     struct held_lock *hlock;
> +
> +     for (hlock = curr->held_locks + idx; idx < depth; idx++, hlock++)
> +             if (!__lock_acquire(hlock->instance,
> +                                 hlock_class(hlock)->subclass,
> +                                 hlock->trylock,
> +                                 hlock->read, hlock->check,
> +                                 hlock->hardirqs_off,
> +                                 hlock->nest_lock, hlock->acquire_ip,
> +                                 hlock->references, hlock->pin_count))
> +                     return 1;
> +     return 0;
> +}

I added the extra { } required by coding style and renamed the function
to reacquire_held_locks().

Plural because it loops over all the held locks, and 'reacquire' because
that is what the loop does. Alternatively, 'rebuild' would also work if
someone really doesn't like 'reacquire'.
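
That is, the helper ends up looking something like this (a sketch with
just those two changes applied; the loop body is unchanged from the
patch quoted above):

static int reacquire_held_locks(struct task_struct *curr, unsigned int depth,
				int idx)
{
	struct held_lock *hlock;

	/*
	 * Redo __lock_acquire() for every held lock from idx up to the
	 * current depth; return nonzero as soon as one of them fails.
	 */
	for (hlock = curr->held_locks + idx; idx < depth; idx++, hlock++) {
		if (!__lock_acquire(hlock->instance,
				    hlock_class(hlock)->subclass,
				    hlock->trylock,
				    hlock->read, hlock->check,
				    hlock->hardirqs_off,
				    hlock->nest_lock, hlock->acquire_ip,
				    hlock->references, hlock->pin_count))
			return 1;
	}
	return 0;
}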
