On 12/03/2018 07:28 PM, Bart Van Assche wrote:
> Cc: Peter Zijlstra <[email protected]>
> Cc: Waiman Long <[email protected]>
> Cc: Johannes Berg <[email protected]>
> Signed-off-by: Bart Van Assche <[email protected]>
> ---
>  kernel/locking/lockdep.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
>
> diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> index c936fce5b9d7..b4772e5fc176 100644
> --- a/kernel/locking/lockdep.c
> +++ b/kernel/locking/lockdep.c
> @@ -727,6 +727,15 @@ static bool assign_lock_key(struct lockdep_map *lock)
>  {
>       unsigned long can_addr, addr = (unsigned long)lock;
>  
> +     /*
> +      * lockdep_free_key_range() assumes that struct lock_class_key
> +      * objects do not overlap. Since we use the address of lock
> +      * objects as class keys for static objects, verify that the
> +      * size of a lock_class_key object does not exceed the size of
> +      * the smallest lock object.
> +      */
> +     BUILD_BUG_ON(sizeof(struct lock_class_key) > sizeof(raw_spinlock_t));
> +
>       if (__is_kernel_percpu_address(addr, &can_addr))
>               lock->key = (void *)can_addr;
>       else if (__is_module_percpu_address(addr, &can_addr))

I don't understand what this check is for. A lock_class_key and a
spinlock are different objects; their relative sizes shouldn't matter.
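
If the intent is that, for a static lock, the lock's own address doubles
as its key, so that the key conceptually occupies
sizeof(struct lock_class_key) bytes starting at the lock address, then
the failure mode would look roughly like this user-space sketch (the
stand-in types and their sizes are mine, purely for illustration):

#include <stdio.h>

/* Stand-ins for the kernel types, sized for illustration only. */
struct raw_spinlock { unsigned int val; };             /* 4 bytes on most ABIs */
struct lock_class_key { unsigned long subkeys[2]; };   /* 16 bytes on LP64     */

struct two_locks {
	struct raw_spinlock a;	/* key for 'a' would be &a */
	struct raw_spinlock b;	/* key for 'b' would be &b */
};

int main(void)
{
	struct two_locks t;
	unsigned long a = (unsigned long)&t.a;
	unsigned long b = (unsigned long)&t.b;

	/*
	 * If each key is treated as a sizeof(struct lock_class_key)-byte
	 * object starting at the lock address, adjacent locks overlap
	 * whenever the key type is larger than the lock type.
	 */
	if (a + sizeof(struct lock_class_key) > b)
		printf("key for 'a' overlaps lock 'b'\n");
	return 0;
}

If that is the scenario the comment is guarding against, spelling it
out would help; as written, the connection between the two type sizes
is not obvious.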

Cheers,
Longman
