Hi Peter,

On Wed, Mar 30, 2016 at 11:36:59AM +0200, Peter Zijlstra wrote:
> On Tue, Mar 29, 2016 at 10:47:02AM +0200, Ingo Molnar wrote:
> 
> > > You are right; this is lockdep running into a hash collision; which is a
> > > new DEBUG_LOCKDEP test. See 9e4e7554e755 ("locking/lockdep: Detect
> > > chain_key collisions").
> > 
> > I've Cc:-ed Alfredo Alvarez Fernandez who added that test.
> 
> OK, so while the code in check_no_collision() seems sensible, it relies
> on borken bits.
> 
> The whole chain_hlocks and /proc/lockdep_chains stuff appears to have
> been buggered from the start.
> 
> The below patch should fix this.
> 
> Furthermore, our hash function has definite room for improvement.
> 
> ---
>  include/linux/lockdep.h       |  8 +++++---
>  kernel/locking/lockdep.c      | 30 ++++++++++++++++++++++++------
>  kernel/locking/lockdep_proc.c |  2 ++
>  3 files changed, 31 insertions(+), 9 deletions(-)
> 
> diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
> index d026b190c530..2568c120513b 100644
> --- a/include/linux/lockdep.h
> +++ b/include/linux/lockdep.h
> @@ -196,9 +196,11 @@ struct lock_list {
>   * We record lock dependency chains, so that we can cache them:
>   */
>  struct lock_chain {
> -     u8                              irq_context;
> -     u8                              depth;
> -     u16                             base;
> +     /* see BUILD_BUG_ON()s in lookup_chain_cache() */
> +     unsigned int                    irq_context :  2,
> +                                     depth       :  6,
> +                                     base        : 24;
> +     /* 4 byte hole */
>       struct hlist_node               entry;
>       u64                             chain_key;
>  };
> diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> index 53ab2f85d77e..91a4b7780afb 100644
> --- a/kernel/locking/lockdep.c
> +++ b/kernel/locking/lockdep.c

[...]

> @@ -2860,11 +2882,6 @@ static int separate_irq_context(struct task_struct *curr,
>  {
>       unsigned int depth = curr->lockdep_depth;
>  
> -     /*
> -      * Keep track of points where we cross into an interrupt context:
> -      */
> -     hlock->irq_context = 2*(curr->hardirq_context ? 1 : 0) +
> -                             curr->softirq_context;
>       if (depth) {
>               struct held_lock *prev_hlock;
>  
> @@ -3164,6 +3181,7 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
>       hlock->acquire_ip = ip;
>       hlock->instance = lock;
>       hlock->nest_lock = nest_lock;
> +     hlock->irq_context = 2*(!!curr->hardirq_context) + !!curr->softirq_context;
>       hlock->trylock = trylock;
>       hlock->read = read;
>       hlock->check = check;

This is just a cleanup, right? However, ->hardirq_context and
->softirq_context are only defined when CONFIG_TRACE_IRQFLAGS=y.

So should we use a macro like current_hardirq_context() here? Or
consider the two helpers introduced in my RFC:

http://lkml.kernel.org/g/1455602265-16490-2-git-send-email-boqun.f...@gmail.com

if you don't think that's overkill ;-)

Regards,
Boqun

[...]
