From: Dmitry Vyukov <dvyu...@google.com>
Date: Mon, 16 Jan 2017 22:10:52 +0100

> The current annotation uses a global variable as a recursion counter.
> The variable is neither atomic nor protected by a mutex, yet it is
> mutated by multiple threads. This sporadically causes lockdep bug
> reports:
> 
> BUG: looking up invalid subclass: 4294967295
> ...
> _raw_spin_lock_irqsave_nested+0x120/0x180
> hashbin_delete+0x4fe/0x750
> __irias_delete_object+0xab/0x170
> irias_delete_object+0x5f/0xc0
> ircomm_tty_detach_cable+0x1d5/0x3f0
> ...
> 
> Make the hashbin_lock_depth variable atomic to prevent bug reports.
> 
> As is, this causes an "unused variable 'depth'" warning when LOCKDEP
> is disabled. So also change the raw_spin_lock_irqsave_nested() macro
> to not emit the warning without LOCKDEP, similar to what
> raw_spin_lock_nested() already does.
> 
> Signed-off-by: Dmitry Vyukov <dvyu...@google.com>
> Cc: Dave Jones <da...@redhat.com>
> Cc: Samuel Ortiz <sam...@sortiz.org>
> Cc: David S. Miller <da...@davemloft.net>
> Cc: netdev@vger.kernel.org
> Fixes: c7630a4b932af ("[IrDA]: irda lockdep annotation")
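
For context, the annotation being patched is roughly this pattern
(paraphrased from memory, not the exact net/irda/irqueue.c code):

        static int hashbin_lock_depth = 0;   /* global, unsynchronized */

        if (hashbin->hb_type & HB_LOCK)
                spin_lock_irqsave_nested(&hashbin->hb_lock, flags,
                                         hashbin_lock_depth++);
        ...
        if (hashbin->hb_type & HB_LOCK) {
                spin_unlock_irqrestore(&hashbin->hb_lock, flags);
                hashbin_lock_depth--;
        }

Two CPUs racing on the ++/-- can drive the counter negative, and
(unsigned int)-1 is exactly the 4294967295 subclass in the report
above.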

I took a closer look at this, and really this whole lockdep thing is
nothing more than a hack.

The basic problem is that the free_func() passed into hashbin_delete()
can trigger hashbin operations on other hashtables.
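
To make the recursion concrete, here is a hypothetical free_func
(made-up names, just to illustrate the shape of the problem):

        /* Destroying an entry in one hashbin removes a back-reference
         * from a second hashbin, so a second hb_lock is taken while the
         * first is still held -- same lock class, which is why the
         * subclass annotation exists at all. */
        static void my_free_func(void *arg)
        {
                struct my_entry *entry = arg;

                hashbin_remove(other_hashbin, entry->key, NULL);
                kfree(entry);
        }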

This function is destroying a hash table completely, and is doing so
in a context where no new hash table entries can be inserted.  The
last thing we do is kfree() the hashtable, so this must be true.

Therefore, it is much better to just kill off all of this lockdep
stuff, and recode the teardown loop into something like:

        if (hashbin->hb_type & HB_LOCK)
                spin_lock_irqsave(&hashbin->hb_lock, flags);
        for (i = 0; i < HASHBIN_SIZE; i++) {
                while (1) {
                        queue = dequeue_first((irda_queue_t **) &hashbin->hb_queue[i]);
                        if (!queue)
                                break;
                        if (free_func) {
                                /* Drop the lock around free_func(), which
                                 * may recurse into other hashbin operations.
                                 */
                                if (hashbin->hb_type & HB_LOCK)
                                        spin_unlock_irqrestore(&hashbin->hb_lock, flags);
                                free_func(queue);
                                if (hashbin->hb_type & HB_LOCK)
                                        spin_lock_irqsave(&hashbin->hb_lock, flags);
                        }
                }
        }
        hashbin->hb_current = NULL;
        hashbin->magic = ~HB_MAGIC;
        if (hashbin->hb_type & HB_LOCK)
                spin_unlock_irqrestore(&hashbin->hb_lock, flags);

At which point the recursive locking becomes impossible.
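
Note also that dropping the lock in the middle of the loop is safe
here: dequeue_first() re-reads the queue head on every iteration, so
no cursor is held across the unlocked window, and since nothing can
insert new entries at this point the loop is still guaranteed to
terminate.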
