On Mon, May 04, 2020 at 01:40:42PM -0400, Steven Rostedt wrote:
> Seems that your patch caused a lockdep splat on my box:
> 
>  ========================================================
>  WARNING: possible irq lock inversion dependency detected
>  5.7.0-rc3-test+ #249 Not tainted
>  --------------------------------------------------------
>  swapper/4/0 just changed the state of lock:
>  ffff9a580fdd75a0 (&ndev->lock){++.-}-{2:2}, at: mld_ifc_timer_expire+0x3c/0x350
>  but this lock took another, SOFTIRQ-unsafe lock in the past:
>   (pgd_lock){+.+.}-{2:2}
>  
>  
>  and interrupts could create inverse lock ordering between them.
>  
>  
>  other info that might help us debug this:
>   Possible interrupt unsafe locking scenario:
>  
>         CPU0                    CPU1
>         ----                    ----
>    lock(pgd_lock);
>                                 local_irq_disable();
>                                 lock(&ndev->lock);
>                                 lock(pgd_lock);
>    <Interrupt>
>      lock(&ndev->lock);
>  
>   *** DEADLOCK ***

Fair point, but this just shows how problematic it is to call something
like vmalloc_sync_mappings() from a low-level kernel API function. The
obvious fix would be to make pgd_lock irq-safe, but that is getting
more and more ridiculous.
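(To spell out what "irq-safe" would mean here: every plain
spin_lock(&pgd_lock) would have to become the irqsave variant, roughly
like below. This is only an illustration of the rejected approach, not
a patch:

	unsigned long flags;

	/* was: spin_lock(&pgd_lock) */
	spin_lock_irqsave(&pgd_lock, flags);
	/* ... walk pgd_list and sync the entries ... */
	spin_unlock_irqrestore(&pgd_lock, flags);

That would break the inversion lockdep complains about, but at the
cost of disabling interrupts across every pgd_lock section for the
sake of one broken caller.)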

I know you don't like to have a vmalloc_sync_mappings() call in the
tracing code, but can you live with it until we get rid of this broken
interface?

My plan for this is to use a small bitmap to track, in the vmalloc and
the (x86-)ioremap code, at which page-table levels changes were made,
and combine that with an architecture-dependent mask to decide whether
anything needs to be synced.
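Roughly like the following sketch. None of these names exist today,
they are made up here to illustrate the idea:

	/*
	 * One bit per page-table level populated since the last sync;
	 * set by the vmalloc/ioremap code when installing a new entry.
	 */
	#define PGTBL_PTE_MODIFIED	(1UL << 0)
	#define PGTBL_PMD_MODIFIED	(1UL << 1)
	#define PGTBL_PUD_MODIFIED	(1UL << 2)
	#define PGTBL_P4D_MODIFIED	(1UL << 3)
	#define PGTBL_PGD_MODIFIED	(1UL << 4)

	static unsigned long pgtbl_mod_mask;

	/*
	 * Per-architecture: the levels which are not shared between
	 * page-tables and thus need explicit syncing.  On x86-64 with
	 * 4-level paging only new PGD entries would qualify.
	 */
	#define ARCH_PGTBL_SYNC_MASK	PGTBL_PGD_MODIFIED

	static void sync_kernel_mappings_if_needed(void)
	{
		if (pgtbl_mod_mask & ARCH_PGTBL_SYNC_MASK) {
			vmalloc_sync_mappings();
			pgtbl_mod_mask = 0;
		}
	}

In the common case that is a single test-and-branch, and the sync
itself can only ever trigger when a new top-level entry appeared.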

On x86-64 the sync would be necessary at most 64 times after boot,
because the vmalloc/ioremap range is covered by 64 PGD entries and each
of them needs to be synced only once. So I think this will have only a
very small performance impact, even with CONFIG_VMAP_STACK. And as a
bonus it would also get rid of vmalloc faulting on x86, fixing the
issue with tracing too.

Regards,

        Joerg
