On 2021-03-17 15:38:52 [+0100], Thomas Gleixner wrote:
> With interrupt force threading all device interrupt handlers are invoked
> from kernel threads. Contrary to hard interrupt context the invocation only
> disables bottom halfs, but not interrupts. This was an oversight back then
> because any code like this will have an issue:
> 
> thread(irq_A)
>   irq_handler(A)
>     spin_lock(&foo->lock);
> 
> interrupt(irq_B)
>   irq_handler(B)
>     spin_lock(&foo->lock);

It will not deadlock in that exact scenario, because with force threading both
handlers run as threads and the second interrupt merely does a wake_up() of its
thread. It is an issue if &foo->lock is shared between
- a hrtimer and a threaded IRQ
- a non-threaded IRQ and a threaded IRQ
- a printk() in hardirq context and a threaded IRQ, as I learned today.
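
To illustrate the hrtimer case with a sketch in the same style as the trace
above (foo->lock stands for any lock shared between the two contexts):

  thread(irq_A)
    irq_handler(A)
      spin_lock(&foo->lock);

  hrtimer_interrupt()           <- hard interrupt context, interrupts are
    timer_fn()                     enabled in the forced thread, so this
      spin_lock(&foo->lock);    <- can preempt the holder on the same CPU
                                   and spin forever

The hrtimer callback runs in genuine hard interrupt context, so it can
interrupt the force-threaded handler while it holds the lock, and the CPU
then spins on a lock that can never be released.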

> This has been triggered with networking (NAPI vs. hrtimers) and console
> drivers where printk() happens from an interrupt which interrupted the
> force threaded handler.
> 
> Now people noticed and started to change the spin_lock() in the handler to
> spin_lock_irqsave() which affects performance or add IRQF_NOTHREAD to the
> interrupt request which in turn breaks RT.
> 
> Fix the root cause and not the symptom and disable interrupts before
> invoking the force threaded handler which preserves the regular semantics
> and the usefulness of the interrupt force threading as a general debugging
> tool.
> 
> For not RT this is not changing much, except that during the execution of
> the threaded handler interrupts are delayed until the handler
> returns. Vs. scheduling and softirq processing there is no difference.
> 
> For RT kernels there is no issue.

Acked-by: Sebastian Andrzej Siewior <[email protected]>

> Fixes: 8d32a307e4fa ("genirq: Provide forced interrupt threading")
> Reported-by: Johan Hovold <[email protected]>
> Signed-off-by: Thomas Gleixner <[email protected]>
> Cc: Eric Dumazet <[email protected]>
> Cc: Sebastian Andrzej Siewior <[email protected]>
> Cc: netdev <[email protected]>
> Cc: "David S. Miller" <[email protected]>
> Cc: Krzysztof Kozlowski <[email protected]>
> Cc: Greg Kroah-Hartman <[email protected]>
> Cc: Andy Shevchenko <[email protected]>
> CC: Peter Zijlstra <[email protected]>
> Cc: [email protected]
> ---
>  kernel/irq/manage.c |    4 ++++
>  1 file changed, 4 insertions(+)
> 
> --- a/kernel/irq/manage.c
> +++ b/kernel/irq/manage.c
> @@ -1142,11 +1142,15 @@ irq_forced_thread_fn(struct irq_desc *de
>       irqreturn_t ret;
>  
>       local_bh_disable();
> +     if (!IS_ENABLED(CONFIG_PREEMPT_RT))
> +             local_irq_disable();
>       ret = action->thread_fn(action->irq, action->dev_id);
>       if (ret == IRQ_HANDLED)
>               atomic_inc(&desc->threads_handled);
>  
>       irq_finalize_oneshot(desc, action);
> +     if (!IS_ENABLED(CONFIG_PREEMPT_RT))
> +             local_irq_enable();
>       local_bh_enable();
>       return ret;
>  }

Sebastian
