On Fri, Dec 04, 2020 at 06:01:55PM +0100, Thomas Gleixner wrote:
> From: Thomas Gleixner <t...@linutronix.de>
> 
> Provide a local lock based serialization for soft interrupts on RT which
> allows the local_bh_disabled() sections and servicing soft interrupts to be
> preemptible.
> 
> Provide the necessary inline helpers which allow reusing the bulk of
> the softirq processing code.

> +struct softirq_ctrl {
> +     local_lock_t    lock;
> +     int             cnt;
> +};
> +
> +static DEFINE_PER_CPU(struct softirq_ctrl, softirq_ctrl) = {
> +     .lock   = INIT_LOCAL_LOCK(softirq_ctrl.lock),
> +};
> +
> +void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
> +{
> +     unsigned long flags;
> +     int newcnt;
> +
> +     WARN_ON_ONCE(in_hardirq());
> +
> +     /* First entry of a task into a BH disabled section? */
> +     if (!current->softirq_disable_cnt) {
> +             if (preemptible()) {
> +                     local_lock(&softirq_ctrl.lock);

AFAICT this significantly changes the locking rules.

Where previously we could do:

        spin_lock(&ponies);
        spin_lock_bh(&foo);

vs

        spin_lock_bh(&bar);
        spin_lock(&ponies);

provided the rest of the code observes the order: bar -> ponies -> foo,
and never takes ponies from in-softirq.

This is now a genuine deadlock on RT: spin_lock_bh() now takes the
per-CPU softirq_ctrl.lock, so the first path orders ponies ->
softirq_ctrl.lock while the second orders softirq_ctrl.lock -> ponies.
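
To make the inversion explicit, a minimal sketch (ponies, foo and bar
are the made-up locks from the example above, not real kernel symbols):

        static DEFINE_SPINLOCK(ponies);
        static DEFINE_SPINLOCK(foo);
        static DEFINE_SPINLOCK(bar);

        static void path_a(void)
        {
                spin_lock(&ponies);
                /*
                 * On RT this enters __local_bh_disable_ip() and takes
                 * softirq_ctrl.lock, so the effective order is:
                 *   ponies -> softirq_ctrl.lock -> foo
                 */
                spin_lock_bh(&foo);

                spin_unlock_bh(&foo);
                spin_unlock(&ponies);
        }

        static void path_b(void)
        {
                /* softirq_ctrl.lock is taken first here ... */
                spin_lock_bh(&bar);
                /* ... giving softirq_ctrl.lock -> bar -> ponies */
                spin_lock(&ponies);

                spin_unlock(&ponies);
                spin_unlock_bh(&bar);
        }

Both paths are preemptible on RT while holding their locks, so two
tasks on the same CPU can interleave such that each ends up waiting on
the lock the other holds.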

Also see:

  https://lkml.kernel.org/r/x9cheyjuxwc75...@hirez.programming.kicks-ass.net

> +                     /* Required to meet the RCU bottomhalf requirements. */
> +                     rcu_read_lock();
> +             } else {
> +                     DEBUG_LOCKS_WARN_ON(this_cpu_read(softirq_ctrl.cnt));
> +             }
> +     }
> +
> +     /*
> +      * Track the per CPU softirq disabled state. On RT this is per CPU
> +      * state to allow preemption of bottom half disabled sections.
> +      */
> +     newcnt = __this_cpu_add_return(softirq_ctrl.cnt, cnt);
> +     /*
> +      * Reflect the result in the task state to prevent recursion on the
> +      * local lock and to make softirq_count() & al work.
> +      */
> +     current->softirq_disable_cnt = newcnt;
> +
> +     if (IS_ENABLED(CONFIG_TRACE_IRQFLAGS) && newcnt == cnt) {
> +             raw_local_irq_save(flags);
> +             lockdep_softirqs_off(ip);
> +             raw_local_irq_restore(flags);
> +     }
> +}
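
For reference, with this patch spin_lock_bh() on RT boils down to
roughly the following; a simplified sketch, not the literal RT
implementation, and the function name is just for illustration:

        static inline void sketch_spin_lock_bh(spinlock_t *lock)
        {
                /*
                 * The first entry into a BH disabled section takes the
                 * per-CPU softirq_ctrl.lock via __local_bh_disable_ip()
                 * above, so softirq_ctrl.lock nests inside whatever
                 * locks the caller already holds.
                 */
                local_bh_disable();
                spin_lock(lock);
        }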

