On Wed, Nov 18, 2020 at 11:56:21PM +0100, Marco Elver wrote:
> On Tue, Nov 17, 2020 at 10:29AM -0800, Paul E. McKenney wrote:
> [...] 
> > But it would be good to get the kcompactd() people to look at this (not
> > immediately seeing who they are in MAINTAINERS).  Perhaps preemption is
> > disabled somehow and I am failing to see it.
> > 
> > Failing that, maybe someone knows of a way to check for overly long
> > timeout handlers.
> 
> I think I figured out one piece of the puzzle. Bisection keeps pointing
> me at some -rcu merge commit, which kept throwing me off. Nor did it
> help that reproduction is a bit flaky. However, I think there are 2
> independent problems, but the manifestation of 1 problem triggers the
> 2nd problem:
> 
> 1. problem: slowed forward progress (workqueue lockup / RCU stall reports)
> 
> 2. problem: DEADLOCK which causes complete system lockup
> 
>       | ...
>       |        CPU0
>       |        ----
>       |   lock(rcu_node_0);
>       |   <Interrupt>
>       |     lock(rcu_node_0);
>       | 
>       |  *** DEADLOCK ***
>       | 
>       | 1 lock held by event_benchmark/105:
>       |  #0: ffffbb6e0b804458 (rcu_node_0){?.-.}-{2:2}, at: print_other_cpu_stall kernel/rcu/tree_stall.h:493 [inline]
>       |  #0: ffffbb6e0b804458 (rcu_node_0){?.-.}-{2:2}, at: check_cpu_stall kernel/rcu/tree_stall.h:652 [inline]
>       |  #0: ffffbb6e0b804458 (rcu_node_0){?.-.}-{2:2}, at: rcu_pending kernel/rcu/tree.c:3752 [inline]
>       |  #0: ffffbb6e0b804458 (rcu_node_0){?.-.}-{2:2}, at: rcu_sched_clock_irq+0x428/0xd40 kernel/rcu/tree.c:2581
>       | ...
> 
> Problem 2 can with reasonable confidence (5 trials) be fixed by reverting:
> 
>       rcu: Don't invoke try_invoke_on_locked_down_task() with irqs disabled
> 
> At which point the system always boots to user space -- albeit with a
> bunch of warnings still (attached). The supposed "good" version doesn't
> end up with all those warnings deterministically, so I couldn't say if
> the warnings are expected due to recent changes or not (Arm64 QEMU
> emulation, 1 CPU, and lots of debugging tools on).
> 
> Does any of that make sense?

Marco, it makes all too much sense!  :-/

Does the patch below help?

                                                        Thanx, Paul

------------------------------------------------------------------------

commit 444ef3bbd0f243b912fdfd51f326704f8ee872bf
Author: Peter Zijlstra <pet...@infradead.org>
Date:   Sat Aug 29 10:22:24 2020 -0700

    sched/core: Allow try_invoke_on_locked_down_task() with irqs disabled
    
    The try_invoke_on_locked_down_task() function currently requires
    that interrupts be enabled, but it is called with interrupts
    disabled from rcu_print_task_stall(), resulting in an "IRQs not
    enabled as expected" diagnostic.  This commit therefore updates
    try_invoke_on_locked_down_task() to use raw_spin_lock_irqsave() instead
    of raw_spin_lock_irq(), thus allowing use from either context.
    
    Link: https://lore.kernel.org/lkml/000000000000903d5805ab908...@google.com/
    Link: https://lore.kernel.org/lkml/20200928075729.gc2...@hirez.programming.kicks-ass.net/
    Reported-by: syzbot+cb3b69ae80afd6535...@syzkaller.appspotmail.com
    Signed-off-by: Peter Zijlstra <pet...@infradead.org>
    Signed-off-by: Paul E. McKenney <paul...@kernel.org>

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index e172f2d..09ef5cf 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2984,7 +2984,7 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 
 /**
  * try_invoke_on_locked_down_task - Invoke a function on task in fixed state
- * @p: Process for which the function is to be invoked.
+ * @p: Process for which the function is to be invoked, can be @current.
  * @func: Function to invoke.
  * @arg: Argument to function.
  *
@@ -3002,12 +3002,11 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
  */
 bool try_invoke_on_locked_down_task(struct task_struct *p, bool (*func)(struct task_struct *t, void *arg), void *arg)
 {
-       bool ret = false;
        struct rq_flags rf;
+       bool ret = false;
        struct rq *rq;
 
-       lockdep_assert_irqs_enabled();
-       raw_spin_lock_irq(&p->pi_lock);
+       raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
        if (p->on_rq) {
                rq = __task_rq_lock(p, &rf);
                if (task_rq(p) == rq)
@@ -3024,7 +3023,7 @@ bool try_invoke_on_locked_down_task(struct task_struct *p, bool (*func)(struct t
                                ret = func(p, arg);
                }
        }
-       raw_spin_unlock_irq(&p->pi_lock);
+       raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
        return ret;
 }
 

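For context, here is a rough sketch of how a caller that may already be
running with interrupts disabled (as the commit message says
rcu_print_task_stall() is) can use the function once this change is applied.
The callback and caller below are hypothetical and exist only for
illustration; only try_invoke_on_locked_down_task() itself comes from the
patch above.

#include <linux/sched.h>
#include <linux/printk.h>

/* Hypothetical callback: runs with @t's pi_lock (and possibly rq lock) held. */
static bool note_still_queued(struct task_struct *t, void *arg)
{
	int *queued = arg;

	*queued = t->on_rq;
	return true;	/* non-false return tells the caller that func() ran */
}

/* Hypothetical caller; may itself be entered with interrupts disabled. */
static void report_task_state(struct task_struct *p)
{
	int queued = 0;

	/*
	 * With irqsave/irqrestore, try_invoke_on_locked_down_task() preserves
	 * the caller's interrupt state instead of unconditionally re-enabling
	 * interrupts on unlock, so this call is safe in either context.
	 */
	if (try_invoke_on_locked_down_task(p, note_still_queued, &queued))
		pr_info("task %d queued=%d\n", p->pid, queued);
}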