On Sat, Oct 07, 2017 at 05:43:32PM +0000, Levin, Alexander (Sasha Levin) wrote:
> On Sat, Oct 07, 2017 at 11:15:17AM +0200, Peter Zijlstra wrote:
> >On Sat, Oct 07, 2017 at 02:07:26AM +0000, Levin, Alexander (Sasha Levin) 
> >wrote:
> >> And quite a few lines of your added trace (lmk if you need more, or all):
> >
> >Yeah, could you please upload all of it somewhere? That chunk didn't
> >include any hotplug bits at all.
> 
> Attached. It's the stack trace followed by everything else.

Much thanks; here's the clue:

[ 2073.495414] NOHZ: local_softirq_pending 202 -- (TIMER_SOFTIRQ | RCU_SOFTIRQ)

cpuhp/2-22    [002] ....   104.797971: sched_cpu_deactivate: not-active: 2 mask: 0-1,3-4,6-7,9

cpuhp/2-22    [002] ....   104.825166: sched_cpu_deactivate: rcu-sync: 2

migration/2-24    [002] ..s1   104.855994: rebalance_domains: rcu-read-lock: 2
migration/2-24    [002] ..s1   104.856000: load_balance: dst_cpu: 2 cpus: 6
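
Decoding aside: the pending value is printed in hex, so 202 is bits 1
and 9 of the softirq enum, matching the TIMER|RCU annotation above. A
standalone sketch of that decode (bit order per the v4.14-era
include/linux/interrupt.h; an illustration, not kernel code):

#include <stdio.h>

/* softirq names in enum order, HI_SOFTIRQ == bit 0 .. RCU_SOFTIRQ == bit 9 */
static const char * const softirq_names[] = {
	"HI", "TIMER", "NET_TX", "NET_RX", "BLOCK",
	"IRQ_POLL", "TASKLET", "SCHED", "HRTIMER", "RCU",
};

int main(void)
{
	unsigned int pending = 0x202;	/* "local_softirq_pending 202" */
	int i;

	for (i = 0; i < 10; i++)
		if (pending & (1u << i))
			printf("%s_SOFTIRQ\n", softirq_names[i]);

	return 0;	/* prints TIMER_SOFTIRQ and RCU_SOFTIRQ */
}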


So the timer softirq still runs rebalance_domains() on CPU 2 after
sched_cpu_deactivate() has taken it out, which is why dst_cpu is no
longer in the load-balance cpu mask. Proposed fix; could you please
verify?

---
 kernel/sched/fair.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 350dbec01523..35f0168fb609 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8392,6 +8392,13 @@ static int should_we_balance(struct lb_env *env)
        struct sched_group *sg = env->sd->groups;
        int cpu, balance_cpu = -1;
 
+       /*
+        * Ensure the balancing environment is consistent; this can
+        * happen when the softirq triggers 'during' hotplug.
+        */
+       if (!cpumask_test_cpu(env->dst_cpu, env->cpus))
+               return 0;
+
        /*
         * In the newly idle case, we will allow all the cpu's
         * to do the newly idle load balance.
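
For illustration only (not part of the patch): the same bail-out,
modelled in userspace with glibc's cpu_set_t standing in for the
kernel cpumask, fed with the values from the trace above. The names
below mirror the kernel ones but are not kernel API:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

static int should_we_balance(int dst_cpu, cpu_set_t *cpus)
{
	/* dst_cpu raced with hotplug and is gone from the mask: bail */
	if (!CPU_ISSET(dst_cpu, cpus))
		return 0;

	return 1;	/* simplified; the real code elects a balance_cpu */
}

int main(void)
{
	cpu_set_t cpus;

	CPU_ZERO(&cpus);
	CPU_SET(6, &cpus);	/* per the trace: "dst_cpu: 2 cpus: 6" */

	/* prints 0: CPU 2 was deactivated, so we must not balance there */
	printf("%d\n", should_we_balance(2, &cpus));
	return 0;
}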
