On 23/10/20 11:12, Peter Zijlstra wrote:
> @@ -7006,15 +7024,20 @@ static bool balance_push(struct rq *rq)
>  	 * Both the cpu-hotplug and stop task are in this case and are
>  	 * required to complete the hotplug process.
>  	 */
> -	if (is_per_cpu_kthread(push_task)) {
> +	if (is_per_cpu_kthread(push_task) || is_migration_disabled(push_task)) {

is_migration_disabled(p) implies rq_has_pinned_tasks(task_rq(p)), right?
So having a "simple"

	if (is_migration_disabled(push_task))
		return;

would help simpletons like me trying to read through this.

>  		/*
>  		 * If this is the idle task on the outgoing CPU try to wake
>  		 * up the hotplug control thread which might wait for the
>  		 * last task to vanish. The rcuwait_active() check is
>  		 * accurate here because the waiter is pinned on this CPU
>  		 * and can't obviously be running in parallel.
> +		 *
> +		 * On RT kernels this also has to check whether there are
> +		 * pinned and scheduled out tasks on the runqueue. They
> +		 * need to leave the migrate disabled section first.
>  		 */
> -		if (!rq->nr_running && rcuwait_active(&rq->hotplug_wait)) {
> +		if (!rq->nr_running && !rq_has_pinned_tasks(rq) &&
> +		    rcuwait_active(&rq->hotplug_wait)) {
>  			raw_spin_unlock(&rq->lock);
>  			rcuwait_wake_up(&rq->hotplug_wait);
>  			raw_spin_lock(&rq->lock);