dl_add_task_root_domain() is called during sched domain rebuild:

  rebuild_sched_domains_locked()
    partition_and_rebuild_sched_domains()
      rebuild_root_domains()
         for all top_cpuset descendants:
           update_tasks_root_domain()
             for all tasks of cpuset:
               dl_add_task_root_domain()
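
For context, update_tasks_root_domain() essentially walks the cpuset's
tasks via the css task iterator and calls dl_add_task_root_domain() for
each of them, roughly along these lines (sketch, not necessarily the
exact upstream code):

  static void update_tasks_root_domain(struct cpuset *cs)
  {
          struct css_task_iter it;
          struct task_struct *task;

          css_task_iter_start(&cs->css, 0, &it);

          /* dl_add_task_root_domain() is called for every task, DL or not */
          while ((task = css_task_iter_next(&it)))
                  dl_add_task_root_domain(task);

          css_task_iter_end(&it);
  }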

Change it so that only the task's pi_lock is taken to check whether the
task has the SCHED_DEADLINE (DL) policy. If p is a DL task, take the rq
lock as well so that the root domain's DL bandwidth structure can be
dereferenced safely.

Most tasks will have another policy (namely SCHED_NORMAL) and can now
bail out without taking the rq lock.
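
The policy check itself is cheap: dl_task() is just a comparison against
the task's effective priority, roughly as in the in-tree helpers
(sketch, assuming the usual dl_prio()-based definition):

  /*
   * Sketch of the helpers from include/linux/sched/deadline.h;
   * MAX_DL_PRIO is 0, so this is a single compare on p->prio.
   */
  static inline int dl_prio(int prio)
  {
          if (unlikely(prio < MAX_DL_PRIO))
                  return 1;
          return 0;
  }

  static inline int dl_task(struct task_struct *p)
  {
          return dl_prio(p->prio);
  }

So in the common (non-DL) case the whole operation boils down to taking
pi_lock and doing one priority comparison.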

One thing to note here: even if there are no DL user tasks, a slow
frequency switching system with the schedutil cpufreq governor has one
DL task (sugov) running per frequency domain, and these tasks
participate in DL bandwidth management.
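
For reference, schedutil promotes its per-policy worker kthread to
SCHED_DEADLINE via sched_setattr_nocheck() with SCHED_FLAG_SUGOV,
roughly along these lines (sketch based on sugov_kthread_create() in
kernel/sched/cpufreq_schedutil.c; bandwidth fields omitted):

  struct sched_attr attr = {
          .size         = sizeof(struct sched_attr),
          .sched_policy = SCHED_DEADLINE,
          .sched_flags  = SCHED_FLAG_SUGOV,
  };

  /* 'thread' is the per-policy sugov worker kthread */
  ret = sched_setattr_nocheck(thread, &attr);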

Reviewed-by: Quentin Perret <[email protected]>
Signed-off-by: Dietmar Eggemann <[email protected]>
---

The use case in which this makes a noticeable difference is Android's
'CPU pause' power management feature.

It uses CPU hotplug control to clear a CPU's active state in order to
force all threads which are not per-CPU kthreads away from that CPU.

Making DL bandwidth management faster during sched domain rebuild helps
to reduce the time to pause/un-pause a CPU.

 kernel/sched/deadline.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 5421782fe897..c7b1a63a053b 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -2409,9 +2409,13 @@ void dl_add_task_root_domain(struct task_struct *p)
        struct rq *rq;
        struct dl_bw *dl_b;
 
-       rq = task_rq_lock(p, &rf);
-       if (!dl_task(p))
-               goto unlock;
+       raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
+       if (!dl_task(p)) {
+               raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
+               return;
+       }
+
+       rq = __task_rq_lock(p, &rf);
 
        dl_b = &rq->rd->dl_bw;
        raw_spin_lock(&dl_b->lock);
@@ -2420,7 +2424,6 @@ void dl_add_task_root_domain(struct task_struct *p)
 
        raw_spin_unlock(&dl_b->lock);
 
-unlock:
        task_rq_unlock(rq, p, &rf);
 }
 
-- 
2.25.1
