On Mon, 26 Oct 2020 16:42:14 +0100
Vincent Guittot <vincent.guit...@linaro.org> wrote:
> On Mon, 26 Oct 2020 at 16:04, Rik van Riel <r...@surriel.com> wrote:

> > Could utilization estimates be off, either lagging or
> > simply having a wrong estimate for a task, resulting
> > in no task getting pulled sometimes, while doing a
> > migrate_task imbalance always moves over something?  
> 
> task and CPU utilization are not always fully in sync and may lag
> a bit, which explains why the LB can sometimes fail to migrate for a
> small diff

OK, running with this little snippet below, I see latencies
improve back to near where they used to be:

Latency percentiles (usec) runtime 150 (s)
        50.0th: 13
        75.0th: 31
        90.0th: 69
        95.0th: 90
        *99.0th: 761
        99.5th: 2268
        99.9th: 9104
        min=1, max=16158

I suspect the right/cleaner approach might be to use
migrate_task more often in the !CPU_NOT_IDLE cases?

Moving a task to an idle CPU immediately, instead of having
the load balancer refuse to move it, improves latencies for
fairly obvious reasons.

I am not entirely clear on why the load balancer should need to
be any more conservative about moving tasks than the wakeup
path is in, e.g., select_idle_sibling.


diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 35bdc0cccfa6..60acf71a2d39 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7415,7 +7415,7 @@ static int detach_tasks(struct lb_env *env)
                case migrate_util:
                        util = task_util_est(p);
 
-                       if (util > env->imbalance)
+                       if (util > env->imbalance && env->idle == CPU_NOT_IDLE)
                                goto next;
 
                        env->imbalance -= util;
