When check_asym_packing() is triggered, the imbalance is set to:

  busiest_stat.avg_load * busiest_stat.group_capacity / SCHED_CAPACITY_SCALE

But busiest_stat.avg_load equals:

  sgs->group_load * SCHED_CAPACITY_SCALE / sgs->group_capacity

These divisions can generate a rounding that will make imbalance
slightly lower than the weighted load of the cfs_rq. But this is enough
to skip the rq in find_busiest_queue() and prevents asym migration from
happening.
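For illustration only (not part of the patch), here is a minimal
userspace sketch of the rounding, using made-up values group_load = 300
and group_capacity = 700, and a simplified DIV_ROUND_CLOSEST that only
handles positive dividends:

#include <stdio.h>

#define SCHED_CAPACITY_SCALE	1024UL
#define DIV_ROUND_CLOSEST(x, d)	(((x) + (d) / 2) / (d))

int main(void)
{
	unsigned long group_load = 300;		/* illustrative value */
	unsigned long group_capacity = 700;	/* illustrative value */

	/* avg_load as computed in update_sg_lb_stats() */
	unsigned long avg_load = group_load * SCHED_CAPACITY_SCALE /
				 group_capacity;		/* 438 */

	/* imbalance as computed before this patch */
	unsigned long imbalance = DIV_ROUND_CLOSEST(
			avg_load * group_capacity,
			SCHED_CAPACITY_SCALE);			/* 299 */

	/* imbalance (299) ends up below group_load (300) */
	printf("group_load=%lu imbalance=%lu\n", group_load, imbalance);
	return 0;
}

The two integer divisions each truncate, so the round trip through
avg_load can land just below the original group_load, which is enough
to fail the comparison in find_busiest_queue().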
Directly set imbalance to sgs->group_load to remove the rounding.

Signed-off-by: Vincent Guittot <vincent.guit...@linaro.org>
---
 kernel/sched/fair.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ca46964..9b31247 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8476,9 +8476,7 @@ static int check_asym_packing(struct lb_env *env, struct sd_lb_stats *sds)
 	if (sched_asym_prefer(busiest_cpu, env->dst_cpu))
 		return 0;
 
-	env->imbalance = DIV_ROUND_CLOSEST(
-		sds->busiest_stat.avg_load * sds->busiest_stat.group_capacity,
-		SCHED_CAPACITY_SCALE);
+	env->imbalance = sds->busiest_stat.group_load;
 
 	return 1;
 }
-- 
2.7.4