On 07/01/21 11:33, Vincent Guittot wrote:
> Active balance is triggered for a number of voluntary cases like misfit
> or pinned tasks cases but also after that a number of load balance
> attempts failed to migrate a task. There is no need to use active load
> balance when the group is overloaded because an overloaded state means
> that there is at least one waiting task. Nevertheless, the waiting task
> is not selected and detached until the threshold becomes higher than its
> load. This threshold increases with the number of failed lb (see the
> condition if ((load >> env->sd->nr_balance_failed) > env->imbalance) in
> detach_tasks()) and the waiting task will end up to be selected after a
> number of attempts.
>
> Signed-off-by: Vincent Guittot <vincent.guit...@linaro.org>
> ---
>  kernel/sched/fair.c | 45 +++++++++++++++++++++++----------------------
>  1 file changed, 23 insertions(+), 22 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index a3515dea1afc..00ec5b901188 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -9499,13 +9499,32 @@ asym_active_balance(struct lb_env *env)
>  }
>
>  static inline bool
> -voluntary_active_balance(struct lb_env *env)
> +imbalanced_active_balance(struct lb_env *env)
> +{
> +	struct sched_domain *sd = env->sd;
> +
> +	/*
> +	 * The imbalanced case includes the case of pinned tasks preventing a fair
> +	 * distribution of the load on the system but also the even distribution of the
> +	 * threads on a system with spare capacity
> +	 */
Do you mean s/imbalanced/migrate_task/? This part here will affect
group_imbalanced, group_asym_packing, and some others.

> +	if ((env->migration_type == migrate_task) &&
> +	    (sd->nr_balance_failed > sd->cache_nice_tries+2))
> +		return 1;
> +
> +	return 0;
> +}
> +