On Tue, 6 Apr 2021 at 06:11, Ricardo Neri
<[email protected]> wrote:
>
> Introduce arch_sched_asym_prefer_early() so that architectures with SMT
> can delay the decision to label a candidate busiest group as
> group_asym_packing.
>
> When using asymmetric packing, high priority idle CPUs pull tasks from
> scheduling groups with low priority CPUs. The decision on using asymmetric
> packing for load balancing is done after collecting the statistics of a
> candidate busiest group. However, this decision needs to consider the
> state of SMT siblings of dst_cpu.
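
As a concrete illustration of the sibling state in question, something
along the lines of the sketch below (helper name and policy are mine,
not part of this patch, and it assumes CONFIG_SCHED_SMT) is the kind of
information a later check would need:

	#include <linux/cpumask.h>
	#include <linux/sched.h>
	#include <linux/topology.h>

	/*
	 * Hypothetical helper, for illustration only: report whether all
	 * SMT siblings of @dst_cpu are idle. Whether pulling tasks to a
	 * higher priority CPU is worthwhile can depend on this, and it is
	 * only known once the busiest-group statistics have been collected.
	 */
	static bool dst_cpu_smt_siblings_idle(int dst_cpu)
	{
		int cpu;

		for_each_cpu(cpu, cpu_smt_mask(dst_cpu)) {
			if (cpu == dst_cpu)
				continue;
			if (!available_idle_cpu(cpu))
				return false;
		}

		return true;
	}
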
>
> Cc: Aubrey Li <[email protected]>
> Cc: Ben Segall <[email protected]>
> Cc: Daniel Bristot de Oliveira <[email protected]>
> Cc: Dietmar Eggemann <[email protected]>
> Cc: Joel Fernandes (Google) <[email protected]>
> Cc: Mel Gorman <[email protected]>
> Cc: Quentin Perret <[email protected]>
> Cc: Srinivas Pandruvada <[email protected]>
> Cc: Steven Rostedt <[email protected]>
> Cc: Tim Chen <[email protected]>
> Reviewed-by: Len Brown <[email protected]>
> Signed-off-by: Ricardo Neri <[email protected]>
> ---
>  include/linux/sched/topology.h |  1 +
>  kernel/sched/fair.c            | 11 ++++++++++-
>  2 files changed, 11 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
> index 8f0f778b7c91..663b98959305 100644
> --- a/include/linux/sched/topology.h
> +++ b/include/linux/sched/topology.h
> @@ -57,6 +57,7 @@ static inline int cpu_numa_flags(void)
>  #endif
>
>  extern int arch_asym_cpu_priority(int cpu);
> +extern bool arch_sched_asym_prefer_early(int a, int b);
>
>  struct sched_domain_attr {
>         int relax_domain_level;
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 4ef3fa0d5e8d..e74da853b046 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -106,6 +106,15 @@ int __weak arch_asym_cpu_priority(int cpu)
>         return -cpu;
>  }
>
> +/*
> + * For asym packing, early check if CPUs with higher priority should be
> + * preferred. On some architectures, more data is needed to make a decision.
> + */
> +bool __weak arch_sched_asym_prefer_early(int a, int b)
> +{
> +       return sched_asym_prefer(a, b);
> +}
> +
>  /*
>   * The margin used when comparing utilization with CPU capacity.
>   *
> @@ -8458,7 +8467,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
>         if (!local_group && env->sd->flags & SD_ASYM_PACKING &&
>             env->idle != CPU_NOT_IDLE &&
>             sgs->sum_h_nr_running &&
> -           sched_asym_prefer(env->dst_cpu, group->asym_prefer_cpu)) {
> +           arch_sched_asym_prefer_early(env->dst_cpu, group->asym_prefer_cpu)) {

If ITMT makes arch_sched_asym_prefer_early() return true, all groups
will be marked as group_asym_packing unconditionally, which is wrong.
The state has to be set only when we actually want asym packing
migration.
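
To make that concrete (a sketch of mine, not taken from the series): if
the ITMT override ends up being effectively

	bool arch_sched_asym_prefer_early(int a, int b)
	{
		return true;	/* always "defer", regardless of priorities */
	}

then the test in the hunk above degenerates to

	if (!local_group && env->sd->flags & SD_ASYM_PACKING &&
	    env->idle != CPU_NOT_IDLE &&
	    sgs->sum_h_nr_running)
		sgs->group_asym_packing = 1;

i.e. in an SD_ASYM_PACKING domain every non-local group with running
tasks is flagged as soon as dst_cpu is idle, even when no asym packing
migration is actually wanted.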

>                 sgs->group_asym_packing = 1;
>         }
>
> --
> 2.17.1
>
