On 22 June 2016 at 19:03, Morten Rasmussen <morten.rasmus...@arm.com> wrote:
> Currently, SD_WAKE_AFFINE always takes priority over wakeup balancing if
> SD_BALANCE_WAKE is set on the sched_domains. For asymmetric
> configurations SD_WAKE_AFFINE is only desirable if the waking task's
> compute demand (utilization) is suitable for all the cpu capacities
> available within the SD_WAKE_AFFINE sched_domain. If not, let wakeup

Instead of "suitable for all the cpu capacities available within the
SD_WAKE_AFFINE sched_domain", shouldn't it be "suitable for the local
cpu and prev cpu", since you only check the capacities of these two
CPUs?

Other than this comment on the commit message, the patch looks good to me.
Acked-by: Vincent Guittot <vincent.guit...@linaro.org>
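
To make the margin check concrete, here is a rough worked example of
the arithmetic in wake_cap() below. The capacity and utilization
numbers are made up (big.LITTLE-like, little = 430, big = 1024), not
taken from the patch:

    min_cap = min(430, 1024) = 430, max_cap = 1024
    max_cap - min_cap = 594 >= (max_cap >> 3) = 128
        -> capacities differ enough, no early bail out

    task with util_avg = 400:
        min_cap * 1024 = 440320 < 400 * capacity_margin = 512000
        -> wake_cap() returns 1, want_affine is cleared

    task with util_avg = 100:
        440320 < 100 * 1280 = 128000 is false
        -> wake_cap() returns 0, wake_affine behaviour is kept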

> balancing take over (find_idlest_{group, cpu}()).
>
> This patch makes affine wake-ups conditional on whether both the waker
> cpu and prev_cpu have sufficient capacity for the waking task.
>
> It is assumed that the sched_group(s) containing the waker cpu and
> prev_cpu only contain cpus with the same capacity (homogeneous).
>
> Ideally, we shouldn't set 'want_affine' in the first place, but we don't
> know if SD_BALANCE_WAKE is enabled on the sched_domain(s) until we start
> traversing them.
>
> cc: Ingo Molnar <mi...@redhat.com>
> cc: Peter Zijlstra <pet...@infradead.org>
>
> Signed-off-by: Morten Rasmussen <morten.rasmus...@arm.com>
> ---
>  kernel/sched/fair.c | 28 +++++++++++++++++++++++++++-
>  1 file changed, 27 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 216db302e87d..dba02c7b57b3 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -114,6 +114,12 @@ unsigned int __read_mostly sysctl_sched_shares_window = 10000000UL;
>  unsigned int sysctl_sched_cfs_bandwidth_slice = 5000UL;
>  #endif
>
> +/*
> + * The margin used when comparing utilization with cpu capacity:
> + * util * 1024 < capacity * margin
> + */
> +unsigned int capacity_margin = 1280; /* ~20% */
> +
>  static inline void update_load_add(struct load_weight *lw, unsigned long inc)
>  {
>         lw->weight += inc;
> @@ -5260,6 +5266,25 @@ static int cpu_util(int cpu)
>         return (util >= capacity) ? capacity : util;
>  }
>
> +static inline int task_util(struct task_struct *p)
> +{
> +       return p->se.avg.util_avg;
> +}
> +
> +static int wake_cap(struct task_struct *p, int cpu, int prev_cpu)
> +{
> +       long min_cap, max_cap;
> +
> +       min_cap = min(capacity_orig_of(prev_cpu), capacity_orig_of(cpu));
> +       max_cap = cpu_rq(cpu)->rd->max_cpu_capacity;
> +
> +       /* Minimum capacity is close to max, no need to abort wake_affine */
> +       if (max_cap - min_cap < max_cap >> 3)
> +               return 0;
> +
> +       return min_cap * 1024 < task_util(p) * capacity_margin;
> +}
> +
>  /*
>   * select_task_rq_fair: Select target runqueue for the waking task in domains
>   * that have the 'sd_flag' flag set. In practice, this is SD_BALANCE_WAKE,
> @@ -5283,7 +5308,8 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
>
>         if (sd_flag & SD_BALANCE_WAKE) {
>                 record_wakee(p);
> -               want_affine = !wake_wide(p) && cpumask_test_cpu(cpu, tsk_cpus_allowed(p));
> +               want_affine = !wake_wide(p) && !wake_cap(p, cpu, prev_cpu)
> +                             && cpumask_test_cpu(cpu, tsk_cpus_allowed(p));
>         }
>
>         rcu_read_lock();
> --
> 1.9.1
>
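
For anyone who wants to sanity-check the margin math outside the
kernel, here is a minimal userspace sketch of the wake_cap() decision.
The capacities and utilization values are hypothetical, not from the
patch:

    /* Toy model of wake_cap(); not kernel code. */
    #include <stdio.h>

    static unsigned int capacity_margin = 1280; /* ~20% */

    static int wake_cap(long util, long cpu_cap, long prev_cap,
                        long max_cap)
    {
            long min_cap = prev_cap < cpu_cap ? prev_cap : cpu_cap;

            /* Minimum capacity is close to max, keep wake_affine */
            if (max_cap - min_cap < max_cap >> 3)
                    return 0;

            return min_cap * 1024 < util * capacity_margin;
    }

    int main(void)
    {
            /* hypothetical little cpu = 430, big cpu = 1024 */
            printf("util 100 -> %d\n", wake_cap(100, 1024, 430, 1024)); /* 0 */
            printf("util 400 -> %d\n", wake_cap(400, 1024, 430, 1024)); /* 1 */
            return 0;
    }

Note that on a symmetric system (min_cap == max_cap) the early return
always fires, so the existing wake_affine behaviour is unchanged there.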
