Hi Morten,

On 06/22/2016 10:03 AM, Morten Rasmussen wrote:
[...]
>
> +/*
> + * group_smaller_cpu_capacity: Returns true if sched_group sg has smaller
> + * per-cpu capacity than sched_group ref.
> + */
> +static inline bool
> +group_smaller_cpu_capacity(struct sched_group *sg, struct sched_group *ref)
> +{
> +	return sg->sgc->max_capacity * capacity_margin <
> +				ref->sgc->max_capacity * 1024;
> +}
> +
>  static inline enum
>  group_type group_classify(struct sched_group *group,
>  			  struct sg_lb_stats *sgs)
> @@ -6892,6 +6903,19 @@ static bool update_sd_pick_busiest(struct lb_env *env,
>  	if (sgs->avg_load <= busiest->avg_load)
>  		return false;
>
> +	if (!(env->sd->flags & SD_ASYM_CPUCAPACITY))
> +		goto asym_packing;
> +
> +	/* Candidate sg has no more than one task per cpu and has
> +	 * higher per-cpu capacity. Migrating tasks to less capable
> +	 * cpus may harm throughput. Maximize throughput,
> +	 * power/energy consequences are not considered.
> +	 */
> +	if (sgs->sum_nr_running <= sgs->group_weight &&
> +	    group_smaller_cpu_capacity(sds->local, sg))
> +		return false;
> +
> +asym_packing:

What about the case where IRQ/RT work reduces the capacity of some of
these bigger CPUs? sgc->max_capacity might not necessarily capture that
case (a rough sketch of what I mean is below).

Thanks,
-Sai
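
P.S. To make the concern concrete, here is a rough, untested sketch of the
kind of comparison I have in mind. group_smaller_current_capacity() is just
a made-up name, and I am assuming sgc->capacity still aggregates the
scale_rt_capacity()-adjusted capacities of the group's cpus, so that it
drops when IRQ/RT work eats into a cpu, unlike max_capacity:

/*
 * group_smaller_current_capacity: Returns true if sched_group sg has
 * smaller *currently available* per-cpu capacity than sched_group ref,
 * i.e. the capacity left over after RT/IRQ pressure has been subtracted.
 * Illustrative only; divide by group_weight so groups of different sizes
 * are compared on per-cpu averages.
 */
static inline bool
group_smaller_current_capacity(struct sched_group *sg, struct sched_group *ref)
{
	unsigned long sg_cap = sg->sgc->capacity / sg->group_weight;
	unsigned long ref_cap = ref->sgc->capacity / ref->group_weight;

	return sg_cap * capacity_margin < ref_cap * 1024;
}

Something like this would still prefer the bigger cpus when they are
unloaded, but would stop treating them as bigger once IRQ/RT time has
eaten most of their headroom.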