On Wed, Jun 22, 2016 at 06:03:18PM +0100, Morten Rasmussen wrote:
> Currently, SD_WAKE_AFFINE always takes priority over wakeup balancing if
> SD_BALANCE_WAKE is set on the sched_domains. For asymmetric
> configurations SD_WAKE_AFFINE is only desirable if the waking task's
> compute demand (utilization) is suitable for all the cpu capacities
> available within the SD_WAKE_AFFINE sched_domain. If not, let wakeup
> balancing take over (find_idlest_{group, cpu}()).
I think I tripped over this one the last time around, and I'm not sure
this Changelog is any clearer. This is about the case where the waking
cpu and prev_cpu are both in the 'wrong' cluster, right?

> This patch makes affine wake-ups conditional on whether both the waker
> cpu and prev_cpu have sufficient capacity for the waking task, or not.
>
> It is assumed that the sched_group(s) containing the waker cpu and
> prev_cpu only contain cpus with the same capacity (homogeneous).
>
> Ideally, we shouldn't set 'want_affine' in the first place, but we don't
> know if SD_BALANCE_WAKE is enabled on the sched_domain(s) until we start
> traversing them.

Is this again more fallout from that weird ASYM_CAP thing?

> +static int wake_cap(struct task_struct *p, int cpu, int prev_cpu)
> +{
> +	long min_cap, max_cap;
> +
> +	min_cap = min(capacity_orig_of(prev_cpu), capacity_orig_of(cpu));
> +	max_cap = cpu_rq(cpu)->rd->max_cpu_capacity;
> +
> +	/* Minimum capacity is close to max, no need to abort wake_affine */
> +	if (max_cap - min_cap < max_cap >> 3)
> +		return 0;
> +
> +	return min_cap * 1024 < task_util(p) * capacity_margin;
> +}

I'm most puzzled by these inequalities: how, why?

I would figure you'd compare task_util to the current remaining util of
the small group, and if it fits, place it there. This seems to do
something entirely different.
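For reference, here is the arithmetic those two inequalities encode, as
a standalone userspace sketch (not the kernel code; it assumes
capacity_margin = 1280, the ~1.25 fudge factor used elsewhere in
fair.c, SCHED_CAPACITY_SCALE = 1024, and made-up big.LITTLE capacity
numbers):

#include <stdio.h>

#define SCHED_CAPACITY_SCALE	1024
#define capacity_margin		1280	/* ~1.25x: task must fit with ~20% headroom */

static int wake_cap_check(long task_util, long min_cap, long max_cap)
{
	/*
	 * Capacities within 12.5% of each other: treat the system as
	 * symmetric and never abort wake_affine.
	 */
	if (max_cap - min_cap < max_cap >> 3)
		return 0;

	/*
	 * task_util * 1280 > min_cap * 1024 is task_util > 0.8 * min_cap:
	 * the task does not comfortably fit on the smaller of the two
	 * cpus, so fall back to find_idlest_{group, cpu}().
	 */
	return min_cap * SCHED_CAPACITY_SCALE < task_util * capacity_margin;
}

int main(void)
{
	/* e.g. little cpu capacity 430, big cpu 1024; 0.8 * 430 = 344 */
	printf("%d\n", wake_cap_check(400, 430, 1024)); /* 1: skip wake_affine */
	printf("%d\n", wake_cap_check(300, 430, 1024)); /* 0: wake_affine OK */
	return 0;
}

So it is a static "does the task's utilization fit on the smaller cpu
at all" test against capacity_orig, not a test against the capacity
currently left over on that cpu.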