On 1 April 2015 at 16:54, Xunlei Pang <pang.xun...@linaro.org> wrote:
> Hi Vincent,
>
> On 1 April 2015 at 17:06, Vincent Guittot <vincent.guit...@linaro.org> wrote:
>> On 1 April 2015 at 05:37, Xunlei Pang <pang.xun...@linaro.org> wrote:
>>> Hi Vincent,
>>>
>>> On 27 March 2015 at 23:59, Vincent Guittot <vincent.guit...@linaro.org> wrote:
>>>> On 27 March 2015 at 15:52, Xunlei Pang <pang.xun...@linaro.org> wrote:
>>>>> Hi Vincent,
>>>>>
>>>>> On 27 February 2015 at 23:54, Vincent Guittot <vincent.guit...@linaro.org> wrote:
>>>>>>  /**
>>>>>> @@ -6432,18 +6435,19 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
>>>>>>
>>>>>>                 /*
>>>>>>                  * In case the child domain prefers tasks go to siblings
>>>>>> -                * first, lower the sg capacity factor to one so that we'll try
>>>>>> +                * first, lower the sg capacity so that we'll try
>>>>>>                  * and move all the excess tasks away. We lower the capacity
>>>>>>                  * of a group only if the local group has the capacity to fit
>>>>>> -                * these excess tasks, i.e. nr_running < group_capacity_factor. The
>>>>>> -                * extra check prevents the case where you always pull from the
>>>>>> -                * heaviest group when it is already under-utilized (possible
>>>>>> -                * with a large weight task outweighs the tasks on the system).
>>>>>> +                * these excess tasks. The extra check prevents the case where
>>>>>> +                * you always pull from the heaviest group when it is already
>>>>>> +                * under-utilized (possible with a large weight task outweighs
>>>>>> +                * the tasks on the system).
>>>>>>                  */
>>>>>>                 if (prefer_sibling && sds->local &&
>>>>>> -                   sds->local_stat.group_has_free_capacity) {
>>>>>> -                       sgs->group_capacity_factor = min(sgs->group_capacity_factor, 1U);
>>>>>> -                       sgs->group_type = group_classify(sg, sgs);
>>>>>> +                   group_has_capacity(env, &sds->local_stat) &&
>>>>>> +                   (sgs->sum_nr_running > 1)) {
>>>>>> +                       sgs->group_no_capacity = 1;
>>>>>> +                       sgs->group_type = group_overloaded;
>>>>>>                 }
>>>>>>
>>>>>
>>>>> For SD_PREFER_SIBLING, if local has 1 task and group_has_capacity()
>>>>> returns true (but not overloaded) for it, and assume the sgs group has
>>>>> 2 tasks, should we still mark this group overloaded?
>>>>
>>>> Yes, the load balancer will then choose whether it's worth pulling or
>>>> not, depending on the load of each group.
>>>
>>> Maybe I didn't make it clear.
>>> For example, CPU0~1 are SMT siblings, and CPU2~CPU3 are another pair.
>>> CPU0 is idle, the others each have 1 task. Then according to this patch,
>>> CPU2~CPU3 (as one group) will be viewed as overloaded (with CPU0~CPU1 as
>>> the local group, group_has_capacity() returns true here), so the
>>> balancer may initiate an active task move. This differs from what the
>>> current SD_PREFER_SIBLING logic does. Is this problematic?
>>
>> IMHO, it's not problematic. It's worth triggering a load balance if
>> there is an imbalance between the 2 groups (as an example, CPU0~1 has
>> one low nice prio task but CPU2~3 have 2 high nice prio tasks), so the
>> decision will be made when calculating the imbalance.
>
> Yes, but assuming the balancer calculates some imbalance, after moving
> we'd end up with something like CPU0~CPU1 having 1 low prio task and 1
> high prio task, and CPU2~CPU3 having 1 high prio task; that seems to do
> no good because there's only 1 task per CPU after all.
In this condition I agree that it isn't worth moving a task, and the
scheduler should not do so, because the imbalance will be too small to
find a busiest queue. So the decision is tied more to the weighted load
of the tasks than to the number of tasks.

>
> So, is the code below better? (We may have more than 2 SMT siblings;
> e.g., the Broadcom XLP processor has 4 SMT threads per core.)
>
>                if (prefer_sibling && sds->local &&
>                    group_has_capacity(env, &sds->local_stat) &&
>                    (sgs->sum_nr_running > sds->local_stat.sum_nr_running + 1)) {
>                        sgs->group_no_capacity = 1;
>                        sgs->group_type = group_overloaded;
>                }

I would say no, because it mainly depends on the weighted load of the
tasks, so calculate_imbalance() is the right place.

Vincent

>
> Thanks,
> -Xunlei
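
To make the difference between the two checks concrete, here is a small
user-space sketch (plain C, not kernel code): the struct and the helper
names overloaded_patch()/overloaded_proposed() are simplified stand-ins
invented for illustration, modeling the patch's check and Xunlei's
proposed variant against the SMT scenario discussed above (CPU0 idle,
CPU1~CPU3 each running one task).

        #include <stdbool.h>
        #include <stdio.h>

        /* Simplified stand-in for the kernel's sg_lb_stats. */
        struct sg_stats {
                unsigned int sum_nr_running;    /* tasks running in the group */
        };

        /* Patch variant: group is overloaded once it runs more than 1 task. */
        static bool overloaded_patch(const struct sg_stats *sgs,
                                     const struct sg_stats *local,
                                     bool local_has_capacity)
        {
                (void)local;
                return local_has_capacity && sgs->sum_nr_running > 1;
        }

        /*
         * Proposed variant: group is overloaded only if it runs more than
         * one task beyond what the local group runs.
         */
        static bool overloaded_proposed(const struct sg_stats *sgs,
                                        const struct sg_stats *local,
                                        bool local_has_capacity)
        {
                return local_has_capacity &&
                       sgs->sum_nr_running > local->sum_nr_running + 1;
        }

        int main(void)
        {
                /* CPU0 idle, CPU1 busy: the local group runs 1 task. */
                struct sg_stats local = { .sum_nr_running = 1 };
                /* CPU2 and CPU3 busy: the other group runs 2 tasks. */
                struct sg_stats other = { .sum_nr_running = 2 };

                printf("patch variant   : overloaded = %d\n",
                       overloaded_patch(&other, &local, true));    /* prints 1 */
                printf("proposed variant: overloaded = %d\n",
                       overloaded_proposed(&other, &local, true)); /* prints 0 */
                return 0;
        }

As Vincent's replies indicate, though, marking the group overloaded only
makes the balancer consider it; whether a task is actually pulled is
decided later from the weighted load, in calculate_imbalance().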