>> then the above no_node-load_balance thing suffers a small-ish dip at 320
>> tasks, yeah.
> 
> No no, that's not restricted to one node.  It's just overloaded because
> I turned balancing off at the NODE domain level.
> 
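
(For context: regular balancing is gated per sched_domain on the
SD_LOAD_BALANCE flag, so clearing it on the NODE domains switches
balancing off at that level only. A from-memory sketch of the gate in
rebalance_domains(), with approximate names:

	for_each_domain(cpu, sd) {
		/* NODE level is skipped once SD_LOAD_BALANCE is cleared */
		if (!(sd->flags & SD_LOAD_BALANCE))
			continue;

		load_balance(cpu, rq, sd, idle, &should_balance);
	}
)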
>> And AFAICR, the effect of disabling boosting will be visible in the
>> small task-count cases anyway, because if you saturate the cores with
>> tasks, the boosting algorithms tend to take the box out of boost for
>> the simple reason that the power/perf headroom simply disappears due to
>> the SoC being busy.
>>
>>> Tasks   Jobs/Min        JTI     Real    CPU     Jobs/sec/task
>>> 640     100294.8        98      38.7    570.9   2.6118
>>> 1280    115998.2        97      66.9    1132.8  1.5104
>>> 2560    125820.0        97      123.3   2256.6  0.8191
>>
>> I dunno about those. Maybe this is expected with so many tasks, or do
>> we want to optimize that case further?
> 
> When using all 4 nodes properly, that's still scaling.  Here, I

With regular balancing disabled at the node level, only the wake
balancing in select_task_rq_fair is left for the aim7 run (I assume
you used the shared workfile; most of its tests are CPU-bound, with
only a little exec/fork load).

Since wake balancing happens only within the same LLC domain, I guess
that is the reason for this.
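
To illustrate (a condensed pseudo-C sketch, not the actual kernel
code; SD_WAKE_AFFINE and select_idle_sibling() are real, everything
else is simplified), the wake path only ever picks a CPU inside the
target's LLC:

static int sketch_select_task_rq_fair(struct task_struct *p, int prev_cpu)
{
	int cpu = smp_processor_id();
	struct sched_domain *affine_sd = NULL, *sd;

	/* Look for a domain that allows affine wakeups and spans
	 * both the waking CPU and the task's previous CPU. */
	for_each_domain(cpu, sd) {
		if ((sd->flags & SD_WAKE_AFFINE) &&
		    cpumask_test_cpu(prev_cpu, sched_domain_span(sd))) {
			affine_sd = sd;
			break;
		}
	}

	/*
	 * Either way, the final pick is an idle sibling inside the
	 * target CPU's LLC, so a wakeup never crosses node boundaries
	 * on its own; with NODE balancing off, nothing else moves
	 * load between nodes.
	 */
	return select_idle_sibling(p, affine_sd ? cpu : prev_cpu);
}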

> intentionally screwed up balancing to watch the low end.  High end is
> expected wreckage.

