On 29/08/2019 15:26, Vincent Guittot wrote:
[...]
>> Seeing how much stuff we already do in just computing the stats, do we
>> really save that much by doing this? I'd expect it to be negligible with
>> modern architectures and all of the OoO/voodoo, but maybe I need a
>> refresher course.
>
> We

On Wed, 28 Aug 2019 at 16:19, Valentin Schneider wrote:
>
> On 26/08/2019 11:11, Vincent Guittot wrote:
> >>> + case group_fully_busy:
> >>> + /*
> >>> + * Select the fully busy group with highest avg_load.
> >>> + * In theory, there is no need to pull

On 26/08/2019 11:11, Vincent Guittot wrote:
>>> + case group_fully_busy:
>>> + /*
>>> + * Select the fully busy group with highest avg_load.
>>> + * In theory, there is no need to pull task from such
>>> + * kind of group because tasks have

On 26/08/2019 10:26, Vincent Guittot wrote:
[...]
>>> busiest group.
>>> - calculate_imbalance() decides what have to be moved.
>>
>> That's nothing new, is it? I think what you mean there is that the
>
> There are 2 things:
> - part of the algorithm is new and fixes wrong task placement
>

On Tue, 6 Aug 2019 at 19:17, Valentin Schneider wrote:
>
> Second batch, get it while it's hot...
>
> On 01/08/2019 15:40, Vincent Guittot wrote:
> [...]
> > @@ -7438,19 +7453,53 @@ static int detach_tasks(struct lb_env *env)
> > if (!can_migrate_task(p, env))
> >

On Tue, 6 Aug 2019 at 17:56, Peter Zijlstra wrote:
>
> On Thu, Aug 01, 2019 at 04:40:20PM +0200, Vincent Guittot wrote:
> > The load_balance algorithm contains some heuristics which have becomes
> > meaningless since the rework of metrics and the introduction of PELT.
> >
> > Furthermore, it's

On Mon, 5 Aug 2019 at 19:07, Valentin Schneider wrote:
>
> Hi Vincent,
>
> Here's another batch of comments, still need to go through some more of it.
>
> On 01/08/2019 15:40, Vincent Guittot wrote:
> > The load_balance algorithm contains some heuristics which have becomes
>
> s/becomes/become/

On 06/08/2019 18:17, Valentin Schneider wrote:
>> @@ -8765,7 +8942,7 @@ static int load_balance(int this_cpu, struct rq *this_rq,
>> env.src_rq = busiest;
>>
>> ld_moved = 0;
>> -if (busiest->cfs.h_nr_running > 1) {
>> +if (busiest->nr_running > 1) {
>
> Shouldn't that

Second batch, get it while it's hot...
On 01/08/2019 15:40, Vincent Guittot wrote:
[...]
> @@ -7438,19 +7453,53 @@ static int detach_tasks(struct lb_env *env)
> if (!can_migrate_task(p, env))
> goto next;
>
> - load = task_h_load(p);
> +

On Thu, Aug 01, 2019 at 04:40:20PM +0200, Vincent Guittot wrote:
> The load_balance algorithm contains some heuristics which have becomes
> meaningless since the rework of metrics and the introduction of PELT.
>
> Furthermore, it's sometimes difficult to fix wrong scheduling decisions
> because

Hi Vincent,
Here's another batch of comments, still need to go through some more of it.
On 01/08/2019 15:40, Vincent Guittot wrote:
> The load_balance algorithm contains some heuristics which have becomes
s/becomes/become/
> meaningless since the rework of metrics and the introduction of PELT.

The load_balance algorithm contains some heuristics which have becomes
meaningless since the rework of metrics and the introduction of PELT.
Furthermore, it's sometimes difficult to fix wrong scheduling decisions
because everything is based on load whereas some imbalances are not
related to the