On Wed, 31 Jul 2019 at 15:44, Srikar Dronamraju wrote:
>
> * Vincent Guittot [2019-07-26 16:42:53]:
>
> > On Fri, 26 Jul 2019 at 15:59, Srikar Dronamraju wrote:
> > > > @@ -7361,19 +7357,46 @@ static int detach_tasks(struct lb_env *env)
> > > > if (!can_migrate_task(p, env))
>
* Vincent Guittot [2019-07-26 16:42:53]:
> On Fri, 26 Jul 2019 at 15:59, Srikar Dronamraju wrote:
> > > @@ -7361,19 +7357,46 @@ static int detach_tasks(struct lb_env *env)
> > > if (!can_migrate_task(p, env))
> > > goto next;
> > >
> > > - load =
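The hunk being discussed here drops the unconditional load-based accounting from
detach_tasks(). As orientation only, a rough sketch of the direction this takes,
modelled on how the rework eventually landed upstream; the selector
env->migration_type and the migrate_* values are names from that later code and
may not match this exact posting:

        /* Inside the detach_tasks() loop, once can_migrate_task() has passed: */
        switch (env->migration_type) {
        case migrate_load:
                /* Classic behaviour: account the task's load against the imbalance. */
                load = task_h_load(p);
                if (load > env->imbalance)
                        goto next;
                env->imbalance -= load;
                break;
        case migrate_util:
                /* Balance utilization rather than load. */
                util = task_util_est(p);
                if (util > env->imbalance)
                        goto next;
                env->imbalance -= util;
                break;
        case migrate_task:
                /* Simply move a number of tasks. */
                env->imbalance--;
                break;
        case migrate_misfit:
                /* Only pull a task that does not fit the source CPU. */
                if (task_fits_capacity(p, capacity_of(env->src_cpu)))
                        goto next;
                env->imbalance = 0;
                break;
        }

Whatever the exact form in this revision, the point of the exchange is that
load stops being the only currency detach_tasks() reasons in.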
On 26/07/2019 15:47, Vincent Guittot wrote:
[...]
>> If CPU0 runs the load balancer, balancing utilization would mean pulling
>> 2 tasks from CPU1 to reach the domain-average of 40%. The good side of this
>> is that we could save ourselves from running some newidle balances, but
>> I'll admit that'
On Fri, 26 Jul 2019 at 16:01, Valentin Schneider wrote:
>
> On 26/07/2019 13:30, Vincent Guittot wrote:
> >> We can avoid this entirely by going straight for an active balance when
> >> we are balancing misfit tasks (which we really should be doing TBH).
> >
> > but your misfit task might not be t
On Fri, 26 Jul 2019 at 15:59, Srikar Dronamraju wrote:
>
> >
> > The type of sched_group has been extended to better reflect the type of
> > imbalance. We now have :
> > group_has_spare
> > group_fully_busy
> > group_misfit_task
> > group_asym_capacity
> > group_imbal
On 26/07/2019 14:58, Srikar Dronamraju wrote:
[...]
>> @@ -8357,72 +8318,115 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
>> if (busiest->group_type == group_imbalanced) {
>> /*
>> * In the group_imb case we cannot rely on g
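For readers jumping into the middle of the thread: the reworked
calculate_imbalance() dispatches on the busiest group's classification instead
of always reasoning in load. A condensed sketch of that shape, simplified from
how the rework reads once merged upstream (the exact computations in this v2
posting may differ):

        if (busiest->group_type == group_misfit_task) {
                /* One misfit task has to move to a more capable CPU. */
                env->migration_type = migrate_misfit;
                env->imbalance = 1;
                return;
        }

        if (busiest->group_type == group_imbalanced) {
                /*
                 * Group-wide averages are meaningless here (pinned tasks);
                 * just move one task and let the flag be re-evaluated.
                 */
                env->migration_type = migrate_task;
                env->imbalance = 1;
                return;
        }

        if (local->group_type == group_has_spare) {
                /* The local group has idle CPUs: move tasks, not load. */
                env->migration_type = migrate_task;
                env->imbalance = max_t(long, 0,
                                (local->idle_cpus - busiest->idle_cpus) >> 1);
                return;
        }

        /*
         * Only when both groups are overloaded does the code fall back to
         * avg_load, i.e. load scaled by group capacity (the real code also
         * caps this against the local group's headroom).
         */
        env->migration_type = migrate_load;
        env->imbalance = (busiest->avg_load - sds->avg_load) *
                         busiest->group_capacity / SCHED_CAPACITY_SCALE;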
On 26/07/2019 13:30, Vincent Guittot wrote:
>> We can avoid this entirely by going straight for an active balance when
>> we are balancing misfit tasks (which we really should be doing TBH).
>
> but your misfit task might not be the running one anymore when
> load_balance effectively happens
>
W
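Some context on this exchange: when the busiest group is classified as
group_misfit_task, the pre-rework code already let need_active_balance()
accept an active balance unconditionally, roughly (from memory of ~v5.2, so a
sketch rather than a quote):

        /* In voluntary_active_balance(), called from need_active_balance(): */
        if (env->src_grp_type == group_misfit_task)
                return 1;

Vincent's objection is that the active-balance path migrates whatever happens
to be running (or otherwise pushable) on the source CPU by the time the CPU
stopper runs, so the task actually moved may no longer be the misfit one.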
>
> The type of sched_group has been extended to better reflect the type of
> imbalance. We now have :
> group_has_spare
> group_fully_busy
> group_misfit_task
> group_asym_capacity
> group_imbalanced
> group_overloaded
How is group_fully_busy different from gr
On Fri, 26 Jul 2019 at 12:41, Valentin Schneider wrote:
>
> On 26/07/2019 10:01, Vincent Guittot wrote:
> >> Huh, interesting. Why go for utilization?
> >
> > Mainly because that's what is used to detect a misfit task and not the load
> >
> >>
> >> Right now we store the load of the task and use i
On 26/07/2019 10:01, Vincent Guittot wrote:
>> Huh, interesting. Why go for utilization?
>
> Mainly because that's what is used to detect a misfit task and not the load
>
>>
>> Right now we store the load of the task and use it to pick the "biggest"
>> misfit (in terms of load) when there are mor
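The asymmetry being discussed: a task is flagged as misfit based on its
utilization, but what gets stored, and later compared to pick the "biggest"
misfit, is a load. From memory, the ~v5.2 code looked roughly like this
(capacity_margin was 1280, i.e. an ~80% fit threshold), so treat it as a
sketch, not a quote:

static inline int task_fits_capacity(struct task_struct *p, long capacity)
{
        /* Utilization-based: does the task fit within ~80% of this CPU? */
        return capacity * 1024 > task_util_est(p) * capacity_margin;
}

static inline void update_misfit_status(struct task_struct *p, struct rq *rq)
{
        if (!static_branch_unlikely(&sched_asym_cpucapacity))
                return;

        if (!p || task_fits_capacity(p, capacity_of(cpu_of(rq)))) {
                rq->misfit_task_load = 0;
                return;
        }

        /* ...yet what load_balance later sees and compares is a load. */
        rq->misfit_task_load = task_h_load(p);
}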
On Thu, 25 Jul 2019 at 19:17, Valentin Schneider wrote:
>
> Hi Vincent,
>
> first batch of questions/comments here...
>
> On 19/07/2019 08:58, Vincent Guittot wrote:
> [...]
> > kernel/sched/fair.c | 539
> >
> > 1 file changed, 289 insertions
Hi Vincent,
first batch of questions/comments here...
On 19/07/2019 08:58, Vincent Guittot wrote:
[...]
> kernel/sched/fair.c | 539
>
> 1 file changed, 289 insertions(+), 250 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/
On Fri, Jul 19, 2019 at 04:02:15PM +0200, Vincent Guittot wrote:
> On Fri, 19 Jul 2019 at 14:54, Peter Zijlstra wrote:
> >
> > On Fri, Jul 19, 2019 at 09:58:23AM +0200, Vincent Guittot wrote:
> > > -void fix_small_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
> > Maybe strip this out fi
On Fri, 19 Jul 2019 at 15:12, Peter Zijlstra wrote:
>
> On Fri, Jul 19, 2019 at 09:58:23AM +0200, Vincent Guittot wrote:
>
> > @@ -8029,17 +8063,24 @@ static inline void update_sg_lb_stats(struct lb_env *env,
> > }
> > }
> >
> > - /* Adjust by relative CPU capacity of
On Fri, 19 Jul 2019 at 14:54, Peter Zijlstra wrote:
>
> On Fri, Jul 19, 2019 at 09:58:23AM +0200, Vincent Guittot wrote:
>
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 67f0acd..472959df 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -5376,18 +5376
On Fri, 19 Jul 2019 at 15:06, Peter Zijlstra wrote:
>
> On Fri, Jul 19, 2019 at 09:58:23AM +0200, Vincent Guittot wrote:
> > @@ -7887,7 +7908,7 @@ static inline int sg_imbalanced(struct sched_group *group)
> > static inline bool
> > group_has_capacity(struct lb_env *env, struct sg_lb_stats
On Fri, 19 Jul 2019 at 15:22, Peter Zijlstra wrote:
>
> On Fri, Jul 19, 2019 at 09:58:23AM +0200, Vincent Guittot wrote:
> > enum group_type {
> > - group_other = 0,
> > + group_has_spare = 0,
> > + group_fully_busy,
> > group_misfit_task,
> > + group_asym_capacity,
> >
On Fri, 19 Jul 2019 at 14:52, Peter Zijlstra wrote:
>
> On Fri, Jul 19, 2019 at 09:58:23AM +0200, Vincent Guittot wrote:
> > @@ -7060,12 +7048,21 @@ static unsigned long __read_mostly max_load_balance_interval = HZ/10;
> > enum fbq_type { regular, remote, all };
> >
> > enum group_type {
>
On Fri, Jul 19, 2019 at 09:58:23AM +0200, Vincent Guittot wrote:
> enum group_type {
> - group_other = 0,
> + group_has_spare = 0,
> + group_fully_busy,
> group_misfit_task,
> + group_asym_capacity,
> group_imbalanced,
> group_overloaded,
> };
The order of this
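For reference, the enum this hunk ends up with, annotated for readability (the
comments are added here and are not text from the patch). The values are
ordered from least to most severe, and update_sd_pick_busiest() relies on that
ordering by treating the group with the numerically higher classification as
the busier one:

enum group_type {
        group_has_spare = 0,    /* spare capacity: the group can take more tasks */
        group_fully_busy,       /* no spare capacity, but not overloaded either */
        group_misfit_task,      /* a task doesn't fit its CPU's capacity (asym CPUs) */
        group_asym_capacity,    /* asym packing: work should move to the preferred CPU */
        group_imbalanced,       /* pinned tasks defeated a previous balance attempt */
        group_overloaded,       /* more load than capacity: balance by load */
};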
On Fri, Jul 19, 2019 at 09:58:23AM +0200, Vincent Guittot wrote:
> @@ -8029,17 +8063,24 @@ static inline void update_sg_lb_stats(struct lb_env *env,
> }
> }
>
> - /* Adjust by relative CPU capacity of the group */
> - sgs->group_capacity = group->sgc->capacity;
> -
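The removed lines are the unconditional per-group avg_load computation. Before
the rework, update_sg_lb_stats() always produced a capacity-normalized load
for every group, from memory roughly:

        /* Adjust by relative CPU capacity of the group */
        sgs->group_capacity = group->sgc->capacity;
        sgs->avg_load = (sgs->group_load * SCHED_CAPACITY_SCALE) /
                                                sgs->group_capacity;

With the rework, avg_load only matters when load really is the metric being
balanced (the overloaded case), so it no longer needs to be computed in the
common path.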
On Fri, Jul 19, 2019 at 09:58:23AM +0200, Vincent Guittot wrote:
> @@ -7887,7 +7908,7 @@ static inline int sg_imbalanced(struct sched_group *group)
> static inline bool
> group_has_capacity(struct lb_env *env, struct sg_lb_stats *sgs)
> {
> - if (sgs->sum_h_nr_running < sgs->group_weight)
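For context, group_has_capacity() decides whether a group still has room,
first by comparing runnable tasks to the number of CPUs and then utilization
to capacity; the pre-patch version read roughly as follows (reconstructed from
memory, with sum_h_nr_running as renamed earlier in the series):

static inline bool
group_has_capacity(struct lb_env *env, struct sg_lb_stats *sgs)
{
        /* Fewer runnable tasks than CPUs: there is definitely room. */
        if (sgs->sum_h_nr_running < sgs->group_weight)
                return true;

        /* Utilization still below capacity by the imbalance_pct margin. */
        if ((sgs->group_capacity * 100) >
                        (sgs->group_util * env->sd->imbalance_pct))
                return true;

        return false;
}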
On Fri, Jul 19, 2019 at 09:58:23AM +0200, Vincent Guittot wrote:
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 67f0acd..472959df 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5376,18 +5376,6 @@ static unsigned long capacity_of(int cpu)
> return cpu_r
On Fri, Jul 19, 2019 at 09:58:23AM +0200, Vincent Guittot wrote:
> @@ -7060,12 +7048,21 @@ static unsigned long __read_mostly max_load_balance_interval = HZ/10;
> enum fbq_type { regular, remote, all };
>
> enum group_type {
> - group_other = 0,
> + group_has_spare = 0,
> + group
The load_balance algorithm contains some heuristics which have become
meaningless since the rework of the metrics and the introduction of PELT.
Furthermore, it's sometimes difficult to fix wrong scheduling decisions
because everything is based on load, whereas some imbalances are not
related to the load