On 10/16/19 5:26 PM, Vincent Guittot wrote:
> On Wed, 16 Oct 2019 at 09:21, Parth Shah wrote:
>>
>>
>>
>> On 9/19/19 1:03 PM, Vincent Guittot wrote:
>>
>> [...]
>>
>>> Signed-off-by: Vincent Guittot
>>> ---
>>> kernel/sched/fair.c | 585 ++-
On Wed, 16 Oct 2019 at 09:21, Parth Shah wrote:
>
>
>
> On 9/19/19 1:03 PM, Vincent Guittot wrote:
>
> [...]
>
> > Signed-off-by: Vincent Guittot
> > ---
> > kernel/sched/fair.c | 585 ++--
> > 1 file changed, 380 insertions(+), 205 deletions(-)
On 9/19/19 1:03 PM, Vincent Guittot wrote:
[...]
> Signed-off-by: Vincent Guittot
> ---
> kernel/sched/fair.c | 585 ++--
> 1 file changed, 380 insertions(+), 205 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> ind
On Tue, 8 Oct 2019 at 19:55, Peter Zijlstra wrote:
>
> On Thu, Sep 19, 2019 at 09:33:35AM +0200, Vincent Guittot wrote:
> > + if (busiest->group_type == group_asym_packing) {
> > + /*
> > + * In case of asym capacity, we will try to migrate all load to
> > + * the preferred CPU.
On Tue, 8 Oct 2019 at 19:39, Peter Zijlstra wrote:
>
> On Tue, Oct 08, 2019 at 05:30:02PM +0200, Vincent Guittot wrote:
>
> > This is how I plan to get rid of the problem:
> > + if (busiest->group_weight == 1 || sds->prefer_sibling) {
> > + unsigned int nr_diff = busiest->sum_h_nr_running;
On Thu, Sep 19, 2019 at 09:33:35AM +0200, Vincent Guittot wrote:
> + if (busiest->group_type == group_asym_packing) {
> + /*
> + * In case of asym capacity, we will try to migrate all load to
> + * the preferred CPU.
> + */
> + env-
On 08/10/2019 17:39, Valentin Schneider wrote:
>>
>> But -fno-strict-overflow mandates 2s complement for all such signed
>> issues.
>>
>
> So then there really shouldn't be any ambiguity. I have no idea if
> -fno-strict-overflow then also lifts the undefinedness of the right shifts,
> gotta get my
On Tue, Oct 08, 2019 at 05:30:02PM +0200, Vincent Guittot wrote:
> This is how I plan to get rid of the problem:
> + if (busiest->group_weight == 1 || sds->prefer_sibling) {
> + unsigned int nr_diff = busiest->sum_h_nr_running;
> + /*
> +
On 08/10/2019 17:33, Peter Zijlstra wrote:
> On Tue, Oct 08, 2019 at 03:34:04PM +0100, Valentin Schneider wrote:
>> On 08/10/2019 15:16, Peter Zijlstra wrote:
>>> On Wed, Oct 02, 2019 at 11:47:59AM +0100, Valentin Schneider wrote:
>>>
Yeah, right shift on signed negative values are implementation defined.
On Tue, Oct 08, 2019 at 03:34:04PM +0100, Valentin Schneider wrote:
> On 08/10/2019 15:16, Peter Zijlstra wrote:
> > On Wed, Oct 02, 2019 at 11:47:59AM +0100, Valentin Schneider wrote:
> >
> >> Yeah, right shift on signed negative values are implementation defined.
> >
> > Seriously? Even under -fno-strict-overflow?
On 08/10/2019 16:30, Vincent Guittot wrote:
[...]
>
> This is how I plan to get rid of the problem:
> + if (busiest->group_weight == 1 || sds->prefer_sibling) {
> + unsigned int nr_diff = busiest->sum_h_nr_running;
> + /*
> +
Le Tuesday 08 Oct 2019 à 15:34:04 (+0100), Valentin Schneider a écrit :
> On 08/10/2019 15:16, Peter Zijlstra wrote:
> > On Wed, Oct 02, 2019 at 11:47:59AM +0100, Valentin Schneider wrote:
> >
> >> Yeah, right shift on signed negative values are implementation defined.
> >
> > Seriously? Even under -fno-strict-overflow?
On 08/10/2019 15:16, Peter Zijlstra wrote:
> On Wed, Oct 02, 2019 at 11:47:59AM +0100, Valentin Schneider wrote:
>
>> Yeah, right shift on signed negative values are implementation defined.
>
> Seriously? Even under -fno-strict-overflow? There is a perfectly
> sensible operation for signed shift
On Wed, Oct 02, 2019 at 11:47:59AM +0100, Valentin Schneider wrote:
> Yeah, right shift on signed negative values are implementation defined.
Seriously? Even under -fno-strict-overflow? There is a perfectly
sensible operation for signed shift right, this stuff should not be
undefined.
On Wed, Oct 02, 2019 at 11:21:20AM +0200, Dietmar Eggemann wrote:
> I thought we should always order local variable declarations from
> longest to shortest line but can't find this rule in coding-style.rst
> either.
You're right though, that is generally encouraged. From last year's
(2018) KS ther
On 02/10/2019 09:30, Vincent Guittot wrote:
>> Isn't that one somewhat risky?
>>
>> Say both groups are classified group_has_spare and we do prefer_sibling.
>> We'd select busiest as the one with the maximum number of busy CPUs, but it
> >> could be so that busiest.sum_h_nr_running < local.sum_h_nr_running
On 02/10/2019 10:23, Vincent Guittot wrote:
> On Tue, 1 Oct 2019 at 18:53, Dietmar Eggemann
> wrote:
>>
>> On 01/10/2019 10:14, Vincent Guittot wrote:
>>> On Mon, 30 Sep 2019 at 18:24, Dietmar Eggemann
>>> wrote:
Hi Vincent,
On 19/09/2019 09:33, Vincent Guittot wrote:
>>
> [
On 02/10/2019 08:44, Vincent Guittot wrote:
> On Tue, 1 Oct 2019 at 18:53, Dietmar Eggemann
> wrote:
>>
>> On 01/10/2019 10:14, Vincent Guittot wrote:
>>> On Mon, 30 Sep 2019 at 18:24, Dietmar Eggemann
>>> wrote:
Hi Vincent,
On 19/09/2019 09:33, Vincent Guittot wrote:
>>
>>
On Tue, 1 Oct 2019 at 19:47, Valentin Schneider
wrote:
>
> On 19/09/2019 08:33, Vincent Guittot wrote:
>
> [...]
>
> > @@ -8283,69 +8363,133 @@ static inline void update_sd_lb_stats(struct
> > lb_env *env, struct sd_lb_stats *sd
> > */
> > static inline void calculate_imbalance(struct lb_env *
On Tue, 1 Oct 2019 at 18:53, Dietmar Eggemann wrote:
>
> On 01/10/2019 10:14, Vincent Guittot wrote:
> > On Mon, 30 Sep 2019 at 18:24, Dietmar Eggemann
> > wrote:
> >>
> >> Hi Vincent,
> >>
> >> On 19/09/2019 09:33, Vincent Guittot wrote:
>
[...]
>
> >>> + if (busiest->group_weight
On Tue, 1 Oct 2019 at 18:53, Dietmar Eggemann wrote:
>
> On 01/10/2019 10:14, Vincent Guittot wrote:
> > On Mon, 30 Sep 2019 at 18:24, Dietmar Eggemann
> > wrote:
> >>
> >> Hi Vincent,
> >>
> >> On 19/09/2019 09:33, Vincent Guittot wrote:
>
> [...]
>
> >>> @@ -7347,7 +7362,7 @@ static int detach
On 19/09/2019 08:33, Vincent Guittot wrote:
[...]
> @@ -8283,69 +8363,133 @@ static inline void update_sd_lb_stats(struct lb_env
> *env, struct sd_lb_stats *sd
> */
> static inline void calculate_imbalance(struct lb_env *env, struct
> sd_lb_stats *sds)
> {
> - unsigned long max_pull, lo
On 01/10/2019 11:14, Vincent Guittot wrote:
> group_asym_packing
>
> On Tue, 1 Oct 2019 at 10:15, Dietmar Eggemann
> wrote:
>>
>> On 19/09/2019 09:33, Vincent Guittot wrote:
>>
>>
>> [...]
>>
>>> @@ -8042,14 +8104,24 @@ static inline void update_sg_lb_stats(struct lb_env
>>> *env,
>>>
On 01/10/2019 10:14, Vincent Guittot wrote:
> On Mon, 30 Sep 2019 at 18:24, Dietmar Eggemann
> wrote:
>>
>> Hi Vincent,
>>
>> On 19/09/2019 09:33, Vincent Guittot wrote:
[...]
>>> @@ -7347,7 +7362,7 @@ static int detach_tasks(struct lb_env *env)
>>> {
>>> struct list_head *tasks = &en
group_asym_packing
On Tue, 1 Oct 2019 at 10:15, Dietmar Eggemann wrote:
>
> On 19/09/2019 09:33, Vincent Guittot wrote:
>
>
> [...]
>
> > @@ -8042,14 +8104,24 @@ static inline void update_sg_lb_stats(struct lb_env
> > *env,
> > }
> > }
> >
> > - /* Adjust by relative CPU capacity of the group */
On 19/09/2019 09:33, Vincent Guittot wrote:
[...]
> @@ -8042,14 +8104,24 @@ static inline void update_sg_lb_stats(struct lb_env
> *env,
> }
> }
>
> - /* Adjust by relative CPU capacity of the group */
> + /* Check if dst cpu is idle and preferred to this group */
>
On Mon, 30 Sep 2019 at 18:24, Dietmar Eggemann wrote:
>
> Hi Vincent,
>
> On 19/09/2019 09:33, Vincent Guittot wrote:
>
> these are just some comments & questions based on a code study. Haven't
> run any tests with it yet.
>
> [...]
>
> > The type of sched_group has been extended to better reflect
Hi Vincent,
On 19/09/2019 09:33, Vincent Guittot wrote:
these are just some comments & questions based on a code study. Haven't
run any tests with it yet.
[...]
> The type of sched_group has been extended to better reflect the type of
> imbalance. We now have :
> group_has_spare
>
On Mon, 30 Sep 2019 at 03:13, Rik van Riel wrote:
>
> On Thu, 2019-09-19 at 09:33 +0200, Vincent Guittot wrote:
> >
> > Also the load balance decisions have been consolidated in the 3
> > functions
> > below after removing the few bypasses and hacks of the current code:
> > - update_sd_pick_busiest() select the busiest sched_group.
On Thu, 2019-09-19 at 09:33 +0200, Vincent Guittot wrote:
>
> Also the load balance decisions have been consolidated in the 3
> functions
> below after removing the few bypasses and hacks of the current code:
> - update_sd_pick_busiest() select the busiest sched_group.
> - find_busiest_group() che