On Thu, 8 Nov 2018 at 12:35, Quentin Perret wrote:
>
> On Wednesday 07 Nov 2018 at 11:47:09 (+0100), Dietmar Eggemann wrote:
> > The important bit for EAS is that it only uses utilization in the
> > non-overutilized case. Here, utilization signals should look the same
> > between the two approaches [...]

On Wednesday 07 Nov 2018 at 11:47:09 (+0100), Dietmar Eggemann wrote:
> The important bit for EAS is that it only uses utilization in the
> non-overutilized case. Here, utilization signals should look the same
> between the two approaches, not considering tasks with long periods like the
> 39/80ms [...]

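A worked example of why the 39/80ms task is excluded: with the standard
PELT decay factor y, defined by y^32 = 0.5 (32ms half-life), a task
running 39ms out of every 80ms spends longer than the half-life both
running and sleeping, so its utilization oscillates in a wide window
instead of settling near its duty cycle. A minimal userspace sketch of
the steady state, assuming idealized 1ms accounting (my illustration,
not code from the thread):

#include <math.h>
#include <stdio.h>

int main(void)
{
        const double y = pow(0.5, 1.0 / 32.0);  /* per-ms PELT decay */
        const double run = 39.0, period = 80.0; /* ms */

        /*
         * Steady state: u_max = u_min * y^run + 1024 * (1 - y^run)
         * and u_min = u_max * y^(period - run), hence:
         */
        double u_max = 1024.0 * (1.0 - pow(y, run)) / (1.0 - pow(y, period));
        double u_min = u_max * pow(y, period - run);

        printf("util oscillates between ~%.0f and ~%.0f (duty cycle ~%.0f)\n",
               u_min, u_max, 1024.0 * run / period);
        return 0;
}

Built with "cc pelt_window.c -lm", this prints a window of roughly
292..710 around the ~499 duty-cycle value.
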
On Wed, 7 Nov 2018 at 11:47, Dietmar Eggemann wrote:
>
> On 11/5/18 10:10 AM, Vincent Guittot wrote:
> > On Fri, 2 Nov 2018 at 16:36, Dietmar Eggemann
> > wrote:
> >>
> >> On 10/26/18 6:11 PM, Vincent Guittot wrote:
>
> [...]
>
> >> Thinking about this new approach on a big.LITTLE platform:
> >> [...]

On 11/5/18 10:10 AM, Vincent Guittot wrote:
> On Fri, 2 Nov 2018 at 16:36, Dietmar Eggemann wrote:
>>
>> On 10/26/18 6:11 PM, Vincent Guittot wrote:

[...]

>> Thinking about this new approach on a big.LITTLE platform:
>> CPU Capacities big: 1024 LITTLE: 512, performance CPUfreq governor
>> A 50% (runtime/period) task [...]

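To put numbers on this, a hedged sketch (the 8ms/16ms task below is an
assumed example, not taken from the thread) of how the time-scaling
approach keeps a 50% task's utilization capacity-invariant, at the cost
of leaving no idle time on the LITTLE CPU:

#include <stdio.h>

#define SCHED_CAPACITY_SCALE    1024UL

int main(void)
{
        unsigned long runtime = 8, period = 16; /* ms, on the big CPU */
        unsigned long little_cap = 512;

        /* The same work needs twice the wall-clock time on the LITTLE CPU. */
        unsigned long little_runtime =
                runtime * SCHED_CAPACITY_SCALE / little_cap;

        /*
         * Time scaling: running time advances PELT time at only
         * little_cap/1024 of the wall-clock rate, so the scaled running
         * time matches the big CPU again.
         */
        unsigned long scaled = little_runtime * little_cap / SCHED_CAPACITY_SCALE;

        printf("big:    %lums/%lums -> util ~%lu\n",
               runtime, period, SCHED_CAPACITY_SCALE * runtime / period);
        printf("LITTLE: %lums running, %lums scaled -> util ~%lu\n",
               little_runtime, scaled,
               SCHED_CAPACITY_SCALE * scaled / period);
        return 0;
}

Both converge to ~512 on the 1024 scale even though the LITTLE CPU ends
up 100% busy, which is where the no-spare-cycles discussion below picks
up.
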
On Mon, Nov 05, 2018 at 02:58:54PM +0000, Morten Rasmussen wrote:
> It has always been debatable what to do with utilization when there are
> no spare cycles.
>
> In Dietmar's example where two 25% tasks are put on a 512 (50%) capacity
> CPU we add just enough utilization to have no spare cycles [...]

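The arithmetic behind that example, as a sketch (utilization expressed
on the usual 0..1024 scale of the biggest CPU at its highest
frequency):

#include <stdio.h>

#define SCHED_CAPACITY_SCALE    1024UL

int main(void)
{
        unsigned long task_util = SCHED_CAPACITY_SCALE / 4; /* 25% -> 256 */
        unsigned long cpu_capacity = 512;                   /* 50% CPU */
        unsigned long sum_util = 2 * task_util;

        /* 512 of utilization on 512 of capacity: no spare cycles. */
        printf("sum=%lu capacity=%lu spare=%ld\n",
               sum_util, cpu_capacity, (long)(cpu_capacity - sum_util));
        return 0;
}

Once the sum reaches the capacity, utilization alone can no longer tell
whether the tasks would run longer if they could, which is the
debatable part.
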
On Mon, 5 Nov 2018 at 15:59, Morten Rasmussen wrote:
>
> On Mon, Nov 05, 2018 at 10:10:34AM +0100, Vincent Guittot wrote:
> > On Fri, 2 Nov 2018 at 16:36, Dietmar Eggemann
> > wrote:
> > >
...
> > > >
> > > > In order to achieve this time scaling, a new clock_pelt is created per
> > > > rq.
> > [...]

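A rough userspace model of the idea in that last quoted sentence (my
sketch, not the patch: the struct and names are made up, and the real
implementation also has to deal with idle time and the max-capacity
fast path):

#include <stdio.h>

#define SCHED_CAPACITY_SHIFT    10

struct rq_model {
        unsigned long long clock_task;  /* task wall-clock time, us */
        unsigned long long clock_pelt;  /* scaled PELT time, us */
        unsigned long cpu_capacity;     /* uarch capacity, 0..1024 */
        unsigned long freq_capacity;    /* current frequency, 0..1024 */
};

static void update_clock_pelt(struct rq_model *rq, unsigned long long delta)
{
        rq->clock_task += delta;

        /* Scale elapsed time by both uarch and frequency capacity. */
        delta = (delta * rq->cpu_capacity) >> SCHED_CAPACITY_SHIFT;
        delta = (delta * rq->freq_capacity) >> SCHED_CAPACITY_SHIFT;
        rq->clock_pelt += delta;
}

int main(void)
{
        /* A LITTLE CPU (capacity 512) running at half frequency. */
        struct rq_model rq = { 0, 0, 512, 512 };

        update_clock_pelt(&rq, 4000);   /* 4ms of runtime */
        printf("clock_task=%lluus clock_pelt=%lluus\n",
               rq.clock_task, rq.clock_pelt);  /* 4000 vs 1000 */
        return 0;
}

PELT sums can then be accumulated against clock_pelt at a fixed maximum
rate instead of scaling each segment's contribution.
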
On Mon, Nov 05, 2018 at 10:10:34AM +0100, Vincent Guittot wrote:
> On Fri, 2 Nov 2018 at 16:36, Dietmar Eggemann
> wrote:
> >
> > On 10/26/18 6:11 PM, Vincent Guittot wrote:
> > > The current implementation of load tracking invariance scales the
> > > contribution with current frequency and uarch [...]

On Fri, 2 Nov 2018 at 16:36, Dietmar Eggemann wrote:
>
> On 10/26/18 6:11 PM, Vincent Guittot wrote:
> > The current implementation of load tracking invariance scales the
> > contribution with current frequency and uarch performance (only for
> > utilization) of the CPU. One main result of this formula [...]

On Thu, 1 Nov 2018 at 10:38, Dietmar Eggemann wrote:
>
> On 10/31/18 10:18 AM, Vincent Guittot wrote:
> > Hi Dietmar,
> >
> > On Wed, 31 Oct 2018 at 08:20, Dietmar Eggemann
> > wrote:
> >>
> >> On 10/26/18 6:11 PM, Vincent Guittot wrote:
> >>
> >> [...]
> >>
> >>> static int select_idle_sibling(struct task_struct *p, int prev_cpu, int cpu); [...]

On 10/26/18 6:11 PM, Vincent Guittot wrote:
> The current implementation of load tracking invariance scales the
> contribution with current frequency and uarch performance (only for
> utilization) of the CPU. One main result of this formula is that the
> figures are capped by current capacity of CPU. Another [...]

On 10/31/18 10:18 AM, Vincent Guittot wrote:
> Hi Dietmar,
>
> On Wed, 31 Oct 2018 at 08:20, Dietmar Eggemann wrote:
>>
>> On 10/26/18 6:11 PM, Vincent Guittot wrote:
>>
>> [...]
>>
>>> static int select_idle_sibling(struct task_struct *p, int prev_cpu, int cpu);
>>> static unsigned long task_h_load(struct task_struct *p); [...]

Hi Dietmar,
On Wed, 31 Oct 2018 at 08:20, Dietmar Eggemann wrote:
>
> On 10/26/18 6:11 PM, Vincent Guittot wrote:
>
> [...]
>
> > static int select_idle_sibling(struct task_struct *p, int prev_cpu, int cpu);
> > static unsigned long task_h_load(struct task_struct *p);
> > @@ -764,7 +763,7 @@ void post_init_entity_util_avg(struct sched_entity *se) [...]

On 10/26/18 6:11 PM, Vincent Guittot wrote:

[...]

> static int select_idle_sibling(struct task_struct *p, int prev_cpu, int cpu);
> static unsigned long task_h_load(struct task_struct *p);
> @@ -764,7 +763,7 @@ void post_init_entity_util_avg(struct sched_entity *se)
>  * suc[...]

Hi Pavan,
On Tue, 30 Oct 2018 at 10:19, Pavan Kondeti wrote:
>
> Hi Vincent,
>
> On Fri, Oct 26, 2018 at 06:11:43PM +0200, Vincent Guittot wrote:
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 6806c27..7a69673 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c [...]

Hi Vincent,
On Fri, Oct 26, 2018 at 06:11:43PM +0200, Vincent Guittot wrote:
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 6806c27..7a69673 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -674,9 +674,8 @@ static u64 sched_vslice(struct cfs_rq *cfs_rq, struct sched_entity *se) [...]

The current implementation of load tracking invariance scales the
contribution with current frequency and uarch performance (only for
utilization) of the CPU. One main result of this formula is that the
figures are capped by current capacity of CPU. Another one is that the
load_avg is not invariant [...]

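To make the capping concrete, a hedged model of the scheme described
above (the numbers are mine; cap_scale mirrors the kernel helper of the
same name): the contribution is scaled by current frequency for both
load and utilization, and additionally by uarch capacity for
utilization, so an always-running task on a half-speed LITTLE CPU
saturates far below 1024:

#include <stdio.h>

#define SCHED_CAPACITY_SHIFT    10
#define SCHED_CAPACITY_SCALE    (1UL << SCHED_CAPACITY_SHIFT)

static unsigned long cap_scale(unsigned long delta, unsigned long scale)
{
        return (delta * scale) >> SCHED_CAPACITY_SHIFT;
}

int main(void)
{
        unsigned long scale_freq = 512; /* running at half frequency */
        unsigned long scale_cpu = 512;  /* LITTLE uarch capacity */
        unsigned long contrib = SCHED_CAPACITY_SCALE; /* always-running task */

        /* Frequency invariance, applied to load and utilization alike. */
        contrib = cap_scale(contrib, scale_freq);
        /* uarch invariance, applied to utilization only. */
        unsigned long util = cap_scale(contrib, scale_cpu);

        printf("utilization caps at %lu, not 1024\n", util);
        return 0;
}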