Hi Vincent,
On 03/27/2017 12:04 AM, Vincent Guittot wrote:
> On 25 March 2017 at 02:14, Sai Gurrappadi wrote:
>> Hi Rafael,
>>
>> On 03/21/2017 04:08 PM, Rafael J. Wysocki wrote:
>>> From: Rafael J. Wysocki
>>>
>>> The way the schedutil governor uses the PELT metric causes it to
>>> underestimate the CPU utilization in some cases.
Hi Rafael,
On 03/21/2017 04:08 PM, Rafael J. Wysocki wrote:
> From: Rafael J. Wysocki
>
> The way the schedutil governor uses the PELT metric causes it to
> underestimate the CPU utilization in some cases.
>
> That can be easily demonstrated by running kernel compilation on
> a Sandy Bridge Int
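For context, schedutil is generally described as requesting roughly
next_freq = 1.25 * max_freq * util / max, so any underestimation of the PELT
utilization lowers the requested frequency proportionally. A minimal sketch of
that mapping (simplified and illustrative; the function name is made up, this
is not the kernel's exact code):

/* Toy model of the util -> frequency mapping with ~1.25x headroom. */
static unsigned int next_freq_sketch(unsigned int max_freq,
				     unsigned long util, unsigned long max)
{
	/* max_freq + max_freq/4 gives the ~1.25x margin */
	return (unsigned int)(((unsigned long)max_freq + (max_freq >> 2)) *
			      util / max);
}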
On 03/23/2017 06:39 PM, Rafael J. Wysocki wrote:
> On Thu, Mar 23, 2017 at 8:26 PM, Sai Gurrappadi wrote:
>> Hi Rafael,
>
> Hi,
>
>> On 03/21/2017 04:08 PM, Rafael J. Wysocki wrote:
>>> From: Rafael J. Wysocki
>>
>>
>>
>>>
On 03/23/2017 12:26 PM, Sai Gurrappadi wrote:
>
> Hm, sorry, I am a bit confused; perhaps you could help me understand the
> problem/solution better :)
>
> Say we have this simple case of only a single periodic task running on
> one CPU, wouldn't the PELT u
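For reference, a rough standalone model of PELT for such a periodic task
(illustrative only: per-millisecond decay with y^32 = 0.5, no frequency/cpu
invariance, no sub-period accounting) shows the tracked utilization
oscillating around the duty cycle rather than sitting exactly on it:

#include <math.h>
#include <stdio.h>

int main(void)
{
	const double y = pow(0.5, 1.0 / 32.0);	/* decay factor: y^32 == 0.5 */
	double util = 0.0, util_min = 0.0, util_max = 0.0;
	int period, ms;

	/* 50% duty cycle: 16ms running, 16ms sleeping, repeated until converged */
	for (period = 0; period < 1000; period++) {
		for (ms = 0; ms < 16; ms++)
			util = util * y + 1024.0 * (1.0 - y);	/* running */
		util_max = util;
		for (ms = 0; ms < 16; ms++)
			util = util * y;			/* sleeping */
		util_min = util;
	}
	/* prints roughly 424 .. 600, i.e. oscillation around 512 */
	printf("util oscillates between ~%.0f and ~%.0f\n", util_min, util_max);
	return 0;
}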
Hi Rafael,
On 03/21/2017 04:08 PM, Rafael J. Wysocki wrote:
> From: Rafael J. Wysocki
>
> That has been attributed to CPU utilization metric updates on task
> migration that cause the total utilization value for the CPU to be
> reduced by the utilization of the migrated task. If that happens
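A toy model of the accounting being described (illustrative only; in the
kernel the subtraction happens when the migrating task's contribution is
detached from the source CPU's root cfs_rq):

/* When a task migrates away, its utilization is removed from the source CPU
 * right away, so a cpufreq sample taken on that CPU immediately afterwards
 * sees the reduced value. Types and names are made up for illustration. */
struct cpu_util_sketch  { unsigned long util_avg; };
struct task_util_sketch { unsigned long util_avg; };

static void detach_task_util(struct cpu_util_sketch *cpu,
			     const struct task_util_sketch *p)
{
	cpu->util_avg -= (p->util_avg < cpu->util_avg) ? p->util_avg
						       : cpu->util_avg;
}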
On 06/30/2016 12:49 AM, Morten Rasmussen wrote:
> On Thu, Jun 23, 2016 at 02:20:48PM -0700, Sai Gurrappadi wrote:
>> Hi Morten,
>>
>> On 06/22/2016 10:03 AM, Morten Rasmussen wrote:
>>
>> [...]
>>
>>>
>>> +/*
>>> + * group_smaller_cpu_capacity: Returns true if sched_group sg has smaller
Hi Morten,
On 06/22/2016 10:03 AM, Morten Rasmussen wrote:
[...]
>
> +/*
> + * group_smaller_cpu_capacity: Returns true if sched_group sg has smaller
> + * per-cpu capacity than sched_group ref.
> + */
> +static inline bool
> +group_smaller_cpu_capacity(struct sched_group *sg, struct sched_group *ref)
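The comparison presumably reduces to comparing the two groups' per-cpu
capacities; a toy sketch of that idea (the struct and field names are
assumptions for illustration, not the patch's actual data structures):

#include <stdbool.h>

struct sg_capacity_sketch { unsigned long max_per_cpu_capacity; };

/* True when the cpus in 'sg' have less per-cpu capacity than those in 'ref'. */
static inline bool
group_smaller_cpu_capacity_sketch(const struct sg_capacity_sketch *sg,
				  const struct sg_capacity_sketch *ref)
{
	return sg->max_per_cpu_capacity < ref->max_per_cpu_capacity;
}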
On 03/24/2016 06:01 PM, Steve Muckle wrote:
> Hi Sai,
>
> On 03/24/2016 04:47 PM, Sai Gurrappadi wrote:
>>> @@ -2850,7 +2851,8 @@ static inline int update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
>>> cfs_rq->load_last_update_time_c
Hi Steve,
On 03/21/2016 05:21 PM, Steve Muckle wrote:
> There's no reason to call the cpufreq hook if the root cfs_rq
> utilization has not been modified.
>
> Signed-off-by: Steve Muckle
> ---
> kernel/sched/fair.c | 10 ++++++----
> 1 file changed, 6 insertions(+), 4 deletions(-)
>
> diff --g
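The idea is simply to gate the hook on an actual change of the root cfs_rq
utilization; a toy sketch of such a guard (names and structure are
illustrative, not the patch itself):

struct root_cfs_sketch { unsigned long util_avg, last_reported_util; };

/* Only poke cpufreq when the tracked utilization actually changed. */
static void maybe_update_cpufreq(struct root_cfs_sketch *rq,
				 void (*cpufreq_hook)(unsigned long util))
{
	if (rq->util_avg == rq->last_reported_util)
		return;			/* nothing changed, skip the hook */
	rq->last_reported_util = rq->util_avg;
	cpufreq_hook(rq->util_avg);
}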
On 03/21/2016 03:53 AM, Juri Lelli wrote:
> Hi Sai,
>
> On 18/03/16 10:49, Sai Gurrappadi wrote:
>> Hi Juri,
>>
>> On 03/18/2016 07:24 AM, Juri Lelli wrote:
>>
>>
>>
>>> +
>>> +
Hi Juri,
On 03/18/2016 07:24 AM, Juri Lelli wrote:
> +
> +==
> +2 - CPU capacity definition
> +==
> +
> +CPU capacity is a number that provides the scheduler with information about
> +CPU heterogeneity. Such heterogenei
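As a concrete illustration of such a number, one plausible way to derive it is
to take a raw per-cpu performance score (e.g. a benchmark result times the
cpu's maximum frequency) and normalize it so the most capable cpu ends up at
1024; a toy sketch of that arithmetic (not the binding's exact algorithm):

#define CAPACITY_SCALE 1024UL

/* Scale raw per-cpu scores so the largest becomes CAPACITY_SCALE. */
static void normalize_capacities(const unsigned long raw[],
				 unsigned long cap[], int nr_cpus)
{
	unsigned long max_raw = 0;
	int i;

	for (i = 0; i < nr_cpus; i++)
		if (raw[i] > max_raw)
			max_raw = raw[i];

	if (!max_raw)
		return;		/* nothing to normalize */

	for (i = 0; i < nr_cpus; i++)
		cap[i] = raw[i] * CAPACITY_SCALE / max_raw;
}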
On 08/13/2015 11:10 AM, Peter Zijlstra wrote:
> On Tue, Jul 07, 2015 at 07:24:11PM +0100, Morten Rasmussen wrote:
>> cpuidle associates all idle-states with each cpu while the energy model
>> associates them with the sched_group covering the cpus coordinating
>> entry to the idle-state. To look up
Hi Morten,
On 07/07/2015 11:24 AM, Morten Rasmussen wrote:
> In mainline find_idlest_group() selects the wake-up target group purely
> based on group load which leads to suboptimal choices in low load
> scenarios. An idle group with reduced capacity (due to RT tasks or
> different cpu type) isn't
Hi Morten,
On 07/07/2015 11:24 AM, Morten Rasmussen wrote:
> ---
> +static int energy_aware_wake_cpu(struct task_struct *p, int target)
> +{
> +	struct sched_domain *sd;
> +	struct sched_group *sg, *sg_target;
> +	int target_max_cap = INT_MAX;
> +	int target_cpu = task_cpu(p);
> +
On 05/12/2015 12:38 PM, Morten Rasmussen wrote:
> Tasks being dequeued for the last time (state == TASK_DEAD) are dequeued
> with the DEQUEUE_SLEEP flag which causes their load and utilization
> contributions to be added to the runqueue blocked load and utilization.
> Hence they will contain load or
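One way to express the idea being fixed (a toy sketch only, not the actual
patch): the final dequeue of a dying task should be treated as a permanent
removal rather than a sleep, so its contribution is not parked in the blocked
load where it will never be woken up again:

#include <stdbool.h>

struct rq_load_sketch { unsigned long runnable, blocked; };

static void dequeue_sketch(struct rq_load_sketch *rq, unsigned long contrib,
			   bool sleeping, bool task_dead)
{
	rq->runnable -= (contrib < rq->runnable) ? contrib : rq->runnable;
	if (sleeping && !task_dead)
		rq->blocked += contrib;	/* task is expected to run again */
	/* TASK_DEAD: the contribution is gone for good, drop it entirely */
}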
On 05/12/2015 12:38 PM, Morten Rasmussen wrote:
> Test results for ARM TC2 (2xA15+3xA7) with cpufreq enabled:
>
> sysbench: Single task running for 3 seconds.
> rt-app [4]: mp3 playback use-case model
> rt-app [4]: 5 ~[6,13,19,25,31,38,44,50]% periodic (2ms) tasks
>
> Note: % is relative to the
On 02/04/2015 10:31 AM, Morten Rasmussen wrote:
> From: Dietmar Eggemann
> 	while (!list_empty(tasks)) {
> @@ -6121,6 +6121,20 @@ static int detach_tasks(struct lb_env *env)
> 		if (!can_migrate_task(p, env))
> 			goto next;
>
> +		if (env->use_ea)
On 02/04/2015 10:31 AM, Morten Rasmussen wrote:
> +/*
> + * sched_group_energy(): Returns absolute energy consumption of cpus belonging
> + * to the sched_group including shared resources shared only by members of the
> + * group. Iterates over all cpus in the hierarchy below the sched_group
On 03/16/2015 07:47 AM, Morten Rasmussen wrote:
> On Fri, Mar 13, 2015 at 10:47:16PM +0000, Sai Gurrappadi wrote:
>> On 02/04/2015 10:31 AM, Morten Rasmussen wrote:
>>> +static int energy_aware_wake_cpu(struct task_struct *p)
>>> +{
>>> + struct sched_domain
On 02/04/2015 10:31 AM, Morten Rasmussen wrote:
> For energy-aware load-balancing decisions it is necessary to know the
> energy consumption estimates of groups of cpus. This patch introduces a
> basic function, sched_group_energy(), which estimates the energy
> consumption of the cpus in the group
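The estimate is essentially utilization-weighted busy power plus idle power
for the remaining fraction, summed over the groups involved; a standalone
sketch of that arithmetic (struct and field names are illustrative
assumptions, not the patch's data structures):

struct group_energy_sketch {
	unsigned long busy_power;	/* power at the current capacity state */
	unsigned long idle_power;	/* power in the applicable idle state */
	unsigned long util;		/* aggregated utilization of the group */
	unsigned long capacity;		/* total capacity of the group */
};

/* Per-group estimate; callers would sum this over all groups.
 * Assumes capacity > 0 and util <= capacity. */
static unsigned long group_energy_estimate(const struct group_energy_sketch *g)
{
	unsigned long busy = g->busy_power * g->util / g->capacity;
	unsigned long idle = g->idle_power * (g->capacity - g->util) / g->capacity;

	return busy + idle;
}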
On 02/04/2015 10:31 AM, Morten Rasmussen wrote:
> Let available compute capacity and estimated energy impact select
> wake-up target cpu when energy-aware scheduling is enabled.
> energy_aware_wake_cpu() attempts to find group of cpus with sufficient
> compute capacity to accommodate the task and f
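The selection described above can be sketched as: among the groups whose cpus
have enough spare capacity to fit the task, prefer the one with the smallest
per-cpu capacity, the intuition being that a smaller cpu is typically cheaper
in energy terms. A toy sketch under those assumptions (not the actual RFC
code):

struct wake_group_sketch {
	unsigned long cpu_capacity;	/* per-cpu capacity of the group */
	unsigned long spare;		/* capacity left after current load */
};

/* Returns the index of the chosen group, or -1 to fall back to task_cpu(). */
static int pick_target_group(const struct wake_group_sketch groups[],
			     int nr_groups, unsigned long task_util)
{
	unsigned long best_cap = ~0UL;
	int i, best = -1;

	for (i = 0; i < nr_groups; i++) {
		if (groups[i].spare < task_util)
			continue;		/* the task does not fit here */
		if (groups[i].cpu_capacity < best_cap) {
			best_cap = groups[i].cpu_capacity;
			best = i;
		}
	}
	return best;
}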