On Fri, 18 Apr 2014, Daniel Lezcano wrote:
> On 04/18/2014 02:53 PM, Peter Zijlstra wrote:
> > I suppose so; it's still a bit like we won't but we will :-)
> >
> > So we _will_ actually expose coupled C states through the topology bits,
> > that's good.
>
> Ah, ok. I think I understood where the c
On 04/18/2014 02:53 PM, Peter Zijlstra wrote:
On Fri, Apr 18, 2014 at 02:13:48PM +0200, Daniel Lezcano wrote:
On 04/18/2014 11:38 AM, Peter Zijlstra wrote:
On Thu, Apr 17, 2014 at 12:21:28PM -0400, Nicolas Pitre wrote:
CPU topology is needed to properly describe scheduling domains. Whether we
balance across domains or pack using as few domains as possible is a
separate issue.
On Fri, Apr 18, 2014 at 02:13:48PM +0200, Daniel Lezcano wrote:
> On 04/18/2014 11:38 AM, Peter Zijlstra wrote:
> >On Thu, Apr 17, 2014 at 12:21:28PM -0400, Nicolas Pitre wrote:
> >>CPU topology is needed to properly describe scheduling domains. Whether
> >>we balance across domains or pack using as few domains as possible is a
> >>separate issue.
On 04/18/2014 11:38 AM, Peter Zijlstra wrote:
On Thu, Apr 17, 2014 at 12:21:28PM -0400, Nicolas Pitre wrote:
CPU topology is needed to properly describe scheduling domains. Whether
we balance across domains or pack using as few domains as possible is a
separate issue. In other words, you shouldn't have to care in this
patch series
On Thu, Apr 17, 2014 at 12:21:28PM -0400, Nicolas Pitre wrote:
> CPU topology is needed to properly describe scheduling domains. Whether
> we balance across domains or pack using as few domains as possible is a
> separate issue. In other words, you shouldn't have to care in this
> patch series
On 04/18/2014 10:09 AM, Ingo Molnar wrote:
* Daniel Lezcano wrote:
On 04/17/2014 04:47 PM, Peter Zijlstra wrote:
On Thu, Apr 17, 2014 at 03:53:32PM +0200, Daniel Lezcano wrote:
Concerning the policy, I would suggest creating an entry in
/proc/sys/kernel/sched_power, where a couple of values could be performance
- power saving (0 / 1).
* Daniel Lezcano wrote:
> On 04/17/2014 04:47 PM, Peter Zijlstra wrote:
> >On Thu, Apr 17, 2014 at 03:53:32PM +0200, Daniel Lezcano wrote:
> >>Concerning the policy, I would suggest creating an entry in
> >>/proc/sys/kernel/sched_power, where a couple of values could be performance
> >>- power saving (0 / 1).
On Thu, 17 Apr 2014, Daniel Lezcano wrote:
> On 04/17/2014 05:53 PM, Nicolas Pitre wrote:
> > On Thu, 17 Apr 2014, Daniel Lezcano wrote:
> >
> > > Ok, refreshed the patchset, but before sending it out I would like to discuss
> > > the rationale of the changes and the policy, and change the patchset
> > > consequently.
On 04/17/2014 05:53 PM, Nicolas Pitre wrote:
On Thu, 17 Apr 2014, Daniel Lezcano wrote:
Ok, refreshed the patchset, but before sending it out I would like to discuss
the rationale of the changes and the policy, and change the patchset
consequently.
What order to choose if the cpu is idle?
Let's assume all cpus are idle on a d
On Thu, 17 Apr 2014, Daniel Lezcano wrote:
> Ok, refreshed the patchset, but before sending it out I would like to discuss
> the rationale of the changes and the policy, and change the patchset
> consequently.
>
> What order to choose if the cpu is idle?
>
> Let's assume all cpus are idle on a d
On 04/17/2014 04:47 PM, Peter Zijlstra wrote:
On Thu, Apr 17, 2014 at 03:53:32PM +0200, Daniel Lezcano wrote:
Concerning the policy, I would suggest creating an entry in
/proc/sys/kernel/sched_power, where a couple of values could be performance
- power saving (0 / 1).
Ingo wanted a sched_balance_policy file with 3 values:
On Thu, Apr 17, 2014 at 03:53:32PM +0200, Daniel Lezcano wrote:
> Concerning the policy, I would suggest creating an entry in
> /proc/sys/kernel/sched_power, where a couple of values could be performance
> - power saving (0 / 1).
Ingo wanted a sched_balance_policy file with 3 values:
"performance"
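As a toy sketch of how such a tri-state policy string could be validated (plain userspace C; the value names other than "performance" are invented placeholders, since the message is truncated before listing them):

```c
#include <string.h>

/* Hypothetical tri-state balance policy. Only "performance" is visible in
 * the quoted message; the other two names are guesses for illustration. */
enum sched_balance_policy {
	POLICY_PERFORMANCE,
	POLICY_POWERSAVING,
	POLICY_AUTO,
	POLICY_INVALID
};

/* Map a string written to the knob onto a policy value, rejecting
 * anything else, much as a sysctl/sysfs store handler would. */
static enum sched_balance_policy parse_policy(const char *buf)
{
	if (!strcmp(buf, "performance"))
		return POLICY_PERFORMANCE;
	if (!strcmp(buf, "powersaving"))
		return POLICY_POWERSAVING;
	if (!strcmp(buf, "auto"))
		return POLICY_AUTO;
	return POLICY_INVALID;
}
```

The point of the three-value form over a 0/1 boolean is that it leaves room for an automatic mode in between the two extremes.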
On 04/02/2014 05:05 AM, Nicolas Pitre wrote:
On Fri, 28 Mar 2014, Daniel Lezcano wrote:
As we know in which idle state the cpu is, we can investigate the following:
1. when did the cpu enter the idle state? the longer the cpu is idle, the
deeper it is idle
2. what is the exit latency? the greater the exit latency is, the deeper it is
On Fri, Mar 28, 2014 at 01:29:56PM +0100, Daniel Lezcano wrote:
> @@ -4336,20 +4337,53 @@ static int
> find_idlest_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
> {
> 	unsigned long load, min_load = ULONG_MAX;
> -	int idlest = -1;
> +	unsigned int min_exit_latency = UINT_MAX;
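Reading between the truncated lines, the hunk seems to extend the pure min-load scan with a minimum-exit-latency scan over idle cpus. A rough userspace model of that selection logic (struct and field names are invented for illustration, not the kernel's actual types):

```c
#include <limits.h>

/* Invented stand-in for the per-cpu data the real function reads. */
struct cpu_stat {
	int cpu;
	unsigned long load;        /* runqueue load when busy */
	unsigned int exit_latency; /* idle-state exit latency in us; 0 = busy */
};

/* Prefer the idle cpu in the shallowest state (smallest exit latency);
 * fall back to the least-loaded busy cpu when nothing is idle.
 * Returns -1 for an empty group. */
static int find_idlest_cpu_sketch(const struct cpu_stat *g, int n)
{
	unsigned long min_load = ULONG_MAX;
	unsigned int min_exit_latency = UINT_MAX;
	int idlest = -1, shallowest = -1;

	for (int i = 0; i < n; i++) {
		if (g[i].exit_latency) {		/* idle cpu */
			if (g[i].exit_latency < min_exit_latency) {
				min_exit_latency = g[i].exit_latency;
				shallowest = g[i].cpu;
			}
		} else if (g[i].load < min_load) {	/* busy cpu */
			min_load = g[i].load;
			idlest = g[i].cpu;
		}
	}
	return shallowest >= 0 ? shallowest : idlest;
}
```

With this shape, waking a shallowly idle cpu always beats piling work onto a loaded one, which is exactly the trade-off the thread is debating.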
On Friday, April 04, 2014 12:56:52 PM Nicolas Pitre wrote:
> On Fri, 4 Apr 2014, Rafael J. Wysocki wrote:
>
> > On Tuesday, April 01, 2014 11:05:49 PM Nicolas Pitre wrote:
> > > On Fri, 28 Mar 2014, Daniel Lezcano wrote:
> > >
> > > > As we know in which idle state the cpu is, we can investigate the following:
On Fri, 4 Apr 2014, Rafael J. Wysocki wrote:
> On Tuesday, April 01, 2014 11:05:49 PM Nicolas Pitre wrote:
> > On Fri, 28 Mar 2014, Daniel Lezcano wrote:
> >
> > > As we know in which idle state the cpu is, we can investigate the
> > > following:
> > >
> > > 1. when did the cpu enter the idle state? the longer the cpu is idle, the
> > > deeper it is idle
On Tuesday, April 01, 2014 11:05:49 PM Nicolas Pitre wrote:
> On Fri, 28 Mar 2014, Daniel Lezcano wrote:
>
> > As we know in which idle state the cpu is, we can investigate the following:
> >
> > 1. when did the cpu enter the idle state? the longer the cpu is idle, the
> > deeper it is idle
>
On Fri, 28 Mar 2014, Daniel Lezcano wrote:
> As we know in which idle state the cpu is, we can investigate the following:
>
> 1. when did the cpu enter the idle state? the longer the cpu is idle, the
> deeper it is idle
> 2. what is the exit latency? the greater the exit latency is, the deeper it
As we know in which idle state the cpu is, we can investigate the following:
1. when did the cpu enter the idle state? the longer the cpu is idle, the
deeper it is idle
2. what is the exit latency? the greater the exit latency is, the deeper it is
With both pieces of information, when all cpus are idle, we
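The two criteria above suggest a simple ordering among idle cpus: wake the one with the smallest exit latency, and among equals the one that entered idle most recently. A minimal userspace sketch under those assumptions (field names are invented, not taken from the patch set):

```c
#include <limits.h>

/* Invented per-cpu idle snapshot for illustration. */
struct cpu_idle_info {
	int cpu;
	unsigned int exit_latency;     /* us; 0 means the cpu is not idle */
	unsigned long long idle_since; /* entry timestamp; larger = more recent */
};

/* Pick the idle cpu that is cheapest to wake: smallest exit latency
 * first, then the most recently idled (a cpu idle for longer has likely
 * sunk into a deeper state). Returns -1 if no cpu is idle. */
static int pick_cheapest_idle(const struct cpu_idle_info *c, int n)
{
	unsigned int best_lat = UINT_MAX;
	unsigned long long best_since = 0;
	int best = -1;

	for (int i = 0; i < n; i++) {
		if (!c[i].exit_latency)
			continue; /* busy cpu, not a candidate */
		if (c[i].exit_latency < best_lat ||
		    (c[i].exit_latency == best_lat &&
		     c[i].idle_since > best_since)) {
			best_lat = c[i].exit_latency;
			best_since = c[i].idle_since;
			best = c[i].cpu;
		}
	}
	return best;
}
```

Note the tie-break direction: between two cpus in the same state, the one that idled later is assumed cheaper to wake, matching point 1 above.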