On Wed, 12 Feb 2014, Lorenzo Pieralisi wrote:

> On Wed, Feb 12, 2014 at 04:14:38PM +0000, Arjan van de Ven wrote:
> > >> sched_cpu_cache_wiped(int llc)
> > >>
> > >> that would be very nice for this; the menuidle side knows this
> > >> for some cases and thus can just call it. This would be a very
> > >> small and minimal change
>
> What do you mean by "menuidle side knows this for some cases"?
> You mean you know that some C-state entries [...]
On Mon, Feb 03, 2014 at 04:17:47PM +0000, Arjan van de Ven wrote:

[...]

> >> 1) A latency driven one
> >> 2) A performance impact one
> >>
> >> first one is pretty much the exit latency related time, sort of a
> >> "expected time to first instruction" (currently menuidle has the
> >> 99.999% worst [...]
On Tue, Feb 11, 2014 at 09:12:02AM -0800, Arjan van de Ven wrote:

> On 2/11/2014 8:41 AM, Peter Zijlstra wrote:
> > On Mon, Feb 03, 2014 at 08:17:47AM -0800, Arjan van de Ven wrote:
> > > On 2/3/2014 6:56 AM, Peter Zijlstra wrote:
> > > if there's a simple api like
> > >
> > > sched_cpu_cache_wiped(int llc)
> > >
> > > that would be very nice for this; the menuidle side knows this
> > > for some cases and thus can just call it. This would be a very
> > > small and minimal change [...]
* Peter Zijlstra wrote:

> [...]
>
> The reason Ingo took it out was that these measured numbers would
> slightly vary from boot to boot making it hard to compare
> performance numbers across boots.
>
> There's something to be said for either case I suppose.

Yeah, so we could put the parameters back by measuring it in
user-space via a nice utility in tools/, and by matching it to
relevant hardware signatures (CPU type and cache sizes), plus doing
some defaults for when we don't have any signature... possibly based
on a fuzzy search to find the closest [...]
On 2/3/2014 6:56 AM, Peter Zijlstra wrote:

> Arjan, could you have a look at teaching your Thunderpants to wrap lines
> at ~80 chars please?

I'll try but it suffers from Apple-disease

1) A latency driven one
2) A performance impact one

first one is pretty much the exit latency related time, sort of a
"expected time to first instruction" [...]
On Mon, 3 Feb 2014, Morten Rasmussen wrote:

> On Fri, Jan 31, 2014 at 06:19:26PM +0000, Nicolas Pitre wrote:
> > A cluster should map naturally to a scheduling domain. If we need to
> > wake up a CPU, it is quite obvious that we should prefer an idle CPU
> > from a scheduling domain which load [...]
On Mon, Feb 03, 2014 at 06:38:11AM -0800, Arjan van de Ven wrote:

> On 2/3/2014 4:54 AM, Morten Rasmussen wrote:
> > I'm therefore not convinced that idle state index is the right thing to
> > give the scheduler. Using a cost metric would be better in my
> > opinion.
>
> I totally agree with this, and we may need two separate cost metrics
>
> 1) A latency driven one
> 2) A performance impact one [...]
On Fri, Jan 31, 2014 at 06:19:26PM +0000, Nicolas Pitre wrote:

> Right now (on ARM at least but I imagine this is pretty universal), the
> biggest impact on information accuracy for a CPU depends on what the
> other CPUs are doing. The most obvious example is cluster power down.
> For a cluster [...]
Hi Daniel,

On 01/31/2014 03:45 PM, Daniel Lezcano wrote:

> On 01/31/2014 09:45 AM, Preeti Murthy wrote:
> > Hi,
> >
> > On Thu, Jan 30, 2014 at 10:55 PM, Daniel Lezcano wrote:
> > > On 01/30/2014 05:35 PM, Peter Zijlstra wrote:
> > > > On Thu, Jan 30, 2014 at 05:27:54PM +0100, Daniel Lezcano wrote: [...]
On Sat, 1 Feb 2014, Brown, Len wrote:

> > And your point is?
>
> It is a bad idea for an individual CPU to track the C-state
> of another CPU, which can change the cycle after it was checked.

Absolutely. And I'm far from advocating we do this either.

> We know it is a bad idea because we used [...]
> And your point is?

It is a bad idea for an individual CPU to track the C-state
of another CPU, which can change the cycle after it was checked.

We know it is a bad idea because we used to do it,
until we realized code here can easily impact the
performance critical path.

In general, it is the [...]
> Right now (on ARM at least but I imagine this is pretty universal), the
> biggest impact on information accuracy for a CPU depends on what the
> other CPUs are doing. The most obvious example is cluster power down.
> For a cluster to be powered down, all the CPUs sharing this cluster must
> also [...]
On Fri, 31 Jan 2014, Arjan van de Ven wrote:

> On 1/31/2014 7:37 AM, Daniel Lezcano wrote:
> > On 01/31/2014 04:07 PM, Arjan van de Ven wrote:
> > > > > > Hence I think this patch would make sense only with additional
> > > > > > information like exit_latency or target_residency is present
> > > > > > for the scheduler. [...]
>
> on x86 I don't care; we don't actually change these dynamically much[1].
> But if you have 1 or 2 things in mind to use, I would suggest copying
> those 2 integers instead as we go, rather than the index.
> Saves refcounting/locking etc etc nightmare as well on the other
> subsystems' datastructures.
On 01/31/2014 04:50 PM, Arjan van de Ven wrote:

> On 1/31/2014 7:37 AM, Daniel Lezcano wrote:
> > On 01/31/2014 04:07 PM, Arjan van de Ven wrote:
> > > Hence I think this patch would make sense only with additional
> > > information like exit_latency or target_residency is present for
> > > the scheduler. The idle state [...]
Hence I think this patch would make sense only with additional information
like exit_latency or target_residency is present for the scheduler. The idle
state index alone will not be sufficient.

Alternatively, can we enforce sanity on the cpuidle infrastructure to
make the index naturally ordered? If not, please explain why :-)
On 31/01/14 14:04, Daniel Lezcano wrote:

> On 01/31/2014 10:39 AM, Preeti U Murthy wrote:
> > Hi Peter,
> >
> > On 01/31/2014 02:32 PM, Peter Zijlstra wrote:
> > > On Fri, Jan 31, 2014 at 02:15:47PM +0530, Preeti Murthy wrote:
>
> If the driver does its own random mapping that will break the governor
> logic. So yes, the states are ordered, the higher the index is, the
> more you save power and the higher the exit latency is. [...]
On 01/30/2014 10:02 PM, Nicolas Pitre wrote:

> On Thu, 30 Jan 2014, Lorenzo Pieralisi wrote:
> > On Thu, Jan 30, 2014 at 05:25:27PM +0000, Daniel Lezcano wrote:
> > > On 01/30/2014 05:35 PM, Peter Zijlstra wrote:
> > > > On Thu, Jan 30, 2014 at 05:27:54PM +0100, Daniel Lezcano wrote:
> > > > > IIRC, Alex Shi sent a patchset [...]
On Fri, Jan 31, 2014 at 03:09:49PM +0530, Preeti U Murthy wrote:

> > Alternatively, can we enforce sanity on the cpuidle infrastructure to
> > make the index naturally ordered? If not, please explain why :-)
>
> The commit id 71abbbf856a0e70 says that there are SOCs which could have
> their target [...]
On 01/31/2014 09:45 AM, Preeti Murthy wrote:

> Hi,
>
> On Thu, Jan 30, 2014 at 10:55 PM, Daniel Lezcano wrote:
> > On 01/30/2014 05:35 PM, Peter Zijlstra wrote:
> > > On Thu, Jan 30, 2014 at 05:27:54PM +0100, Daniel Lezcano wrote:
> > > > struct cpuidle_state *state = &drv->states[rq->index];
> > > >
> > > > And from the state, [...]
Hi Peter,

On 01/31/2014 02:32 PM, Peter Zijlstra wrote:

> On Fri, Jan 31, 2014 at 02:15:47PM +0530, Preeti Murthy wrote:
> > >
> > > If the driver does its own random mapping that will break the governor
> > > logic. So yes, the states are ordered, the higher the index is, the more
> > > you save power and the higher the exit latency is. [...]
On Fri, Jan 31, 2014 at 02:15:47PM +0530, Preeti Murthy wrote:

> >
> > If the driver does its own random mapping that will break the governor
> > logic. So yes, the states are ordered, the higher the index is, the more
> > you save power and the higher the exit latency is.
>
> The above point holds [...]
Hi,

On Thu, Jan 30, 2014 at 10:55 PM, Daniel Lezcano wrote:

> On 01/30/2014 05:35 PM, Peter Zijlstra wrote:
> >
> > On Thu, Jan 30, 2014 at 05:27:54PM +0100, Daniel Lezcano wrote:
> > >
> > > struct cpuidle_state *state = &drv->states[rq->index];
> > >
> > > And from the state, we have the following information [...]
On Thu, 30 Jan 2014, Lorenzo Pieralisi wrote:

> On Thu, Jan 30, 2014 at 05:25:27PM +0000, Daniel Lezcano wrote:
> > On 01/30/2014 05:35 PM, Peter Zijlstra wrote:
> > > On Thu, Jan 30, 2014 at 05:27:54PM +0100, Daniel Lezcano wrote:
> > > > IIRC, Alex Shi sent a patchset to improve the choosing of the [...]
On Thu, Jan 30, 2014 at 05:25:27PM +0000, Daniel Lezcano wrote:

> On 01/30/2014 05:35 PM, Peter Zijlstra wrote:
> > On Thu, Jan 30, 2014 at 05:27:54PM +0100, Daniel Lezcano wrote:
> > > struct cpuidle_state *state = &drv->states[rq->index];
> > >
> > > And from the state, we have the following information [...]
On 01/30/2014 05:35 PM, Peter Zijlstra wrote:

> On Thu, Jan 30, 2014 at 05:27:54PM +0100, Daniel Lezcano wrote:
> > struct cpuidle_state *state = &drv->states[rq->index];
> >
> > And from the state, we have the following information:
> >
> > struct cpuidle_state {
> >
> >         [ ... ]
> >
> >         unsigned int exit_latency;  /* in US */
> >         int [...]
On 01/30/2014 04:31 PM, Peter Zijlstra wrote:

> On Thu, Jan 30, 2014 at 03:09:22PM +0100, Daniel Lezcano wrote:
> > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> > index 90aef084..130debf 100644
> > --- a/kernel/sched/sched.h
> > +++ b/kernel/sched/sched.h
> > @@ -654,6 +654,9 @@ struct rq {
> >  #endif
> >
> >  	struct sched_avg avg;
> > +#ifd [...]