On 08/22/2012 05:10 PM, Ingo Molnar wrote:
>
> * Matthew Garrett wrote:
>
>> [...]
>>
>> Our power consumption is worse than under other operating
>> systems almost entirely because only one of our three GPU
>> drivers implements any kind of useful power management. [...]
>
> ... and because our CPU frequency and C state selection logic is
On Wed, Aug 22, 2012 at 06:02:48AM -0700, Arjan van de Ven wrote:
> On 8/21/2012 10:41 PM, Mike Galbraith wrote:
> > For my dinky dual core laptop, I suspect you're right, but for a more
> > powerful laptop, I'd expect spread/don't to be noticeable.
>
> yeah if you don't spread, you will waste
> It can be more than an irrelevance if the CPU is saturated - say
> a game running on a mobile device very commonly saturates the
> CPU. A third of the energy is spent in the CPU, sometimes more.
If the CPU is saturated you already lost. What are you going to do - the
CPU is saturated - slow it down,
* Alan Cox wrote:
> > Why? Good scheduling is useful even in isolation.
>
> For power - I suspect it's damn near irrelevant except on a
> big big machine.
With deep enough C states it's rather relevant whether we
continue to burn +50W for a couple of more milliseconds or not,
and whether we have the right information from the scheduler and
timer subsystem about how long the next idle period is expected
to be and how bursty a given task is.
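Ingo's +50W point reduces to a break-even calculation: a deep C state only saves energy when the predicted idle period is long enough to amortize the cost of entering and leaving it. A minimal sketch of that arithmetic; the power and transition figures are made up for illustration, not taken from any real CPU's tables:

```python
# Illustrative sketch: deciding whether a deep C state pays off.
# All numbers are invented; real values come from per-CPU idle-state
# tables (exit latency, target residency, state power draw).

def breakeven_us(shallow_mw, deep_mw, transition_uj):
    """Idle duration (microseconds) above which the deep state
    saves energy, given the extra energy (microjoules) spent on
    the enter/exit transition."""
    saved_mw = shallow_mw - deep_mw
    # energy saved per microsecond of residency = saved_mw * 1e-3 uJ/us
    return transition_uj / (saved_mw * 1e-3)

def pick_state(predicted_idle_us, shallow_mw=1500, deep_mw=100,
               transition_uj=300):
    """Pick the deep state only when the predicted idle period
    exceeds the break-even time."""
    if predicted_idle_us > breakeven_us(shallow_mw, deep_mw, transition_uj):
        return "deep"
    return "shallow"
```

With these invented numbers the break-even point is about 214 us, which is exactly why the thread keeps coming back to how good the scheduler's idle-duration prediction is: a wrong guess either burns power in a shallow state or eats the transition cost for nothing.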
On Tue, 2012-08-21 at 17:02 +0100, Alan Cox wrote:
> I'd like to see actual numbers and evidence, on a wide range of workloads,
> that the spread/don't spread thing is even measurable given that you've also
> got to factor in effects like completing faster and turning everything
> off. I'd *really* like
> Why? Good scheduling is useful even in isolation.
For power - I suspect it's damn near irrelevant except on a big big
machine.
Unless you've sorted out your SATA, fixed your phy handling, optimised
your desktop for wakeups and worked down the big wakeup causes one by one
it's turd polishing.
On Tue, Aug 21, 2012 at 08:23:46PM +0200, Ingo Molnar wrote:
> * Matthew Garrett wrote:
> > The scheduler is unaware of whether I care about a process
> > finishing quickly or whether I care about it consuming less
> > power.
>
> You are posing them as if the two were mutually exclusive, while
On Tue, Aug 21, 2012 at 05:59:08PM +0200, Ingo Molnar wrote:
> * Matthew Garrett wrote:
> > The scheduler's behaviour is going to have a minimal impact on
> > power consumption on laptops. Other things are much more
> > important - backlight level, ASPM state, that kind of thing.
> > So why
> > That's a fundamentally uninteresting thing for the kernel to
> > know about. [...]
>
> I disagree.
The kernel has no idea of the power architecture leading up to the plug
socket. The kernel has no idea of the policy concerns of the user.
> > [...] AC/battery is just not an important power
On Tue, Aug 21, 2012 at 05:19:10PM +0200, Ingo Molnar wrote:
> * Matthew Garrett wrote:
> > [...] AC/battery is just not an important power management
> > policy input when compared to various other things.
>
> Such as?
The scheduler's behaviour is going to have a minimal impact on power
>>> A modern kernel better know what state the system is in: on
>>> battery or on AC power.
>>
>> That's a fundamentally uninteresting thing for the kernel to
>> know about. [...]
>
> I disagree.
and I'll agree with Matthew and disagree with you ;-)
>
>> [...] AC/battery is just not an
On Tue, Aug 21, 2012 at 11:42:04AM +0200, Ingo Molnar wrote:
> * Matthew Garrett wrote:
> > [...] Putting this kind of policy in the kernel is an awful
> > idea. [...]
>
> A modern kernel better know what state the system is in: on
> battery or on AC power.
That's a fundamentally
On 21 August 2012 02:58, Alex Shi wrote:
> On 08/20/2012 11:36 PM, Vincent Guittot wrote:
>
>>> > What you want is to keep track of a per-cpu utilization level (inverse
>>> > of idle-time) and, using PJT's per-task runnable avg, see if placing the
>>> > new task on will exceed the utilization limit.
>>> > I think some of the Linaro people actually played
On Mon, Aug 20, 2012 at 10:06:06AM +0200, Ingo Molnar wrote:
> If the answer is 'yes' then there's clear cases where the kernel
> (should) automatically know the events where we switch from
> balancing for performance to balancing for power:
No. We can't identify all of these cases and we
On Mon, Aug 20, 2012 at 03:47:54PM +0000, Christoph Lameter wrote:
> So please make sure that there are obvious and easy ways to switch this
> stuff off or provide a "low latency" knob that keeps the system from
> assuming that idle time means that full performance is not needed.
That seems like
One issue that is often forgotten is that there are users who want lowest
latency and not highest performance. Our systems sit idle for most of the
time but when a specific event occurs (typically a packet is received)
they must react in the fastest way possible.
On every new generation of
On 16 August 2012 07:03, Alex Shi wrote:
> On 08/16/2012 12:19 AM, Matthew Garrett wrote:
>
>> On Mon, Aug 13, 2012 at 08:21:00PM +0800, Alex Shi wrote:
>>
>>> power aware scheduling), this proposal will adopt the
>>> sched_balance_policy concept and use 2 kinds of policy: performance, power.
>>
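The sched_balance_policy idea quoted above boils down to one switch that flips load balancing between spreading (performance) and packing (power). A minimal sketch of those semantics; the function name, cap value, and fallback behavior are invented for illustration and are not the proposal's actual code:

```python
# Hedged sketch of a two-policy balance knob: "performance" always
# spreads to the idlest CPU; "power" packs onto the lowest-numbered
# CPU that still fits under a utilization cap, falling back to
# spreading when everything is saturated.

def select_cpu(policy, cpu_utils, task_avg=0.1, cap=0.8):
    idlest = min(range(len(cpu_utils)), key=lambda i: cpu_utils[i])
    if policy == "performance":
        return idlest              # spread: always the idlest CPU
    assert policy == "power"
    for i, util in enumerate(cpu_utils):
        if util + task_avg <= cap:
            return i               # pack: first CPU with headroom
    return idlest                  # saturated: spreading is all that's left
```

The fallback branch is where the thread's disagreement lives: Arjan and Alan argue that on saturated or race-to-idle workloads the "power" branch should rarely differ from "performance" at all.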
On 15 August 2012 13:05, Peter Zijlstra wrote:
> On Mon, 2012-08-13 at 20:21 +0800, Alex Shi wrote:
>> Since there is no power saving consideration in scheduler CFS, I have a
>> very rough idea for enabling a new power saving schema in CFS.
>
> Adding Thomas, he always delights in poking holes in
On 8/20/2012 1:06 AM, Ingo Molnar wrote:
>
> There are also cases where the kernel has insufficient information
> from the hardware and from the admin about the preferred
> characteristics/policy of the system - a tweakable fallback knob
> might be provided for that sad case.
>
> The point
On Mon, 2012-08-20 at 10:06 +0200, Ingo Molnar wrote:
> > > I was really more thinking of something useful for the
> > > laptops out there, when they pull the power cord it makes
> > > sense to try and keep CPUs asleep until the one that's awake
> > > is saturated.
>
> s/CPU/core ?
I was
* Arjan van de Ven wrote:
> On 8/15/2012 8:04 AM, Peter Zijlstra wrote:
>
> > This all sounds far too complicated.. we're talking about
> > simple spreading and packing balancers without deep arch
> > knowledge and knobs, we couldn't possibly evaluate anything
> > like that.
> >
> > I was
Hi all,
I can probably add some bits to the discussion, after all I'm preparing
a talk for Plumbers that is strictly related :-). My points are not CFS
related (so feel free to ignore me), but they would probably be
interesting if we talk about power aware scheduling in Linux in general.
On
On 8/18/2012 7:33 AM, Luming Yu wrote:
> saving mode. But obviously, we need to spread as much as possible
> across all cores in another socket (to race to idle). So from the
> example above, we see a threshold that we need to reference before
> selecting one from two completely different policies: spread
On Fri, Aug 17, 2012 at 01:45:09PM -0600, Chris Friesen wrote:
> On 08/17/2012 12:47 PM, Matthew Garrett wrote:
> The datasheet for the Xeon E5 (my variant at least) says it doesn't
> do C7 so never powers down the LLC. However, as you said earlier
> once you can put the socket into C6 which
On Wed, Aug 15, 2012 at 11:02 AM, Arjan van de Ven wrote:
> On 8/15/2012 9:34 AM, Matthew Garrett wrote:
>> On Wed, Aug 15, 2012 at 01:05:38PM +0200, Peter Zijlstra wrote:
>>> On Mon, 2012-08-13 at 20:21 +0800, Alex Shi wrote:
It is based on the following assumption:
1, If there are many
On 8/16/2012 11:45 AM, Rik van Riel wrote:
>
> The c-state governor can call the scheduler code before
> putting a CPU to sleep, to indicate (1) the wakeup latency
> of the CPU, and (2) whether TLB and/or cache get invalidated.
I don't think (2) is useful really; that basically always happens
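The c-state governor idea quoted above amounts to the governor publishing each sleeping CPU's exit latency, so that task wakeup placement can prefer CPUs that are cheap to wake. A hedged sketch of that handshake; every name here is invented for illustration, none of it is an existing kernel interface:

```python
# Hedged sketch: governor records each idle CPU's exit latency;
# wakeup placement consults it against the waking task's latency
# budget. Running CPUs report zero wakeup cost.

cpu_exit_latency_us = {}  # filled in by the (hypothetical) governor

def governor_enter_idle(cpu, exit_latency_us):
    """Governor notes the chosen idle state's exit latency."""
    cpu_exit_latency_us[cpu] = exit_latency_us

def governor_exit_idle(cpu):
    """CPU is running again: nothing to wake."""
    cpu_exit_latency_us[cpu] = 0

def pick_wakeup_cpu(idle_cpus, latency_budget_us):
    """Among candidate CPUs, pick the cheapest to wake that still
    meets the task's latency budget; None if nothing qualifies."""
    ok = [c for c in idle_cpus
          if cpu_exit_latency_us.get(c, 0) <= latency_budget_us]
    if not ok:
        return None
    return min(ok, key=lambda c: cpu_exit_latency_us.get(c, 0))
```

This also ties back to Christoph's low-latency concern earlier in the thread: a task with a tight latency budget would simply never be routed to a CPU parked in a deep state.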
Hi all,
On Wed, Aug 15, 2012 at 12:05:38PM +0100, Peter Zijlstra wrote:
> >
> > sub proposal:
> > 1, If it's possible to balance task on idlest cpu not appointed 'balance
> > cpu'. If so, it may can reduce one more time balancing.
> > The idlest cpu can prefer the new idle cpu; and is the least
> *Power policy*:
>
> So how is power policy different? As Peter says,'pack more than spread
> more'.
this is ... a dubiously general statement.
for good power, at least on Intel cpus, you want to spread. Parallelism is
efficient.
the only thing you do not want to do, is wake cpus up for