Hi Alex,
On 05/20/2013 06:31 AM, Alex Shi wrote:
>
>> Which are the workloads where 'powersaving' mode hurts workload
>> performance measurably?
>>
>> I ran ebizzy on a 2 socket, 16 core, SMT 4 Power machine.
>
> Is this a 2 * 16 * 4 LCPUs PowerPC machine?
This is a 2 * 8 * 4 LCPUs PowerPC machine.
> Which are the workloads where 'powersaving' mode hurts workload
> performance measurably?
>
> I ran ebizzy on a 2 socket, 16 core, SMT 4 Power machine.
Is this a 2 * 16 * 4 LCPUs PowerPC machine?
> The power efficiency drops significantly with the powersaving policy of
> this patch,ov
On 04/30/2013 03:26 PM, Mike Galbraith wrote:
> On Tue, 2013-04-30 at 11:49 +0200, Mike Galbraith wrote:
>> On Tue, 2013-04-30 at 11:35 +0200, Mike Galbraith wrote:
>>> On Tue, 2013-04-30 at 10:41 +0200, Ingo Molnar wrote:
>>
Which are the workloads where 'powersaving' mode hurts workload
On Tue, 2013-04-30 at 11:49 +0200, Mike Galbraith wrote:
> On Tue, 2013-04-30 at 11:35 +0200, Mike Galbraith wrote:
> > On Tue, 2013-04-30 at 10:41 +0200, Ingo Molnar wrote:
>
> > > Which are the workloads where 'powersaving' mode hurts workload
> > > performance measurably?
> >
> > Well, it'
On Tue, 2013-04-30 at 11:35 +0200, Mike Galbraith wrote:
> On Tue, 2013-04-30 at 10:41 +0200, Ingo Molnar wrote:
> > Which are the workloads where 'powersaving' mode hurts workload
> > performance measurably?
>
> Well, it'll lose throughput any time there's parallel execution
> potential but i
On Tue, 2013-04-30 at 10:41 +0200, Ingo Molnar wrote:
> * Mike Galbraith wrote:
>
> > On Tue, 2013-04-30 at 07:16 +0200, Mike Galbraith wrote:
> >
> > > Well now, that's not exactly what I expected to see for AIM7 compute.
> > > Filesystem is munching cycles otherwise used for compute when load is
> > > spread across the whole box vs consolidated.
* Mike Galbraith wrote:
> On Tue, 2013-04-30 at 07:16 +0200, Mike Galbraith wrote:
>
> > Well now, that's not exactly what I expected to see for AIM7 compute.
> > Filesystem is munching cycles otherwise used for compute when load is
> > spread across the whole box vs consolidated.
>
> So AIM7
On Tue, 2013-04-30 at 07:16 +0200, Mike Galbraith wrote:
> Well now, that's not exactly what I expected to see for AIM7 compute.
> Filesystem is munching cycles otherwise used for compute when load is
> spread across the whole box vs consolidated.
So AIM7 compute performance delta boils down to:
On Fri, 2013-04-26 at 17:11 +0200, Mike Galbraith wrote:
> On Wed, 2013-04-17 at 17:53 -0400, Len Brown wrote:
> > On 04/12/2013 12:48 PM, Mike Galbraith wrote:
> > > On Fri, 2013-04-12 at 18:23 +0200, Borislav Petkov wrote:
> > >> On Fri, Apr 12, 2013 at 04:46:50PM +0800, Alex Shi wrote:
> > >>
On Wed, 2013-04-17 at 17:53 -0400, Len Brown wrote:
> On 04/12/2013 12:48 PM, Mike Galbraith wrote:
> > On Fri, 2013-04-12 at 18:23 +0200, Borislav Petkov wrote:
> >> On Fri, Apr 12, 2013 at 04:46:50PM +0800, Alex Shi wrote:
> >>> Thanks a lot for comments, Len!
> >>
> >> AFAICT, you kinda forgot
On 04/12/2013 12:48 PM, Mike Galbraith wrote:
> On Fri, 2013-04-12 at 18:23 +0200, Borislav Petkov wrote:
>> On Fri, Apr 12, 2013 at 04:46:50PM +0800, Alex Shi wrote:
>>> Thanks a lot for comments, Len!
>>
>> AFAICT, you kinda forgot to answer his most important question:
>>
>>> These numbers sugg
On Wed, Apr 17, 2013 at 09:18:28AM +0800, Alex Shi wrote:
> Sure. Currently, even if the whole socket goes to sleep, while memory
> on the node is still being accessed the cpu socket still spends some
> power on the 'uncore' part. So the further step is to reduce remote
> memory access to save more power, and
On 04/16/2013 06:24 PM, Borislav Petkov wrote:
> On Tue, Apr 16, 2013 at 08:22:19AM +0800, Alex Shi wrote:
>> testing has a little variation, but the power data is quite accurate.
>> I may change to packing tasks per cpu capacity rather than per current
>> cpu weight; that should have a better power-efficiency value.
On Tue, Apr 16, 2013 at 08:22:19AM +0800, Alex Shi wrote:
> testing has a little variation, but the power data is quite accurate.
> I may change to packing tasks per cpu capacity rather than per current
> cpu weight; that should have a better power-efficiency value.
Yeah, this probably needs careful measuring -
On 04/16/2013 07:12 AM, Borislav Petkov wrote:
> On Mon, Apr 15, 2013 at 09:50:22PM +0800, Alex Shi wrote:
>> For fairness and total-threads consideration, powersaving costs quite
>> similar energy on the kbuild benchmark, and sometimes even less.
>>
>> 17348.850 27400.458 159
On Mon, Apr 15, 2013 at 09:50:22PM +0800, Alex Shi wrote:
> For fairness and total-threads consideration, powersaving costs quite
> similar energy on the kbuild benchmark, and sometimes even less.
>
> 17348.850 27400.458 15973.776
> 13737.493 18487.24
On 04/15/2013 05:52 PM, Borislav Petkov wrote:
> On Mon, Apr 15, 2013 at 02:16:55PM +0800, Alex Shi wrote:
>> And I need to say again: the powersaving policy only takes effect when
>> the system is under-utilised. When the system goes busy it has no
>> effect; the performance-oriented policy will take over balance behaviour.
On Mon, Apr 15, 2013 at 02:16:55PM +0800, Alex Shi wrote:
> And I need to say again: the powersaving policy only takes effect when
> the system is under-utilised. When the system goes busy it has no
> effect; the performance-oriented policy will take over balance behaviour.
And AFACU your patches, you do this aut
On 04/15/2013 02:04 PM, Alex Shi wrote:
> On 04/14/2013 11:59 PM, Borislav Petkov wrote:
>> > On Sun, Apr 14, 2013 at 09:28:50AM +0800, Alex Shi wrote:
>>> >> Even if in some scenarios the total energy costs more, at least the
>>> >> avg watts dropped in those scenarios.
>> >
>> > Ok, what's wrong with x =
On 04/14/2013 11:59 PM, Borislav Petkov wrote:
> On Sun, Apr 14, 2013 at 09:28:50AM +0800, Alex Shi wrote:
>> Even if in some scenarios the total energy costs more, at least the
>> avg watts dropped in those scenarios.
>
> Ok, what's wrong with x = 32 then? So basically if you're looking at
> avg watts, yo
On Sun, Apr 14, 2013 at 09:28:50AM +0800, Alex Shi wrote:
> Even if in some scenarios the total energy costs more, at least the
> avg watts dropped in those scenarios.
Ok, what's wrong with x = 32 then? So basically if you're looking at
avg watts, you don't want to have more than 16 threads, otherwise
powe
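The distinction being argued over here, total energy versus average power, is worth making concrete. A minimal sketch, using hypothetical watts/runtime numbers rather than any figures from the thread:

```python
# Energy (joules) = average power (watts) * runtime (seconds).
# The numbers below are purely illustrative, not measurements.
def energy_joules(avg_watts, runtime_s):
    return avg_watts * runtime_s

# A policy can lower average power yet cost MORE total energy if it
# stretches the runtime -- the "race to idle" argument:
performance = energy_joules(80.0, 100.0)  # 8000 J at higher watts
powersaving = energy_joules(60.0, 150.0)  # 9000 J at lower watts
```

So a lower avg-watts column on its own does not settle whether a policy saves energy; runtime has to enter the comparison.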
On 04/14/2013 09:28 AM, Alex Shi wrote:
>> > These numbers suggest that this patch series simultaneously
>> > has a negative impact on performance and energy required
>> > to retire the workload. Why do it?
> Even if in some scenarios the total energy costs more, at least the
> avg watts dropped in those scenarios.
On 04/13/2013 01:12 AM, Borislav Petkov wrote:
> On Fri, Apr 12, 2013 at 06:48:31PM +0200, Mike Galbraith wrote:
>> (just saying there are other aspects besides joules in there)
>
> Yeah, but we don't allow any regressions in sched*, do we? Can we pick
> only the good cherries? :-)
>
Thanks for
On 04/13/2013 12:23 AM, Borislav Petkov wrote:
> On Fri, Apr 12, 2013 at 04:46:50PM +0800, Alex Shi wrote:
>> > Thanks a lot for comments, Len!
> AFAICT, you kinda forgot to answer his most important question:
>
>> > These numbers suggest that this patch series simultaneously
>> > has a negative i
On Fri, Apr 12, 2013 at 06:48:31PM +0200, Mike Galbraith wrote:
> (just saying there are other aspects besides joules in there)
Yeah, but we don't allow any regressions in sched*, do we? Can we pick
only the good cherries? :-)
--
Regards/Gruss,
Boris.
Sent from a fat crate under my desk. Fo
On Fri, 2013-04-12 at 18:23 +0200, Borislav Petkov wrote:
> On Fri, Apr 12, 2013 at 04:46:50PM +0800, Alex Shi wrote:
> > Thanks a lot for comments, Len!
>
> AFAICT, you kinda forgot to answer his most important question:
>
> > These numbers suggest that this patch series simultaneously
> > has
On Fri, Apr 12, 2013 at 04:46:50PM +0800, Alex Shi wrote:
> Thanks a lot for comments, Len!
AFAICT, you kinda forgot to answer his most important question:
> These numbers suggest that this patch series simultaneously
> has a negative impact on performance and energy required
> to retire the work
On 04/12/2013 05:02 AM, Len Brown wrote:
>> > x = 16 299.915 /43 77 259.127 /58 66
> Are you sure that powersave mode ran in 43 seconds
> when performance mode ran in 58 seconds?
Thanks a lot for comments, Len!
Will do more testing by your tool fspin. :)
powersaving using less time wh
On 04/03/2013 10:00 PM, Alex Shi wrote:
> As mentioned in the power aware scheduling proposal, Power aware
> scheduling has 2 assumptions:
> 1, race to idle is helpful for power saving
> 2, less active sched groups will reduce cpu power consumption
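As a rough illustration of assumption 2, "packing" can be sketched as a placement rule that prefers the busiest group that still has spare capacity, falling back to spreading when nothing fits. This is only a toy model of the idea, not the logic of the actual patches; the group names and numbers are made up:

```python
# Toy sketch of power-saving task placement: consolidate load onto as
# few sched groups as possible so the remaining groups can sleep.
def pick_group(groups, task_util):
    """groups: list of (name, utilisation, capacity) tuples."""
    candidates = [g for g in groups if g[1] + task_util <= g[2]]
    if not candidates:
        # System is busy: behave like the performance policy and spread.
        return min(groups, key=lambda g: g[1])
    # Otherwise pack: pick the most-utilised group with room to spare.
    return max(candidates, key=lambda g: g[1])

groups = [("pkg0", 60, 100), ("pkg1", 10, 100), ("pkg2", 0, 100)]
pick_group(groups, 30)  # -> pkg0: consolidate, pkg2 can stay idle
```

Note the fallback branch mirrors the point made later in the thread: the powersaving policy only applies while the system is under-utilised, and spreading takes over once it is busy.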
linux...@vger.kernel.org should be cc:
on Linux
Many thanks to Namhyung, PJT, Vincent and Preeti for their comments and suggestions!
This version includes the following changes:
a, remove the 3rd patch, to recover the runnable load avg recording on rt
b, check avg_idle for each CPU on wakeup burst, not only the waking CPU.
c, fix select_task_rq_fair retu