The changelog missed mentioning the introduction of the sd_asym per-CPU sched
domain.
Apologies for this. The patch, with a changelog that mentions sd_asym, is
pasted below.
Regards
Preeti U Murthy
---
sched: Remove unnecessary iteration over sched domains to update
Hi Kamalesh,
On 10/30/2013 02:53 PM, Kamalesh Babulal wrote:
> Hi Preeti,
>
>> nr_busy_cpus parameter is used by nohz_kick_needed() to find out the number
>> of busy cpus in a sched domain which has SD_SHARE_PKG_RESOURCES flag set.
>> Therefore instead of updating nr_
bc_cpu is woken up by
an IPI so as to queue the above mentioned hrtimer on itself.
This patch is compile tested only.
Signed-off-by: Preeti U Murthy
---
include/linux/clockchips.h | 4 +
kernel/time/clockevents.c | 8 +-
kernel/time/tick-broadcast.c | 157
Hi Ben,
On 12/13/2013 10:47 AM, Benjamin Herrenschmidt wrote:
> On Fri, 2013-12-13 at 09:49 +0530, Preeti U Murthy wrote:
>> On some architectures, in certain CPU deep idle states the local timers stop.
>> An external clock device is used to wakeup these CPUs. The
Hi,
The patch needed some compile-time fixes but was accidentally mailed
out before they were done. Below is the right patch. Apologies for the same.
Thanks
Regards
Preeti U Murthy
-
time: Support in tick broadcast
en though they degrade with time
and sgs->utils accounts for them. Therefore,
for core1 and core2, the sgs->utils will be slightly above 100 and the
above condition will fail, thus failing them as candidates for
group_leader, since threshold_util will be 200.
This phenomenon is seen for bala
; it'll likely stack the whole thing on a CPU or two, if so, it'll hurt)
At this point, I would like to raise one issue.
*Is the goal of the power aware scheduler improving power efficiency of
the scheduler or a compromise on the power efficiency but definitely a
decrease in power consumption, since it
flexible enough to do this and
that we must cash in on it.
Thanks
Regards
Preeti U Murthy
>
> Vincent
>
> On 26 March 2013 15:42, Peter Zijlstra wrote:
>> On Tue, 2013-03-26 at 15:03 +0100, Vincent Guittot wrote:
>>>> But ha! here's your NO_HZ link.. but doe
the following points again.
Thanks
Regards
Preeti U Murthy
On 04/23/2013 01:27 AM, Vincent Guittot wrote:
> On Monday, 22 April 2013, Preeti U Murthy wrote:
>> Hi Vincent,
>>
>> On 04/05/2013 04:38 PM, Vincent Guittot wrote:
>>> Peter,
>>>
>>> Aft
Hi Alex,
I have one point below.
On 04/23/2013 07:53 AM, Alex Shi wrote:
> Thank you, Preeti and Vincent, for discussing the power aware scheduler in
> detail! I believe this open discussion is helpful for reaching a more
> comprehensive solution. :)
>
>> Hi Preeti,
>>
>
ing.
> + *
> + * When enqueue a new forked task, the se->avg.decay_count == 0, so
> + * we bypass update_entity_load_avg(), use avg.load_avg_contrib initial
> + * value: se->load.weight.
>*/
> if (unlikely(se->avg.decay_count <= 0)) {
>
Hi Alex,
You can add my Reviewed-by for the below patch.
Thanks
Regards
Preeti U Murthy
On 04/04/2013 07:30 AM, Alex Shi wrote:
> The cpu's utilization is a measure of how busy the cpu is.
> util = cpu_rq(cpu)->avg.runnable_avg_sum * SCHED_POWER_SCALE
>
Hi Alex,
You can add my Reviewed-by for the below patch.
Thanks
Regards
Preeti U Murthy
On 04/04/2013 07:30 AM, Alex Shi wrote:
> In power aware scheduling, we don't want to balance 'prefer_sibling'
> groups just because local group has capacity.
> If the local group has no tasks at
Hi Alex,
You might want to do the below for struct sched_entity also?
AFAIK, struct sched_entity has struct sched_avg under CONFIG_SMP.
Regards
Preeti U Murthy
On 05/06/2013 07:15 AM, Alex Shi wrote:
> The following variables were covered under CONFIG_SMP in struct cfs_rq.
> but similar ru
regression.
The below patch is a substitute for patch 7.
---
sched: Modify effective_load() to use runnable load average
From: Preeti U Murthy
The runqueue weight distribution should update the runnable load average
forth another question, should we modify wake_affine()
to pass the runnable load average of the waking up task to effective_load().
What do you think?
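As a minimal sketch of the question being raised (a userspace toy, not the kernel's effective_load(); the struct and function names here are made up for illustration):

```c
#include <assert.h>

/* Toy model: estimate the load a CPU would see if the waking task
 * landed on it, using either the task's full weight or its runnable
 * load average contribution. Names are illustrative only. */
struct toy_se {
	double weight;            /* static load weight */
	double load_avg_contrib;  /* runnable load average contribution */
};

static double toy_effective_add(double cpu_runnable_load,
				const struct toy_se *p,
				int use_runnable_avg)
{
	double task_load = use_runnable_avg ? p->load_avg_contrib
					    : p->weight;

	return cpu_runnable_load + task_load;
}
```

A mostly-sleeping task has a contrib far below its weight, so the two estimates can diverge widely, which is the motivation for passing the runnable load average instead.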
Thanks
Regards
Preeti U Murthy
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message
cfs_rq
under CONFIG_SMP, how will tg->load_avg get updated? tg->load_avg is not
SMP dependent.
tg->load_avg in turn is used to decide the CPU shares of the sched
entities on the processor right?
Thanks
Regards
Preeti U Murthy
ed tasks.
>> enqueue_task_fair->update_entity_load_avg() during the second
>> iteration. But __update_entity_load_avg() in update_entity_load_avg()
>>
>
> When 'enqueue_task_fair->update_entity_load_avg()' runs during the
> second iteration, the se is changed.
> That is dif
Hi Alex,
On 03/21/2013 01:13 PM, Alex Shi wrote:
> On 03/20/2013 12:57 PM, Preeti U Murthy wrote:
>> Neither core will be able to pull the task from the other to consolidate
>> the load because the rq->util of t2 and t4, on which no process is
>> running, continue to show
On 03/21/2013 02:57 PM, Alex Shi wrote:
> On 03/21/2013 04:41 PM, Preeti U Murthy wrote:
>>>>
>> Yes, I did find this behaviour on a 2 socket, 8 core machine very
>> consistently.
>>
>> rq->util cannot go to 0, after it has begun accumulating load right?
Hi,
On 03/22/2013 07:00 AM, Alex Shi wrote:
> On 03/21/2013 06:27 PM, Preeti U Murthy wrote:
>>>> did you close all of background system services?
>>>> In theory the rq->avg.runnable_avg_sum should be zero if there is no
>>>> task a bit long, otherwise t
merged into one, since both of
them have the common goal of packing small tasks.
Thanks
Regards
Preeti U Murthy
On 03/22/2013 05:55 PM, Vincent Guittot wrote:
> Hi,
>
> This patchset takes advantage of the new per-task load tracking that is
> available in the kernel for packi
On 02/19/2014 12:10 AM, Thomas Gleixner wrote:
> On Tue, 18 Feb 2014, Preeti Murthy wrote:
>
>> Hi Thomas,
>>
>> With regard to the patch: "tick: Clear broadcast pending bit when
>> switching to oneshot"
>> isn't BROADCAST_EXIT called at least after in
broadcast fails we should not be tracing either.
2. Moving the trace after the cpuidle_enter() call is wrong.
So I would suggest the patch at the end of this mail as the alternative
to this one so as to get around the patching conflict.
Thanks
Regards
Preeti U Murthy
>
> Thomas,
since you would have done BROADCAST_ENTRY, and if this call
to the broadcast framework succeeds, you will have to do a
BROADCAST_EXIT irrespective of whether the driver could put the CPU into that
idle state or not. So even if cpuidle_enter() fails, you will need to do
a clockevents_notify(CLOCK_EVT
e
> 3. reflect the idle state
>
> The cpuidle_idle_call calls these three functions to implement the main
> idle entry function.
>
> Signed-off-by: Daniel Lezcano
> Acked-by: Nicolas Pitre
> ---
>
> ChangeLog:
>
> V3:
> * moved broadcast timer outside of cpuidle_enter() a
r sharing) but it can become complex if we
> want to add more.
What if we want to add arch specific flags to the NUMA domain? Currently
with Peter's patch: https://lkml.org/lkml/2013/11/5/239 and this patch,
the arch can modify the sd flags of the topology levels till just before
the NUMA domain.
On 01/07/2014 03:20 PM, Peter Zijlstra wrote:
> On Tue, Jan 07, 2014 at 03:10:21PM +0530, Preeti U Murthy wrote:
>> What if we want to add arch specific flags to the NUMA domain? Currently
>> with Peter's patch: https://lkml.org/lkml/2013/11/5/239 and this patch,
>> the arch ca
On 01/07/2014 04:43 PM, Peter Zijlstra wrote:
> On Tue, Jan 07, 2014 at 04:09:39PM +0530, Preeti U Murthy wrote:
>> On 01/07/2014 03:20 PM, Peter Zijlstra wrote:
>>> On Tue, Jan 07, 2014 at 03:10:21PM +0530, Preeti U Murthy wrote:
>>>> What if we want to add arch spe
On 01/07/2014 06:01 PM, Vincent Guittot wrote:
> On 7 January 2014 11:39, Preeti U Murthy wrote:
>> On 01/07/2014 03:20 PM, Peter Zijlstra wrote:
>>> On Tue, Jan 07, 2014 at 03:10:21PM +0530, Preeti U Murthy wrote:
>>>> What if we want to add arch specific flags
endif
> + { cpu_cpu_mask, SD_INIT_NAME(DIE) },
> + { NULL, },
> +};
> +
> +struct sched_domain_topology_level *sched_domain_topology = default_topology;
> +
> +#define for_each_sd_topology(tl) \
> + for (tl = sched_domain_topology; tl->mask; t
return 0*SD_ASYM_PACKING;
> -}
> -
> /*
> * Initializers for schedule domains
> * Non-inlined to reduce accumulated stack pressure in build_sched_domains()
> @@ -6018,7 +6013,6 @@ sd_init(struct sched_domain_topology_level *tl, int cpu)
> if (sd->fla
.
I don't see this flag being set either in sd_init() or in
default_topology[]. Should not the default_topology[] flag setting
routines set this flag at every level of sched domain along with other
topology flags, unless the arch wants to override it?
Regards
Preeti U Murthy
> This flag is part of
On 03/18/2014 05:14 PM, Kirill Tkhai wrote:
>
>
> 18.03.2014, 15:08, "Preeti Murthy" :
>> On Sat, Mar 15, 2014 at 3:44 AM, Kirill Tkhai wrote:
>>
>>> {inc,dec}_rt_tasks used to count entities which are directly queued
>>> on rt_rq. If an en
On 03/19/2014 03:22 PM, Vincent Guittot wrote:
> On 19 March 2014 07:21, Preeti U Murthy wrote:
>> Hi Vincent,
>>
>> On 03/18/2014 11:26 PM, Vincent Guittot wrote:
>>> A new flag SD_SHARE_POWERDOMAIN is created to reflect whether groups of CPUs
>>> i
pci_root_bus_resources(int bus, struct list_head *resources);
>
> -#ifdef CONFIG_SMP
> -#define mc_capable() ((boot_cpu_data.x86_max_cores > 1) && \
> - (cpumask_weight(cpu_core_mask(0)) != nr_cpu_ids))
> -#define smt_capable()(
Hi Daniel,
Thank you very much for the review.
On 02/11/2014 03:46 PM, Daniel Lezcano wrote:
> On 02/07/2014 09:06 AM, Preeti U Murthy wrote:
>> From: Thomas Gleixner
>>
>> On some architectures, in certain CPU deep idle states the local
>> timers stop.
>>
the patch which should fix this. This is based on top of tip-tree.
Thanks
Regards
Preeti U Murthy
-
cpuidle/pseries: Fix fallout caused due to cleanup in pseries cpuidle backend
driver
From: Preeti U Murthy
C
else
> - entered_state = cpuidle_enter_state(dev, drv, next_state);
> -
> - if (broadcast)
> - clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_EXIT, >cpu);
> + entered_state = cpuidle_enter(drv, dev, next_state);
>
> trace_cpu_idle_rcuidle(PWR_
c.. so that we can expect the
governor and driver to take better decisions about entry and exit into
idle states. Is this the advantage we hope to begin with?
Thanks
Regards
Preeti U Murthy
>
> Signed-off-by: Daniel Lezcano
> Acked-by: Nicolas Pitre
> ---
ling functions
into it would result in some confusion and add more code than it is
meant to handle. This will avoid having to add comments in the
cpuidle_idle_call() function as currently being done in Patch[5/5], to
clarify what each function is meant to do.
So IMO, Patches[1/5] and [2/5] by themselves a
Hi,
On 02/13/2014 01:15 PM, Alex Shi wrote:
> On 02/11/2014 07:11 PM, Daniel Lezcano wrote:
>> On 02/10/2014 10:24 AM, Preeti Murthy wrote:
>>> HI Daniel,
>>>
>>> Isn't the only scenario where another cpu can put an idle task on
>>> our runqueue,
>
Hi Daniel,
On 02/11/2014 05:37 PM, Daniel Lezcano wrote:
> On 02/10/2014 11:04 AM, Preeti Murthy wrote:
>> Hi Daniel,
>>
>> On Fri, Feb 7, 2014 at 4:40 AM, Daniel Lezcano
>> wrote:
>>> The idle_balance modifies the idle_stamp field of the rq, making this
Hi Nicolas,
You will have to include the below patch with yours. You
could squash the two, I guess; I have added the changelog
just for clarity. You also might want to change the subject to
cpuidle/powernv. It gives a better picture.
Thanks
Regards
Preeti U Murthy
cpuidle/powernv: Add
Hi Nicolas,
On 02/07/2014 06:47 AM, Nicolas Pitre wrote:
> On Thu, 6 Feb 2014, Preeti U Murthy wrote:
>
>> Hi Daniel,
>>
>> On 02/06/2014 09:55 PM, Daniel Lezcano wrote:
>>> Hi Nico,
>>>
>>>
>>> On 6 February 2014 14:16, Nico
local_irq_enable() since we are in the call path of
cpuidle driver and that explicitly enables irqs on exit from
idle states.
On 02/07/2014 06:47 AM, Nicolas Pitre wrote:
> On Thu, 6 Feb 2014, Preeti U Murthy wrote:
>
>> Hi Daniel,
>>
>> On 02/06/2014 09:55 PM, Daniel Le
on the idea discussed here:
http://www.kernelhub.org/?p=2=399516
Changes in V4:
1. Cleared the standby CPU from the oneshot mask. As a result PATCH 3/3
was simplified.
2. Fixed compile time warnings.
---
Preeti U Murthy (2):
time: Change the return type of clockevents_notify() to integer
. For such
a CPU, the BROADCAST_ENTER notification has to fail indicating that its clock
device cannot be shutdown. To make way for this support, change the return
type of tick_broadcast_oneshot_control() and hence clockevents_notify() to
indicate such scenarios.
Signed-off-by: Preeti U Murthy
we are in no further position to take a decision on an alternative
idle state to enter into.
Signed-off-by: Preeti U Murthy
---
drivers/cpuidle/cpuidle.c | 14 --
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
as well by moving the hrtimer on to the CPU handling the
CPU_DEAD
notification.
Signed-off-by: Preeti U Murthy
[Added Changelog and code to handle reprogramming of hrtimer]
---
include/linux/clockchips.h | 9 +++
kernel/time/Makefile | 2 -
kernel/time/tick
Hi Deepthi,
On 02/07/2014 03:15 PM, Deepthi Dharwar wrote:
> Hi Preeti,
>
> Thanks for the patch.
>
> On 02/07/2014 12:31 PM, Preeti U Murthy wrote:
>> Hi Nicolas,
>>
>> Find below the patch that will need to be squashed with this one.
>> This patch
Hi Nicolas,
On 02/07/2014 04:18 PM, Nicolas Pitre wrote:
> On Fri, 7 Feb 2014, Preeti U Murthy wrote:
>
>> Hi Nicolas,
>>
>> On 02/07/2014 06:47 AM, Nicolas Pitre wrote:
>>>
>>> What about creating arch_cpu_idle_enter() and arch_cpu_idle_exit() in
The broadcast timer registration has to be done only when
GENERIC_CLOCKEVENTS_BROADCAST and TICK_ONESHOT config options are enabled.
Also fix max_delta_ticks value for the pseudo clock device.
Reported-by: Fengguang Wu
Signed-off-by: Preeti U Murthy
Cc: Thomas Gleixner
Cc: Ingo Molnar
Hi Thomas,
On 02/07/2014 11:27 PM, Thomas Gleixner wrote:
> On Fri, 7 Feb 2014, Preeti U Murthy wrote:
>
>> The broadcast timer registration has to be done only when
>> GENERIC_CLOCKEVENTS_BROADCAST and TICK_ONESHOT config options are enabled.
>
> Then we should com
Hi David,
I have sent out a revised patch on
https://lkml.org/lkml/2014/2/9/2. Can you let me
know if this works for you?
Thanks
Regards
Preeti U Murthy
On 02/09/2014 01:01 PM, David Rientjes wrote:
> On Fri, 7 Feb 2014, Preeti U Murthy wrote:
>
>> The broadcast timer regi
deep
idle states on powerpc.
The patchset carries a RESEND tag since nothing has changed from
the previous post except for an added config condition around
tick_broadcast(), which handles sending broadcast IPIs, and the update in
the cover letter.
---
Preeti U Murthy (1):
cpuidle
[Functions renamed to tick_broadcast* and Changelog modified by
Preeti U. Murthy]
Signed-off-by: Preeti U. Murthy
Acked-by: Geoff Levand [For the PS3 part]
---
arch/powerpc/include/asm/smp.h |2 +-
arch/powerpc/include/asm/time.h |1 +
arch/powerpc/kernel/smp.c
From: Preeti U Murthy
Split timer_interrupt(), which is the local timer interrupt handler on ppc,
into routines called during regular interrupt handling and __timer_interrupt(),
which takes care of running local timers and collecting time related stats.
This will enable callers interested only
slots are available).
So, implement the functionality of PPC_MSG_CALL_FUNC_SINGLE using
PPC_MSG_CALL_FUNC itself and release its IPI message slot, so that it can be
used for something else in the future, if desired.
Signed-off-by: Srivatsa S. Bhat
Signed-off-by: Preeti U. Murthy
Acked-by: Geoff
Hi Peter,
On 02/07/2014 06:11 PM, Peter Zijlstra wrote:
> On Fri, Feb 07, 2014 at 05:11:26PM +0530, Preeti U Murthy wrote:
>> But observe the idle state "snooze" on powerpc. The power that this idle
>> state saves is through the lowering of the thread priority of th
us of the lower domains. As far as I see, this patch does not change
these assumptions. Hence I am unable to imagine a scenario when the
parent might not include all cpus of its children domain. Do you have
such a scenario in mind which can arise due to this patch ?
Thanks
Regards
Preeti U Murthy
mode to
periodic.
Signed-off-by: Preeti U Murthy
---
include/linux/clockchips.h | 4 -
kernel/time/clockevents.c | 8 +-
kernel/time/tick-broadcast.c | 180 ++
kernel/time/tick-internal.h | 8 +-
4 files changed, 173 insertions(+), 27
Hi Soren,
On 09/13/2013 03:50 PM, Preeti Murthy wrote:
> Hi,
>
> So the patch that Daniel points out http://lwn.net/Articles/566270/ ,
> enables broadcast functionality
> without using an external global clock device. It uses one of the per cpu
> clock devices to en
Hi Soren,
On 09/13/2013 09:53 PM, Sören Brinkmann wrote:
> Hi Preeti,
> Thanks for the explanation but now I'm a little confused. That's a lot of
> details and I'm lacking the in depth knowledge to fully understand
> everything.
>
> Is it correct to say, that your patch seri
better power numbers can be obtained or at least the default power
efficiency of the kernel will show up.
However adding the new patchsets like packing small tasks, heterogeneous
scheduling, power aware scheduling etc.. *should* then yield good and
consistent power savings since they now stand o
vatsa S. Bhat and
Vaidyanathan Srinivasan for all their comments and suggestions so far.
---
Preeti U Murthy (4):
cpuidle/ppc: Split timer_interrupt() into timer handling and interrupt
handling routines
cpuidle/ppc: Add basic infrastructure to support the broadcast framework
on ppc
c
are available).
So, implement the functionality of PPC_MSG_CALL_FUNC using
PPC_MSG_CALL_FUNC_SINGLE itself and release its IPI message slot, so that it
can be used for something else in the future, if desired.
Signed-off-by: Srivatsa S. Bhat
Signed-off-by: Preeti U Murthy
---
arch/powerpc/include
() into routines performed during regular
interrupt handling and __timer_interrupt(), which takes care of running local
timers and collecting time related stats. Now on a broadcast ipi, call
__timer_interrupt().
Signed-off-by: Preeti U Murthy
---
arch/powerpc/kernel/time.c | 69
[Changelog modified by pre...@linux.vnet.ibm.com]
Signed-off-by: Preeti U Murthy
---
arch/powerpc/include/asm/smp.h | 3 ++-
arch/powerpc/include/asm/time.h | 1 +
arch/powerpc/kernel/smp.c | 19 +++
arch/powerpc/kernel/time.c | 4
g woken up from the broadcast ipi, set the
decrementers_next_tb
to now before calling __timer_interrupt().
Signed-off-by: Preeti U Murthy
---
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/time.h | 1 +
arch/powerpc/kernel/time.c | 69
cycle repeats.
Protect the region of nomination, de-nomination and the check for existence
of a broadcast cpu with a lock to ensure synchronization between them.
[1] tick_handle_oneshot_broadcast() or tick_handle_periodic_broadcast().
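A userspace sketch of that locking scheme, with a pthread mutex standing in for the kernel lock (the names and helpers here are illustrative, not the patch's actual code):

```c
#include <assert.h>
#include <pthread.h>

#define NO_BC_CPU (-1)

/* One lock guards nomination, de-nomination and the existence check,
 * so no CPU can observe a broadcast CPU that is concurrently
 * de-nominating itself. */
static pthread_mutex_t bc_lock = PTHREAD_MUTEX_INITIALIZER;
static int bc_cpu = NO_BC_CPU;

/* Returns 1 if @cpu became the broadcast CPU, 0 if one already exists. */
static int nominate_bc_cpu(int cpu)
{
	int nominated = 0;

	pthread_mutex_lock(&bc_lock);
	if (bc_cpu == NO_BC_CPU) {
		bc_cpu = cpu;
		nominated = 1;
	}
	pthread_mutex_unlock(&bc_lock);
	return nominated;
}

static void denominate_bc_cpu(int cpu)
{
	pthread_mutex_lock(&bc_lock);
	if (bc_cpu == cpu)
		bc_cpu = NO_BC_CPU;
	pthread_mutex_unlock(&bc_lock);
}
```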
Signed-off-by: Preeti U Murthy
---
arch/powerpc/include/asm/time.h
was about to fire on it. Therefore the newly nominated broadcast cpu
should set the broadcast hrtimer on itself to expire immediately so as to not
miss wakeups under such scenarios.
Signed-off-by: Preeti U Murthy
---
arch/powerpc/include/asm/time.h |1 +
arch/powerpc/kernel/time.c
ng without any pre-conditions.
In a single socket machine, there will be a CPU domain encompassing the
socket and the MC domain will encompass a core. nohz_idle load balancer
will kick in if both the threads in the core have tasks running on them.
This is fair enough because the threads share th
is done to know the total number of
busy cpus at a sched domain level which has SD_SHARE_PKG_RESOURCES
set, and not at a sched group level.
So why not move nr_busy to struct sched_domain and have the below
patch, which just updates this parameter for the sched domain, sd_busy?
This wil
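The bookkeeping being proposed can be sketched as a toy userspace model, with the busy-CPU count kept on a single domain and updated once per busy/idle transition instead of by iterating over domain levels (names mirror the proposal but are purely illustrative):

```c
#include <assert.h>

#define SD_SHARE_PKG_RESOURCES 0x1

/* Toy sched domain carrying the busy-CPU count directly. */
struct toy_sched_domain {
	int flags;
	int nr_busy_cpus;
};

/* Update once, on the one domain (sd_busy) that shares package
 * resources, rather than walking every domain level. */
static void set_cpu_sd_state_busy(struct toy_sched_domain *sd_busy)
{
	if (sd_busy && (sd_busy->flags & SD_SHARE_PKG_RESOURCES))
		sd_busy->nr_busy_cpus++;
}

static void set_cpu_sd_state_idle(struct toy_sched_domain *sd_busy)
{
	if (sd_busy && (sd_busy->flags & SD_SHARE_PKG_RESOURCES))
		sd_busy->nr_busy_cpus--;
}
```

nohz_kick_needed() would then read nr_busy_cpus from the one cached sd_busy pointer instead of summing over groups.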
On 10/23/2013 09:30 AM, Preeti U Murthy wrote:
> Hi Peter,
>
> On 10/23/2013 03:41 AM, Peter Zijlstra wrote:
>> On Mon, Oct 21, 2013 at 05:14:42PM +0530, Vaidyanathan Srinivasan wrote:
>>> kernel/sched/fair.c | 19 +--
>>> 1 file chang
y does. sd_busy therefore is irrelevant for asymmetric load
balancing.
Regards
Preeti U Murthy
START_PATCH---
sched: Fix nohz_kick_needed()
---
kernel/sched/core.c | 4
kernel/sched/fair.c | 40 ++--
flags);
>> env.flags |= LBF_ALL_PINNED;
>> +if (share_pkg_res &&
>> + cpumask_intersects(cpus,
>> +to_cpumask(group->
Hi Vincent,
I have addressed your comments and below is the fresh patch. This patch
applies on PATCH 2/3 posted in this thread.
Regards
Preeti U Murthy
sched: Remove unnecessary iterations over sched domains to update/query
nr_busy_cpus
From: Preeti U Murthy
nr_busy_cpus parameter is used
flags);
>> env.flags |= LBF_ALL_PINNED;
>> +if (share_pkg_res &&
>> +cpumask_intersects(cpus,
>> +to_cpumask(group
kernbench there was no significant change in the observation.
I will try patch V2 and let you know the results.
Regards
Preeti U Murthy
9.98
16 20.46
Let me know if you want me to profile any of these runs for specific
statistics.
Regards
Preeti U Murthy
On 07/20/2013 12:58 AM, Jason Low wrote:
> On Fri, 2013-07-19 at 16:54 +0530, Preeti U Murthy wrote:
>> Hi Jason,
>>
e fundamental issue that
we need to resolve in the steps towards better power savings through
the scheduler.
Regards
Preeti U Murthy
> It would be good to have even a high level agreement on the path forward
> where the expectation first and foremost is to take advantage of the
> scheduler's ideal position to drive the power management while
> simplifying the power management code.
>
> Thanks,
> Morten
>
Reg
would need certain cpus in that domain idle.
3. Are the domains in which we pack tasks power gated?
4. Will there be significant performance drop by packing? Meaning do the
tasks share cpu resources? If they do there will be severe contention.
The approach I suggest therefore would be to get the scheduler well
Hi Rafael,
On 06/08/2013 07:32 PM, Rafael J. Wysocki wrote:
> On Saturday, June 08, 2013 12:28:04 PM Catalin Marinas wrote:
>> On Fri, Jun 07, 2013 at 07:08:47PM +0100, Preeti U Murthy wrote:
>>> On 06/07/2013 08:21 PM, Catalin Marinas wrote:
>>>> I think you
Hi Catalin,
On 06/08/2013 04:58 PM, Catalin Marinas wrote:
> On Fri, Jun 07, 2013 at 07:08:47PM +0100, Preeti U Murthy wrote:
>> On 06/07/2013 08:21 PM, Catalin Marinas wrote:
>>> I think you are missing Ingo's point. It's not about the scheduler
>>> complying wit
Hi David,
On 06/07/2013 11:06 PM, David Lang wrote:
> On Fri, 7 Jun 2013, Preeti U Murthy wrote:
>
>> Hi Catalin,
>>
>> On 06/07/2013 08:21 PM, Catalin Marinas wrote:
>
>>> Take the cpuidle example, it uses the load average of the CPUs,
>>> howe
he scheduler or a compromise on the power efficiency but definitely a
>> decrease in power consumption, since it is the user who has decided to
>> prioritise lower power consumption over performance* ?
>>
>
> It could be one of the reasons for this feature, but I would like to
pdate the load itself, it needs to
reflect full utilization. In __update_entity_runnable_avg both
runnable_avg_period and runnable_avg_sum get equally incremented for a
forked task since it is runnable. Hence where is the chance for the load
to get incremented in steps?
In sleeping tasks since ru
e ups have load updates to
do. Forked tasks just got created; they have no load to "update" but only
to "create". This I feel is rightly done in sched_fork by this patch.
So ideally I don't think we should have any comment here. It does not
sound relevant.
>*/
> if (u
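The lockstep behaviour argued for above can be checked with a toy model of the decayed-sum accounting (the decay constant approximates the per-entity load tracking y with y^32 = 1/2; everything else here is illustrative, not kernel code):

```c
#include <assert.h>

struct toy_avg {
	double runnable_sum;  /* decayed time spent runnable */
	double period;        /* decayed total time observed */
};

/* One ~1ms sample: both accumulators decay by y; each gains one unit
 * of elapsed time, the sum only while the task is runnable. */
static void toy_update(struct toy_avg *a, int runnable)
{
	const double y = 0.9785720620; /* y^32 ~= 0.5 */

	a->runnable_sum = a->runnable_sum * y + (runnable ? 1.0 : 0.0);
	a->period = a->period * y + 1.0;
}

static double toy_load_fraction(const struct toy_avg *a)
{
	return a->period > 0.0 ? a->runnable_sum / a->period : 0.0;
}
```

For a task runnable from creation, runnable_sum and period stay equal, so the fraction is 1 from the very first sample: there is no step-wise ramp-up.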
smaller.
2. Balance on nr_running only if you detect burst wakeups.
Alex, you had released a patch earlier which could detect this, right?
Instead of balancing on nr_running all the time, why not balance on it
only if burst wakeups are detected? By doing so you ensure that
nr_running as a metric for load balancing is used when it is right to do
so, and the reason to use it also gets well documented.
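A sketch of that suggestion; the threshold, struct and field names are made up for illustration:

```c
#include <assert.h>

#define BURST_WAKEUP_THRESHOLD 8 /* illustrative cut-off */

struct toy_rq {
	int nr_running;
	double runnable_load_avg;
	int wakeups_last_tick; /* burst detector input */
};

/* Use nr_running as the balance metric only during wakeup bursts,
 * when the runnable load average lags behind reality; otherwise
 * stick with the load average. */
static double balance_metric(const struct toy_rq *rq)
{
	if (rq->wakeups_last_tick >= BURST_WAKEUP_THRESHOLD)
		return (double)rq->nr_running;
	return rq->runnable_load_avg;
}
```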
Regards
Preeti U Murthy
Hi Peter,
On 04/26/2013 03:48 PM, Peter Zijlstra wrote:
> On Wed, Mar 27, 2013 at 03:51:51PM +0530, Preeti U Murthy wrote:
>> Hi,
>>
>> On 03/26/2013 05:56 PM, Peter Zijlstra wrote:
>>> On Fri, 2013-03-22 at 13:25 +0100, Vincent Guittot wrote:
>>&
al load avg of new task same as its load
-* in order to avoid brust fork make few cpu too heavier
-*/
- if (flags & ENQUEUE_NEWTASK)
- se->avg.load_avg_contrib = se->load.weight;
cfs_rq->runnable_load_avg += se->avg.load_avg_contrib;
e of per entity load tracking can
be done without considering the real time tasks?
Regards
Preeti U Murthy
>
what is the right
metric to use here.
Refer to this discussion: https://lkml.org/lkml/2012/10/29/448
Regards
Preeti U Murthy
the menu governor criteria to be chosen as the
next idle state.
This patch adds the code to indicate that a valid cpu idle state could not be
chosen by the menu governor and reports back to the arch so that it can take some
default action.
Signed-off-by: Preeti U Murthy
---
drivers/cpuidle/cpuidle.c
Hi Srivatsa,
On 01/14/2014 12:30 PM, Srivatsa S. Bhat wrote:
> On 01/14/2014 11:35 AM, Preeti U Murthy wrote:
>> On PowerPC, in a particular test scenario, all the cpu idle states were
>> disabled.
>> Inspite of this it was observed that the idle state count of the sha
On 01/14/2014 01:07 PM, Srivatsa S. Bhat wrote:
> On 01/14/2014 12:30 PM, Srivatsa S. Bhat wrote:
>> On 01/14/2014 11:35 AM, Preeti U Murthy wrote:
>>> On PowerPC, in a particular test scenario, all the cpu idle states were
>>> disabled.
>>> In spite of this
ick a broadcast CPU, instead of having a dedicated one.
2. Remove the constraint of having to disable tickless idle on the broadcast
CPU by queueing a hrtimer dedicated to do broadcast.
V1 posting: https://lkml.org/lkml/2013/7/25/740.
1. Added the infrastructure to wakeup CPUs in deep idle st
slots are available).
So, implement the functionality of PPC_MSG_CALL_FUNC_SINGLE using
PPC_MSG_CALL_FUNC itself and release its IPI message slot, so that it can be
used for something else in the future, if desired.
Signed-off-by: Srivatsa S. Bhat
Signed-off-by: Preeti U. Murthy
Acked-by: Geoff