[RFC PATCH V2 15/19] sched: pull all tasks from source grp and no balance for prefer_sibling

2014-08-11 Thread Preeti U Murthy
group has no tasks at the time; that is what power balance hopes for. Signed-off-by: Alex Shi alex@intel.com [Added CONFIG_SCHED_POWER switch to enable this patch] Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com --- kernel/sched/fair.c | 51

[RFC PATCH V2 14/19] sched: add power/performance balance allow flag

2014-08-11 Thread Preeti U Murthy
@intel.com [Added CONFIG_SCHED_POWER switch to enable this patch] Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com --- kernel/sched/fair.c |8 1 file changed, 8 insertions(+) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index e7a677e..f9b2a21 100644 --- a/kernel/sched

[RFC PATCH V2 16/19] sched: add new members of sd_lb_stats

2014-08-11 Thread Preeti U Murthy
utilizations of group_min Signed-off-by: Alex Shi alex@intel.com Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com --- kernel/sched/fair.c |4 1 file changed, 4 insertions(+) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index fd93eaf..6d40aa3 100644 --- a/kernel

[RFC PATCH V2 17/19] sched: power aware load balance

2014-08-11 Thread Preeti U Murthy
load balance code. Signed-off-by: Alex Shi alex@intel.com [Added CONFIG_SCHED_POWER switch to enable this patch] Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com --- kernel/sched/fair.c | 126 +++ 1 file changed, 125 insertions(+), 1

[RFC PATCH V2 18/19] sched: lazy power balance

2014-08-11 Thread Preeti U Murthy
-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com --- include/linux/sched.h |4 ++- kernel/sched/fair.c | 70 + 2 files changed, 61 insertions(+), 13 deletions(-) diff --git a/include/linux/sched.h b/include/linux/sched.h index 009da6a

[RFC PATCH V2 19/19] sched: don't do power balance on share cpu power domain

2014-08-11 Thread Preeti U Murthy
From: Alex Shi alex@intel.com Packing tasks within such a domain can't save power; it only loses performance. So no power balance is done on them. Signed-off-by: Alex Shi alex@intel.com [Added CONFIG_SCHED_POWER switch to enable this patch] Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com
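(Illustrative note: a minimal sketch of the rule described above, not the actual patch. The flag was known as SD_SHARE_CPUPOWER in schedulers of that era, later renamed SD_SHARE_CPUCAPACITY; the helper name below is hypothetical.)

    /* Hypothetical helper: packing among SMT siblings that share the core's
     * execution resources cannot save power, so skip power balance there. */
    static int power_balance_allowed(struct sched_domain *sd)
    {
            if (sd->flags & SD_SHARE_CPUPOWER)
                    return 0;       /* shared-cpu-power domain: balance for performance only */
            return 1;
    }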

Re: [PATCH v3 01/12] sched: fix imbalance flag reset

2014-07-10 Thread Preeti U Murthy
Hi Peter, Vincent, On 07/10/2014 02:44 PM, Vincent Guittot wrote: > On 9 July 2014 12:43, Peter Zijlstra wrote: >> On Wed, Jul 09, 2014 at 09:24:54AM +0530, Preeti U Murthy wrote: > > [snip] > >> >>> Continuing with the above explanation; when LBF_ALL_P

Re: [PATCH v4 ] sched: fix imbalance flag reset

2014-07-10 Thread Preeti U Murthy
>imbalance; > + if (*group_imbalance) > + *group_imbalance = 0; > + } > + > +out_all_pinned: > + /* > + * We reach balance because all tasks are pinned at this level so > + * we can't migrate them. Let the imbalance fl

Re: [PATCH v4 ] sched: fix imbalance flag reset

2014-07-10 Thread Preeti U Murthy
+ * can try to migrate them. + */ schedstat_inc(sd, lb_balanced[idle]); sd->nr_balance_failed = 0; This patch looks good to me. Reviewed-by: Preeti U Murthy pre...@linux.vnet.ibm.com
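(Reading the two previews of this thread together, the quoted hunk reconstructs roughly as below; a sketch with line breaks restored, not guaranteed to match the final upstream commit exactly.)

    out_balanced:
            /*
             * We reach balance although we may have faced some affinity
             * constraints. Clear the imbalance flag if it was set.
             */
            if (*group_imbalance)
                    *group_imbalance = 0;

    out_all_pinned:
            /*
             * We reach balance because all tasks are pinned at this level so
             * we can't migrate them. Let the imbalance flag set so that
             * parent level can try to migrate them.
             */
            schedstat_inc(sd, lb_balanced[idle]);
            sd->nr_balance_failed = 0;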

Re: [PATCH v3 01/12] sched: fix imbalance flag reset

2014-07-10 Thread Preeti U Murthy
Hi Peter, Vincent, On 07/10/2014 02:44 PM, Vincent Guittot wrote: On 9 July 2014 12:43, Peter Zijlstra pet...@infradead.org wrote: On Wed, Jul 09, 2014 at 09:24:54AM +0530, Preeti U Murthy wrote: [snip] Continuing with the above explanation; when LBF_ALL_PINNED flag is set, and we jump

Re: [PATCH v3 01/12] sched: fix imbalance flag reset

2014-07-09 Thread Preeti U Murthy
On 07/09/2014 04:13 PM, Peter Zijlstra wrote: > On Wed, Jul 09, 2014 at 09:24:54AM +0530, Preeti U Murthy wrote: >> In the example that I mention above, t1 and t2 are on the rq of cpu0; >> while t1 is running on cpu0, t2 is on the rq but does not have cpu1 in >> its cpus al

Re: [PATCH v3 01/12] sched: fix imbalance flag reset

2014-07-09 Thread Preeti U Murthy
On 07/09/2014 04:13 PM, Peter Zijlstra wrote: On Wed, Jul 09, 2014 at 09:24:54AM +0530, Preeti U Murthy wrote: In the example that I mention above, t1 and t2 are on the rq of cpu0; while t1 is running on cpu0, t2 is on the rq but does not have cpu1 in its cpus allowed mask. So during load
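(For context, the affinity test being discussed lives in can_migrate_task(); a heavily trimmed sketch of that check as it looked around that time — only the cpus_allowed part is shown, everything else is omitted.)

    static int can_migrate_task(struct task_struct *p, struct lb_env *env)
    {
            /* A task whose allowed mask excludes the destination CPU cannot be
             * pulled at this level; the balancer only records the pinning via
             * the LBF_SOME_PINNED/LBF_ALL_PINNED bookkeeping. */
            if (!cpumask_test_cpu(env->dst_cpu, tsk_cpus_allowed(p)))
                    return 0;

            return 1;       /* remaining checks (cache hotness etc.) omitted */
    }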

Re: [PATCH v3 01/12] sched: fix imbalance flag reset

2014-07-08 Thread Preeti U Murthy
Hi Vincent, On 07/08/2014 03:42 PM, Vincent Guittot wrote: > On 8 July 2014 05:13, Preeti U Murthy wrote: >> On 06/30/2014 09:35 PM, Vincent Guittot wrote: >>> The imbalance flag can stay set whereas there is no imbalance. >>> >>> Let assume that we have 3

Re: [PATCH v3 01/12] sched: fix imbalance flag reset

2014-07-08 Thread Preeti U Murthy
Hi Vincent, On 07/08/2014 03:42 PM, Vincent Guittot wrote: On 8 July 2014 05:13, Preeti U Murthy pre...@linux.vnet.ibm.com wrote: On 06/30/2014 09:35 PM, Vincent Guittot wrote: The imbalance flag can stay set whereas there is no imbalance. Let assume that we have 3 tasks that run on a dual

Re: [PATCH v3 01/12] sched: fix imbalance flag reset

2014-07-07 Thread Preeti U Murthy
ll is well, there is no imbalance. This is wrong, isn't it? My point is that by clearing the imbalance flag in the out_balanced case, you might be overlooking the fact that the tsk_cpus_allowed mask of the tasks on the src_cpu may not be able to run on the dst_cpu in *this* level of sched_domain, bu

Re: [PATCH v3 01/12] sched: fix imbalance flag reset

2014-07-07 Thread Preeti U Murthy
in *this* level of sched_domain, but can potentially run on a cpu at any higher level of sched_domain. By clearing the flag, we are not encouraging load balance at that level for t2. Am I missing something? Regards Preeti U Murthy -- To unsubscribe from this list: send the line unsubscribe linux-kernel

[PATCH V3] powerpc/powernv: Check for IRQHAPPENED before sleeping

2014-07-01 Thread Preeti U Murthy
This patch fixes this issue by ensuring that cpus check for pending interrupts just before entering any idle state as long as they are not in the path of split core operations. Signed-off-by: Preeti U Murthy --- Changes in V2: https://lkml.org/lkml/2014/7/1/3 Modified the changelog to add t
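(The actual change is assembly in arch/powerpc/kernel/idle_power7.S; the C-level intent, using names from the powerpc irq-soft-disable machinery, is roughly the following — an illustration, not the patch itself.)

    /* Before committing to nap/sleep, check whether an interrupt already
     * fired while interrupts were soft-disabled; if so, don't sleep. */
    if (local_paca->irq_happened & ~PACA_IRQ_HARD_DIS) {
            /* bail out so the pending interrupt gets replayed */
            return;
    }
    /* otherwise it is safe to enter the idle state */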

[PATCH V3] powerpc/powernv: Check for IRQHAPPENED before sleeping

2014-07-01 Thread Preeti U Murthy
fixes this issue by ensuring that cpus check for pending interrupts just before entering any idle state as long as they are not in the path of split core operations. Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com --- Changes in V2: https://lkml.org/lkml/2014/7/1/3 Modified the changelog

[PATCH V2] powerpc/powernv: Check for IRQHAPPENED before sleeping

2014-06-30 Thread Preeti U Murthy
e doorbell IPI complaining that the sleeping cpu is stuck. This patch fixes these issues by ensuring that cpus check for pending interrupts just before entering any idle state as long as they are not in the path of split core operations. Signed-off-by: Preeti U Murthy Acked-by: Michael Neuling -

[PATCH] powerpc/powernv: Check for IRQHAPPENED before sleeping

2014-06-30 Thread Preeti U Murthy
Commit 8d6f7c5a: "powerpc/powernv: Make it possible to skip the IRQHAPPENED check in power7_nap()" added code that prevents even cores which enter sleep on idle, from checking for pending interrupts. Fix this. Signed-off-by: Preeti U Murthy --- arch/powerpc/kernel/idle_power7.S |

[PATCH] powerpc/powernv: Check for IRQHAPPENED before sleeping

2014-06-30 Thread Preeti U Murthy
Commit 8d6f7c5a: powerpc/powernv: Make it possible to skip the IRQHAPPENED check in power7_nap() added code that prevents even cores which enter sleep on idle, from checking for pending interrupts. Fix this. Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com --- arch/powerpc/kernel

[PATCH V2] powerpc/powernv: Check for IRQHAPPENED before sleeping

2014-06-30 Thread Preeti U Murthy
IPI complaining that the sleeping cpu is stuck. This patch fixes these issues by ensuring that cpus check for pending interrupts just before entering any idle state as long as they are not in the path of split core operations. Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com Acked

Re: [PATCH v2 11/11] sched: replace capacity by activity

2014-06-02 Thread Preeti U Murthy
sks on 3 cores. So shouldn't the above check have caught it? Regards Preeti U Murthy > > So I think we should be able to fix this by setting PREFER_SIBLING on > the SMT domain, that way we'll get single tasks running on each SMT > domain before filling them up until capacity. >

Re: [PATCH v2 11/11] sched: replace capacity by activity

2014-06-02 Thread Preeti U Murthy
it? Regards Preeti U Murthy So I think we should be able to fix this by setting PREFER_SIBLING on the SMT domain, that way we'll get single tasks running on each SMT domain before filling them up until capacity. Now, its been a while since I looked at PREFER_SIBLING, and I've not yet looked at what

Re: [PATCH v2] arm64: kernel: initialize broadcast hrtimer based clock event device

2014-05-30 Thread Preeti U Murthy
> power management capabilities. > > The hrtimer based clock event device is unconditionally registered, but > has the lowest possible rating such that any broadcast-capable HW clock > event device present will be chosen in preference as the tick broadcast > device. > > Cc: Pr

Re: [PATCH] arm64: kernel: initialize broadcast hrtimer based clock event device

2014-05-30 Thread Preeti U Murthy
On 05/29/2014 06:09 PM, Mark Rutland wrote: > Hi Preeti, > > On Thu, May 29, 2014 at 12:04:36PM +0100, Preeti U Murthy wrote: >> Hi Lorenzo, >> >> On 05/29/2014 02:53 PM, Lorenzo Pieralisi wrote: >>> On platforms implementing CPU power management, the CPUidle s

Re: [PATCH] arm64: kernel: initialize broadcast hrtimer based clock event device

2014-05-30 Thread Preeti U Murthy
On 05/29/2014 06:09 PM, Mark Rutland wrote: Hi Preeti, On Thu, May 29, 2014 at 12:04:36PM +0100, Preeti U Murthy wrote: Hi Lorenzo, On 05/29/2014 02:53 PM, Lorenzo Pieralisi wrote: On platforms implementing CPU power management, the CPUidle subsystem can allow CPUs to enter idle states

Re: [PATCH v2] arm64: kernel: initialize broadcast hrtimer based clock event device

2014-05-30 Thread Preeti U Murthy
is unconditionally registered, but has the lowest possible rating such that any broadcast-capable HW clock event device present will be chosen in preference as the tick broadcast device. Cc: Preeti U Murthy pre...@linux.vnet.ibm.com Acked-by: Will Deacon will.dea...@arm.com Acked-by: Mark Rutland
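(For reference, wiring this up is typically a single call from the architecture's timer init; a sketch under the assumption that the declaration lives in <linux/clockchips.h>, as it did in kernels of that era — the exact hunk in arch/arm64/kernel/time.c is not shown in the preview.)

    #include <linux/clockchips.h>

    void __init time_init(void)
    {
            /* ... architected timer and clocksource setup ... */

            /* Register the hrtimer-based tick broadcast device. It has the
             * lowest possible rating, so any real broadcast-capable hardware
             * clock event device that registers later is preferred. */
            tick_setup_hrtimer_broadcast();
    }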

Re: [PATCH] arm64: kernel: initialize broadcast hrtimer based clock event device

2014-05-29 Thread Preeti U Murthy
kipping the last paragraph as it is not conveying anything in specific. The fact that a clock device with the highest rating will be chosen is already known and need not be mentioned explicitly IMHO. > > Cc: Preeti U Murthy > Cc: Will Deacon > Acked-by: Mark Rutland > Signed-off-by: Lorenz

Re: [PATCH] arm64: kernel: initialize broadcast hrtimer based clock event device

2014-05-29 Thread Preeti U Murthy
not be mentioned explicitly IMHO. Cc: Preeti U Murthy pre...@linux.vnet.ibm.com Cc: Will Deacon will.dea...@arm.com Acked-by: Mark Rutland mark.rutl...@arm.com Signed-off-by: Lorenzo Pieralisi lorenzo.pieral...@arm.com --- arch/arm64/kernel/time.c | 3 +++ 1 file changed, 3 insertions

[PATCH 2/6] powerpc, powernv, CPU hotplug: Put offline CPUs in Fast-Sleep instead of Nap

2014-05-27 Thread Preeti U Murthy
From: Srivatsa S. Bhat The offline cpus are put to fast sleep if the idle state is discovered in the device tree. This is to gain maximum powersavings in the offline state. Signed-off-by: Srivatsa S. Bhat [ Changelog added by ] Signed-off-by: Preeti U Murthy --- arch/powerpc/include/asm

[PATCH 5/6] KVM: PPC: Book3S HV: Put KVM standby hwthreads to fast-sleep instead of nap

2014-05-27 Thread Preeti U Murthy
-by: Preeti U Murthy --- arch/powerpc/kvm/book3s_hv_rmhandlers.S | 73 --- 1 file changed, 65 insertions(+), 8 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S index 43aa806..69244cc 100644 --- a/arch

[PATCH 4/6] KVM: PPC: Book3S HV: Consolidate the idle-state enter sequence in KVM

2014-05-27 Thread Preeti U Murthy
as well when an entire cpu core is idle. As a precursor, consolidate the code common across all idle states. Signed-off-by: Srivatsa S. Bhat [ Changelog added by ] Signed-off-by: Preeti U Murthy --- arch/powerpc/kvm/book3s_hv_rmhandlers.S | 30 -- 1 file changed, 16

[PATCH 6/6] ppc, book3s: Go back to same idle state after handling machine check interrupt

2014-05-27 Thread Preeti U Murthy
by default. Signed-off-by: Srivatsa S. Bhat [ Changelog added by ] Signed-off-by: Preeti U Murthy --- arch/powerpc/kernel/exceptions-64s.S | 21 +++-- arch/powerpc/kernel/idle_power7.S|2 +- 2 files changed, 16 insertions(+), 7 deletions(-) diff --git a/arch/powerpc/kernel

[PATCH 3/6] KVM: PPC: Book3S HV: Enable CPUs to run guest after waking up from fast-sleep

2014-05-27 Thread Preeti U Murthy
wakeup path as well. Signed-off-by: Srivatsa S. Bhat [ Changelog added by ] Signed-off-by: Preeti U Murthy --- arch/powerpc/kernel/exceptions-64s.S | 30 +++--- 1 file changed, 15 insertions(+), 15 deletions(-) diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch

[PATCH 1/6] powernv, cpuidle: Move the flags used for idle state discovery to powernv core

2014-05-27 Thread Preeti U Murthy
by ] Signed-off-by: Preeti U Murthy --- arch/powerpc/include/asm/processor.h |4 drivers/cpuidle/cpuidle-powernv.c|7 +++ 2 files changed, 7 insertions(+), 4 deletions(-) diff --git a/arch/powerpc/include/asm/processor.h b/arch/powerpc/include/asm/processor.h index d660dc3

[PATCH 0/6] ppc, kvm, cpuidle: Allow offline and kvm standby threads to enter fastsleep

2014-05-27 Thread Preeti U Murthy
Fast sleep is a deep idle state on Power8. The support for the state was added in commit 0d94873011. Today the idle threads in the host can potentially be put to fast sleep. But when we launch guests using kvm, the secondary threads are required to be offline and the offline threads are put to

[PATCH 1/6] powernv, cpuidle: Move the flags used for idle state discovery to powernv core

2014-05-27 Thread Preeti U Murthy
S. Bhat srivatsa.b...@linux.vnet.ibm.com [ Changelog added by pre...@linux.vnet.ibm.com ] Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com --- arch/powerpc/include/asm/processor.h |4 drivers/cpuidle/cpuidle-powernv.c|7 +++ 2 files changed, 7 insertions(+), 4

[PATCH 0/6] ppc, kvm, cpuidle: Allow offline and kvm standby threads to enter fastsleep

2014-05-27 Thread Preeti U Murthy
Fast sleep is a deep idle state on Power8. The support for the state was added in commit 0d94873011. Today the idle threads in the host can potentially be put to fast sleep. But when we launch guests using kvm, the secondary threads are required to be offline and the offline threads are put to

[PATCH 3/6] KVM: PPC: Book3S HV: Enable CPUs to run guest after waking up from fast-sleep

2014-05-27 Thread Preeti U Murthy
, add this check in the fastsleep wakeup path as well. Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com [ Changelog added by pre...@linux.vnet.ibm.com ] Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com --- arch/powerpc/kernel/exceptions-64s.S | 30

[PATCH 5/6] KVM: PPC: Book3S HV: Put KVM standby hwthreads to fast-sleep instead of nap

2014-05-27 Thread Preeti U Murthy
srivatsa.b...@linux.vnet.ibm.com [ Changelog added by pre...@linux.vnet.ibm.com ] Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com --- arch/powerpc/kvm/book3s_hv_rmhandlers.S | 73 --- 1 file changed, 65 insertions(+), 8 deletions(-) diff --git a/arch/powerpc

[PATCH 4/6] KVM: PPC: Book3S HV: Consolidate the idle-state enter sequence in KVM

2014-05-27 Thread Preeti U Murthy
power savings in a KVM scenario as well when an entire cpu core is idle. As a precursor, consolidate the code common across all idle states. Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com [ Changelog added by pre...@linux.vnet.ibm.com ] Signed-off-by: Preeti U Murthy pre

[PATCH 6/6] ppc, book3s: Go back to same idle state after handling machine check interrupt

2014-05-27 Thread Preeti U Murthy
. Today they go back to nap by default. Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com [ Changelog added by pre...@linux.vnet.ibm.com ] Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com --- arch/powerpc/kernel/exceptions-64s.S | 21 +++-- arch/powerpc/kernel

[PATCH 2/6] powerpc, powernv, CPU hotplug: Put offline CPUs in Fast-Sleep instead of Nap

2014-05-27 Thread Preeti U Murthy
...@linux.vnet.ibm.com ] Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com --- arch/powerpc/include/asm/processor.h |8 + arch/powerpc/kernel/idle.c | 52 ++ arch/powerpc/platforms/powernv/smp.c | 12 +++- 3 files changed, 71

Re: [PATCH v2 00/11] sched: consolidation of cpu_power

2014-05-26 Thread Preeti U Murthy
on a 6 core, SMT 8 machine. Let me dig this further. Let me dig further. Regards Preeti U Murthy > > ebizzy -t N -S 20 > Quad cores > N tip +patchset > 1 100.00% (+/- 0.30%) 97.00% (+/- 0.42%) > 2 100.00% (+/- 0.80%) 100.48% (+/- 0.88%) > 4 100.00% (+/-

Re: [PATCH v2 00/11] sched: consolidation of cpu_power

2014-05-26 Thread Preeti U Murthy
-44.5747 24 -51.9792 28 -34.1863 32 -38.4029 38 -22.2490 42 -7.4843 47 -0.69676 Let me profile it and check where the cause of this degradation is. Regards Preeti U Murthy

Re: [PATCH v2 05/11] ARM: topology: use new cpu_power interface

2014-05-26 Thread Preeti U Murthy
On 05/26/2014 01:55 PM, Vincent Guittot wrote: > On 25 May 2014 15:22, Preeti U Murthy wrote: >> Hi Vincent, >> >> Why do we have two interfaces arch_scale_freq_power() and >> arch_scale_cpu_power()? Does it make sense to consolidate them now ? > Hi Preeti, > &

Re: [PATCH v2 01/11] sched: fix imbalance flag reset

2014-05-26 Thread Preeti U Murthy
On 05/26/2014 01:19 PM, Vincent Guittot wrote: > On 25 May 2014 12:33, Preeti U Murthy wrote: >> Hi Vincent, >> >> On 05/23/2014 09:22 PM, Vincent Guittot wrote: >>> The imbalance flag can stay set whereas there is no imbalance. >>> >>> Let assu

Re: [PATCH v2 01/11] sched: fix imbalance flag reset

2014-05-26 Thread Preeti U Murthy
On 05/26/2014 01:19 PM, Vincent Guittot wrote: On 25 May 2014 12:33, Preeti U Murthy pre...@linux.vnet.ibm.com wrote: Hi Vincent, On 05/23/2014 09:22 PM, Vincent Guittot wrote: The imbalance flag can stay set whereas there is no imbalance. Let assume that we have 3 tasks that run on a dual

Re: [PATCH v2 05/11] ARM: topology: use new cpu_power interface

2014-05-26 Thread Preeti U Murthy
On 05/26/2014 01:55 PM, Vincent Guittot wrote: On 25 May 2014 15:22, Preeti U Murthy pre...@linux.vnet.ibm.com wrote: Hi Vincent, Why do we have two interfaces arch_scale_freq_power() and arch_scale_cpu_power()? Does it make sense to consolidate them now ? Hi Preeti, They don't have

Re: [PATCH v2 00/11] sched: consolidation of cpu_power

2014-05-26 Thread Preeti U Murthy
12 -29.5070 16 -38.4842 20 -44.5747 24 -51.9792 28 -34.1863 32 -38.4029 38 -22.2490 42 -7.4843 47 -0.69676 Let me profile it and check where the cause of this degradation is. Regards Preeti U Murthy

Re: [PATCH v2 00/11] sched: consolidation of cpu_power

2014-05-26 Thread Preeti U Murthy
. Let me dig this further. Let me dig further. Regards Preeti U Murthy ebizzy -t N -S 20 Quad cores N tip +patchset 1 100.00% (+/- 0.30%) 97.00% (+/- 0.42%) 2 100.00% (+/- 0.80%) 100.48% (+/- 0.88%) 4 100.00% (+/- 1.18%) 99.32% (+/- 1.05%) 6 100.00% (+/- 8.54

Re: [PATCH v2 05/11] ARM: topology: use new cpu_power interface

2014-05-25 Thread Preeti U Murthy
Hi Vincent, Why do we have two interfaces arch_scale_freq_power() and arch_scale_cpu_power()? Does it make sense to consolidate them now ? Regards Preeti U Murthy On 05/23/2014 09:22 PM, Vincent Guittot wrote: > Use the new arch_scale_cpu_power in order to reflect the original capac

Re: [PATCH v2 01/11] sched: fix imbalance flag reset

2014-05-25 Thread Preeti U Murthy
g to useless active load balance > between the idle CPU and the busy CPU. Why do we do active balancing today when there is at-most 1 task on the busiest cpu? Shouldn't we be skipping load balancing altogether? If we do active balancing when the number of tasks = 1, it will lead t

Re: [PATCH v2 01/11] sched: fix imbalance flag reset

2014-05-25 Thread Preeti U Murthy
and the busy CPU. Why do we do active balancing today when there is at-most 1 task on the busiest cpu? Shouldn't we be skipping load balancing altogether? If we do active balancing when the number of tasks = 1, it will lead to a ping pong right? Regards Preeti U Murthy

Re: [PATCH v2 05/11] ARM: topology: use new cpu_power interface

2014-05-25 Thread Preeti U Murthy
Hi Vincent, Why do we have two interfaces arch_scale_freq_power() and arch_scale_cpu_power()? Does it make sense to consolidate them now ? Regards Preeti U Murthy On 05/23/2014 09:22 PM, Vincent Guittot wrote: Use the new arch_scale_cpu_power in order to reflect the original capacity

Re: [PATCH] sched: fix exec_start/task_hot on migrated tasks

2014-05-19 Thread Preeti U Murthy
hot on that rq, even though it hasn't yet ran there, so you'd have to do > something like: rq_clock_task(dst_rq) - sysctl_sched_migration_cost. > > But seeing as how that is far more work, and all this is heuristics > anyhow and an extra fail term of 1/585 years is far below the current > fail rate, all
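(The heuristic being discussed is task_hot(); a simplified sketch of the cache-hot check using the names in the preview — signature and call site approximate, not the exact upstream code.)

    static int task_hot_sketch(struct task_struct *p, struct rq *src_rq)
    {
            /* A task that ran on the source rq very recently is treated as
             * cache hot and left alone unless balancing is desperate. */
            s64 delta = rq_clock_task(src_rq) - p->se.exec_start;

            return delta < (s64)sysctl_sched_migration_cost;
    }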

Re: [PATCH] sched: fix exec_start/task_hot on migrated tasks

2014-05-19 Thread Preeti U Murthy
that is far more work, and all this is heuristics anyhow and an extra fail term of 1/585 years is far below the current fail rate, all is well. Ok now I understand this. Thanks! Reviewed-by: Preeti U Murthy pre...@linux.vnet.ibm.com

Re: [PATCH 1/2] hrtimer: reprogram event for expires=KTIME_MAX in hrtimer_force_reprogram()

2014-05-12 Thread Preeti U Murthy
On 05/12/2014 11:23 AM, Viresh Kumar wrote: > On 10 May 2014 21:47, Preeti U Murthy wrote: >> On 05/09/2014 04:27 PM, Viresh Kumar wrote: >>> On 9 May 2014 16:04, Preeti U Murthy wrote: > >>> Ideally, the device should have stopped events as we programmed it in >

Re: [PATCH 1/2] hrtimer: reprogram event for expires=KTIME_MAX in hrtimer_force_reprogram()

2014-05-12 Thread Preeti U Murthy
On 05/12/2014 11:23 AM, Viresh Kumar wrote: On 10 May 2014 21:47, Preeti U Murthy pre...@linux.vnet.ibm.com wrote: On 05/09/2014 04:27 PM, Viresh Kumar wrote: On 9 May 2014 16:04, Preeti U Murthy pre...@linux.vnet.ibm.com wrote: Ideally, the device should have stopped events as we programmed

Re: [PATCH 1/2] hrtimer: reprogram event for expires=KTIME_MAX in hrtimer_force_reprogram()

2014-05-10 Thread Preeti U Murthy
On 05/09/2014 04:27 PM, Viresh Kumar wrote: > On 9 May 2014 16:04, Preeti U Murthy wrote: >> On 05/09/2014 02:10 PM, Viresh Kumar wrote: > >> I looked through the code in arm_arch_timer.c and I think the more >> fundamental problem lies in the timer handler there. Ideall

Re: [PATCH 1/2] hrtimer: reprogram event for expires=KTIME_MAX in hrtimer_force_reprogram()

2014-05-10 Thread Preeti U Murthy
On 05/09/2014 04:27 PM, Viresh Kumar wrote: On 9 May 2014 16:04, Preeti U Murthy pre...@linux.vnet.ibm.com wrote: On 05/09/2014 02:10 PM, Viresh Kumar wrote: I looked through the code in arm_arch_timer.c and I think the more fundamental problem lies in the timer handler there. Ideally even

Re: [PATCH 1/2] hrtimer: reprogram event for expires=KTIME_MAX in hrtimer_force_reprogram()

2014-05-09 Thread Preeti U Murthy
ling the timer interrupt event handler. Regards Preeti U Murthy > > Signed-off-by: Viresh Kumar > --- > kernel/hrtimer.c | 3 +-- > 1 file changed, 1 insertion(+), 2 deletions(-) > > diff --git a/kernel/hrtimer.c b/kernel/hrtimer.c > index 6b715c0..b21085c 100

Re: [PATCH 1/2] hrtimer: reprogram event for expires=KTIME_MAX in hrtimer_force_reprogram()

2014-05-09 Thread Preeti U Murthy
already do that. Hence I don't think we should take a drastic measure as to shut down the clock device in case of no pending timers. My suggestion is as pointed above to set the tick device to a KTIME_MAX equivalent before calling the timer interrupt event handler. Regards Preeti U Murthy

Re: [RESEND PATCH V5 0/8] remove cpu_load idx

2014-05-08 Thread Preeti U Murthy
crap. I agree it's not meant for balancing. My point was that since it's inaccurate why don't we correct it. But if your argument is that we can live with /proc/loadavg showing a reasonable view of system load then it shouldn't be a problem. Regards Preeti U Murthy

Re: [RESEND PATCH V5 0/8] remove cpu_load idx

2014-05-08 Thread Preeti U Murthy
then it shouldn't be a problem. Regards Preeti U Murthy

Re: [PATCH RFC/TEST] sched: make sync affine wakeups work

2014-05-05 Thread Preeti U Murthy
On 05/05/2014 10:20 AM, Preeti U Murthy wrote: > On 05/04/2014 06:11 PM, Rik van Riel wrote: >> On 05/04/2014 07:44 AM, Preeti Murthy wrote: >>> Hi Rik, Mike >>> >>> On Fri, May 2, 2014 at 12:00 PM, Rik van Riel wrote: >>>> On 05/02/2014 02:13 AM

Re: [PATCH RFC/TEST] sched: make sync affine wakeups work

2014-05-05 Thread Preeti U Murthy
On 05/05/2014 10:20 AM, Preeti U Murthy wrote: On 05/04/2014 06:11 PM, Rik van Riel wrote: On 05/04/2014 07:44 AM, Preeti Murthy wrote: Hi Rik, Mike On Fri, May 2, 2014 at 12:00 PM, Rik van Riel r...@redhat.com wrote: On 05/02/2014 02:13 AM, Mike Galbraith wrote: On Fri, 2014-05-02 at 00:42

[PATCH] powerpc: Fix comment around arch specific definition of RECLAIM_DISTANCE

2014-05-04 Thread Preeti U Murthy
Commit 32e45ff43eaf5c17f changed the default value of RECLAIM_DISTANCE to 30. However the comment around the arch specific definition of RECLAIM_DISTANCE is not updated to reflect the same. Correct the value mentioned in the comment. Signed-off-by: Preeti U Murthy Cc: Anton Blanchard Cc: Benjamin

Re: [PATCH RFC/TEST] sched: make sync affine wakeups work

2014-05-04 Thread Preeti U Murthy
e. We ourselves are saying in sd_local_flags() that this specific domain is fit for wake affine balance. So naturally the logic in wake_affine and select_idle_sibling() will follow. My point is the peripheral code is seeing the negative affect of these two functions because they pushed themselve

Re: [PATCH RFC/TEST] sched: make sync affine wakeups work

2014-05-04 Thread Preeti U Murthy
*these functions* are affecting NUMA placements. > > Depends on how far away node yonder is I suppose. > > static inline int sd_local_flags(int level) > { > if (sched_domains_numa_distance[level] > RECLAIM_DISTANCE) > return 0; > > re

Re: [PATCH RFC/TEST] sched: make sync affine wakeups work

2014-05-04 Thread Preeti U Murthy
; } Hmm thanks Mike, I totally missed this! Regards Preeti U Murthy

Re: [PATCH RFC/TEST] sched: make sync affine wakeups work

2014-05-04 Thread Preeti U Murthy
not tell us the basis on which this value was set to a default of 30. Maybe this needs rethinking? Regards Preeti U Murthy

[PATCH] powerpc: Fix comment around arch specific definition of RECLAIM_DISTANCE

2014-05-04 Thread Preeti U Murthy
Commit 32e45ff43eaf5c17f changed the default value of RECLAIM_DISTANCE to 30. However the comment around the arch specific definition of RECLAIM_DISTANCE is not updated to reflect the same. Correct the value mentioned in the comment. Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com Cc: Anton

Re: [PATCH 1/3] sched, balancing: Update rq->max_idle_balance_cost whenever newidle balance is attempted

2014-04-28 Thread Preeti U Murthy
an that it's > actually due for a periodic balance and we wouldn't need to modify it? > In rebalance_domains(), we do load_balance if time_after_eq(jiffies, > sd->last_balance + interval). Right. So I missed the point that we don't really have a problem with the rq->next_ba

Re: [PATCH 1/3] sched, balancing: Update rq->max_idle_balance_cost whenever newidle balance is attempted

2014-04-28 Thread Preeti U Murthy
. > Also, note that the value we set rq->next_balance to might itself > already be expired. There is no guarantee that last_balance + interval > is in the future. > Hmm this makes sense. Thanks! Regards Preeti U Murthy

Re: [PATCH 1/3] sched, balancing: Update rq->max_idle_balance_cost whenever newidle balance is attempted

2014-04-28 Thread Preeti U Murthy
didn't do that is that nothing else does that either. Also, note that the value we set rq->next_balance to might itself already be expired. There is no guarantee that last_balance + interval is in the future. Hmm this makes sense. Thanks! Regards Preeti U Murthy

Re: [PATCH 1/3] sched, balancing: Update rq->max_idle_balance_cost whenever newidle balance is attempted

2014-04-28 Thread Preeti U Murthy
with the rq->next_balance being expired. It will anyway ensure that in the next call to rebalance_domains() load balancing will be done and that is all we want. Thanks for pointing it out. Regards Preeti U Murthy Besides this: Reviewed-by: Preeti U Murthy pre...@linux.vnet.ibm.com Thanks

Re: [PATCH 2/3] sched: Initialize newidle balance stats in sd_numa_init()

2014-04-25 Thread Preeti U Murthy
.last_balance = jiffies, > .balance_interval = sd_weight, > + .max_newidle_lb_cost= 0, > + .next_decay_max_lb_cost = jiffies, > }; > SD_INIT_NAME(sd, NUMA); > sd->private = &tl->data; > Reviewed-by: Pre
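(With line breaks restored, the quoted hunk amounts to initializing the two newidle-balance statistics in sd_numa_init()'s domain initializer; only the fields visible in the preview are shown.)

    /* Inside sd_numa_init(): */
    *sd = (struct sched_domain){
            .last_balance           = jiffies,
            .balance_interval       = sd_weight,
            .max_newidle_lb_cost    = 0,
            .next_decay_max_lb_cost = jiffies,
            /* other fields of the initializer omitted */
    };
    SD_INIT_NAME(sd, NUMA);
    sd->private = &tl->data;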

Re: [PATCH 2/3] sched: Initialize newidle balance stats in sd_numa_init()

2014-04-25 Thread Preeti U Murthy
, .balance_interval = sd_weight, + .max_newidle_lb_cost= 0, + .next_decay_max_lb_cost = jiffies, }; SD_INIT_NAME(sd, NUMA); sd->private = &tl->data; Reviewed-by: Preeti U Murthy pre...@linux.vnet.ibm.com

Re: [PATCH 1/3] sched, balancing: Update rq->max_idle_balance_cost whenever newidle balance is attempted

2014-04-24 Thread Preeti U Murthy
f the CPU is in the process > of going idle (!pulled_task in idle_balance()), we can reset the > rq->next_balance based on the interval = 1 ms, as opposed to > having it remain up to 64 ms later (in idle_balance(), interval > doesn't get multiplied by sd->busy_factor). I agree with this

Re: [PATCH 1/3] sched, balancing: Update rq->max_idle_balance_cost whenever newidle balance is attempted

2014-04-24 Thread Preeti U Murthy
tting rq->next_balance? And if we should, then the dependence on pulled_tasks is not justified is it? All this assuming that rq->next_balance should always reflect the minimum value of sd->next_balance among the sched domains of which the rq is a part. Regards Preeti U Murthy

Re: [PATCH 1/3] sched, balancing: Update rq->max_idle_balance_cost whenever newidle balance is attempted

2014-04-24 Thread Preeti U Murthy
ies, this_rq->next_balance)) { /* * We are going idle. next_balance may be set based on * a busy processor. So reset next_balance. */ this_rq->next_balance = next_balance; } Also the comment in the above
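(With line breaks restored, the snippet under discussion is the tail of idle_balance(); only the visible part is reconstructed here — per the follow-up below, the real condition also involves pulled_task, which the preview truncates.)

    if (time_after(jiffies, this_rq->next_balance)) {
            /*
             * We are going idle. next_balance may be set based on
             * a busy processor. So reset next_balance.
             */
            this_rq->next_balance = next_balance;
    }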

Re: [PATCH 1/3] sched, balancing: Update rq->max_idle_balance_cost whenever newidle balance is attempted

2014-04-24 Thread Preeti U Murthy
on * a busy processor. So reset next_balance. */ this_rq->next_balance = next_balance; } Also the comment in the above snippet does not look right to me. It says we are going idle but the condition checks for pulled_task. Regards Preeti U Murthy

Re: [PATCH 1/3] sched, balancing: Update rq->max_idle_balance_cost whenever newidle balance is attempted

2014-04-24 Thread Preeti U Murthy
we be resetting rq->next_balance? And if we should, then the dependence on pulled_tasks is not justified is it? All this assuming that rq->next_balance should always reflect the minimum value of sd->next_balance among the sched domains of which the rq is a part. Regards Preeti U Murthy

Re: [PATCH 1/3] sched, balancing: Update rq->max_idle_balance_cost whenever newidle balance is attempted

2014-04-24 Thread Preeti U Murthy
concerned with an additional point that I have mentioned in my reply to Peter's mail on this thread. Should we verify if rq->next_balance update is independent of pulled_tasks? sd->balance_interval is changed during load_balance() and rq->next_balance should perhaps consider that? Regards Preeti U Murthy

Re: [PATCH] PM / suspend: Make cpuidle work in the "freeze" state

2014-04-21 Thread Preeti U Murthy
nclude > #include > #include > @@ -53,7 +54,9 @@ static void freeze_begin(void) > > static void freeze_enter(void) > { > + cpuidle_resume(); > wait_event(suspend_freeze_wait_head, suspend_freeze_wake); > + cpuidle_pause(); > } > > void freez
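(Restoring line breaks, the quoted hunk makes freeze_enter() look roughly like this; a reconstruction from the preview, not verified against the final commit.)

    static void freeze_enter(void)
    {
            cpuidle_resume();       /* let cpuidle drive idle states while frozen */
            wait_event(suspend_freeze_wait_head, suspend_freeze_wake);
            cpuidle_pause();        /* back to default idle before leaving freeze */
    }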

Re: [PATCH] PM / suspend: Make cpuidle work in the freeze state

2014-04-21 Thread Preeti U Murthy
) Reviewed-by: Preeti U Murthy pre...@linux.vnet.ibm.com

[PATCH 0/3] ppc:Set runlatch bits correctly for offline threads and vcpus

2014-04-11 Thread Preeti U Murthy
is aimed at ensuring that the runlatch bits are consistent with the utilization of a CPU under all circumstances. --- Preeti U Murthy (3): ppc/powernv: Set the runlatch bits correctly for offline cpus ppc/kvm: Set the runlatch bit of a CPU just before starting guest ppc/kvm: Clear

[PATCH 1/3] ppc/powernv: Set the runlatch bits correctly for offline cpus

2014-04-11 Thread Preeti U Murthy
to be cleared to indicate an unused CPU. Hence this patch has the runlatch bit cleared for an offline CPU just before entering an idle state and sets it immediately after it exits the idle state. Signed-off-by: Preeti U Murthy Acked-by: Paul Mackerras Reviewed-by: Srivatsa S. Bhat --- arch
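(As a rough illustration of the description above — not the actual hunk — the offline idle path brackets the low-power entry with the runlatch helpers; the idle-entry call is a hypothetical placeholder.)

    ppc64_runlatch_off();           /* mark the hardware thread as unused      */
    enter_offline_idle_state();     /* placeholder for the nap/fastsleep entry */
    ppc64_runlatch_on();            /* thread is busy again after it wakes up  */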

[PATCH 3/3] ppc/kvm: Clear the runlatch bit of a vcpu before napping

2014-04-11 Thread Preeti U Murthy
When the guest cedes the vcpu or the vcpu has no guest to run it naps. Clear the runlatch bit of the vcpu before napping to indicate an idle cpu. Signed-off-by: Preeti U Murthy Acked-by: Paul Mackerras Reviewed-by: Srivatsa S. Bhat --- arch/powerpc/kvm/book3s_hv_rmhandlers.S | 12

[PATCH 2/3] ppc/kvm: Set the runlatch bit of a CPU just before starting guest

2014-04-11 Thread Preeti U Murthy
bits need to be set to indicate that they are busy. The primary thread has its runlatch bit set though, but there is no harm in setting this bit once again. Hence set the runlatch bit for all threads before they start guest. Signed-off-by: Preeti U Murthy Acked-by: Paul Mackerras Reviewed-by:

[PATCH 0/3] ppc:Set runlatch bits correctly for offline threads and vcpus

2014-04-11 Thread Preeti U Murthy
is aimed at ensuring that the runlatch bits are consistent with the utilization of a CPU under all circumstances. --- Preeti U Murthy (3): ppc/powernv: Set the runlatch bits correctly for offline cpus ppc/kvm: Set the runlatch bit of a CPU just before starting guest ppc/kvm: Clear

[PATCH 0/3] ppc:Set runlatch bits correctly for offline threads and vcpus

2014-04-11 Thread Preeti U Murthy
is aimed at ensuring that the runlatch bits are consistent with the utilization of a CPU under all circumstances. --- Preeti U Murthy (3): ppc/powernv: Set the runlatch bits correctly for offline cpus ppc/kvm: Set the runlatch bit of a CPU just before starting guest ppc/kvm: Clear

[PATCH 3/3] ppc/kvm: Clear the runlatch bit of a vcpu before napping

2014-04-11 Thread Preeti U Murthy
When the guest cedes the vcpu or the vcpu has no guest to run it naps. Clear the runlatch bit of the vcpu before napping to indicate an idle cpu. Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com Acked-by: Paul Mackerras pau...@samba.org Reviewed-by: Srivatsa S. Bhat srivatsa.b

[PATCH 2/3] ppc/kvm: Set the runlatch bit of a CPU just before starting guest

2014-04-11 Thread Preeti U Murthy
to be set to indicate that they are busy. The primary thread has its runlatch bit set though, but there is no harm in setting this bit once again. Hence set the runlatch bit for all threads before they start guest. Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com Acked-by: Paul Mackerras pau

[PATCH 1/3] ppc/powernv: Set the runlatch bits correctly for offline cpus

2014-04-11 Thread Preeti U Murthy
to be cleared to indicate an unused CPU. Hence this patch has the runlatch bit cleared for an offline CPU just before entering an idle state and sets it immediately after it exits the idle state. Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com Acked-by: Paul Mackerras pau...@samba.org

[PATCH 0/3] ppc:Set runlatch bits correctly for offline threads and vcpus

2014-04-11 Thread Preeti U Murthy
is aimed at ensuring that the runlatch bits are consistent with the utilization of a CPU under all circumstances. --- Preeti U Murthy (3): ppc/powernv: Set the runlatch bits correctly for offline cpus ppc/kvm: Set the runlatch bit of a CPU just before starting guest ppc/kvm: Clear

Re: [PATCH] tick, broadcast: Prevent false alarm when force mask contains offline cpus

2014-04-09 Thread Preeti U Murthy
Hi Thomas, Any comments on this patch? Regards Preeti U Murthy On 04/01/2014 11:02 AM, Preeti U Murthy wrote: > On 03/28/2014 02:17 PM, Srivatsa S. Bhat wrote: >> On 03/27/2014 03:44 PM, Preeti U Murthy wrote: >>> On 03/27/2014 11:58 AM, Srivatsa S. Bhat wrote: >>>>

Re: [PATCH] tick, broadcast: Prevent false alarm when force mask contains offline cpus

2014-04-09 Thread Preeti U Murthy
Hi Thomas, Any comments on this patch? Regards Preeti U Murthy On 04/01/2014 11:02 AM, Preeti U Murthy wrote: On 03/28/2014 02:17 PM, Srivatsa S. Bhat wrote: On 03/27/2014 03:44 PM, Preeti U Murthy wrote: On 03/27/2014 11:58 AM, Srivatsa S. Bhat wrote: Actually, my suggestion was to remove
