Hi Raghu,
On 10/08/2014 08:24 PM, Raghavendra KT wrote:
> On Wed, Oct 8, 2014 at 12:37 PM, Preeti U Murthy
> wrote:
>> There are two masks associated with cpusets. The cpus/mems_allowed
>> and effective_cpus/mems. On the legacy hierarchy both these masks
>> are consistent
Hi Peter,
On 10/08/2014 01:37 PM, Peter Zijlstra wrote:
> On Wed, Oct 08, 2014 at 12:37:40PM +0530, Preeti U Murthy wrote:
>> There are two masks associated with cpusets. The cpus/mems_allowed
>> and effective_cpus/mems. On the legacy hierarchy both these masks
>> are consistent with each other
a comment
which assumes that cpuset masks are changed only during a hot-unplug operation.
With this patch it is ensured that cpuset masks are consistent with online cpus
in both default and legacy hierarchy.
Signed-off-by: Preeti U Murthy
---
kernel/cpuset.c | 38
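The invariant the patch enforces can be sketched in plain C (a userspace model: `uint64_t` stands in for the kernel's cpumask type, and all struct and function names here are illustrative, not the kernel's cpuset API):

```c
#include <stdint.h>

/* Userspace sketch, not kernel code: a cpuset carries the
 * user-configured cpus_allowed mask, and the effective mask the
 * scheduler actually uses must stay consistent with the CPUs that
 * are currently online. */
typedef uint64_t mask_t;

struct cpuset_model {
    mask_t cpus_allowed;    /* what the user configured */
    mask_t effective_cpus;  /* what is usable right now */
};

/* Recompute the effective mask after a hotplug event; the patch
 * ensures this holds on both the default and legacy hierarchies. */
static mask_t update_effective(struct cpuset_model *cs, mask_t cpu_online)
{
    cs->effective_cpus = cs->cpus_allowed & cpu_online;
    return cs->effective_cpus;
}
```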
if (sd->flags & SD_SHARE_CPUCAPACITY) {
> + sd->flags |= SD_PREFER_SIBLING;
> sd->imbalance_pct = 110;
> sd->smt_gain = 1178; /* ~15% */
>
Reviewed-by: Preeti U. Murthy
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
(This is for info, I don't know if it is right to make patches
based on it. But it is really good if you were away for some time
and you're interested in recent news without reading the lkml archive.
I use it :)
Ok I checked the patch against the tip tree and the patch looks good to me.
Reviewed-by: Preeti U Murthy
Thanks!
Regards
Preeti U Murthy
>
> Regards,
> Kirill
>
>> On Fri, Sep 12, 2014 at 4:33 PM, Kirill Tkhai wrote:
>>>
>>> If a task is queued but not
On 09/15/2014 12:29 PM, Michael Ellerman wrote:
> On Fri, 2014-09-12 at 16:31 +0530, Preeti U Murthy wrote:
>> Today the procfs interface /proc/sys/kernel/powersave-nap is used to control
>> entry into deep idle states beyond snooze. Check for the value of this
>> parameter before entering fastsleep. We
Hi Peter, Vincent,
On 09/03/2014 10:28 PM, Vincent Guittot wrote:
> On 3 September 2014 14:21, Preeti U Murthy wrote:
>> Hi,
>
> Hi Preeti,
>
>>
>> There are places in kernel/sched/fair.c in the load balancing part where
>> rq->nr_running is used as against cfs_rq->nr_running
Today the procfs interface /proc/sys/kernel/powersave-nap is used to control
entry into deep idle states beyond snooze. Check for the value of this
parameter before entering fastsleep. We already do this check for nap in
power7_idle().
Signed-off-by: Preeti U Murthy
---
drivers/cpuidle/cpuidle
On 09/10/2014 07:20 PM, Peter Zijlstra wrote:
> On Sat, Aug 30, 2014 at 10:37:40PM +0530, Preeti U Murthy wrote:
>>> - if ((sd->flags & SD_SHARE_CPUCAPACITY) && weight > 1) {
>>> - if (sched_feat(ARCH_CAPACITY))
Aren't you missing this check above? I understand that it is not
crucial
On 09/05/2014 05:57 PM, Vincent Guittot wrote:
> On 5 September 2014 14:19, Preeti U Murthy wrote:
>> Hi Vincent,
>>
>> On 09/03/2014 10:28 PM, Vincent Guittot wrote:
>>> On 3 September 2014 14:21, Preeti U Murthy
>>> wrote:
>>>> Hi,
>>
Hi Vincent,
On 09/03/2014 10:28 PM, Vincent Guittot wrote:
> On 3 September 2014 14:21, Preeti U Murthy wrote:
>> Hi,
>
> Hi Preeti,
>
>>
>> There are places in kernel/sched/fair.c in the load balancing part where
>> rq->nr_running is used as against cfs_rq->nr_running. At least I
ect right? There are no real time
tasks/interrupts that get generated.
Besides, what is the column that says patchset+irq? What is the irq
accounting patchset that you refer to in your cover letter?
Regards
Preeti U Murthy
urce_load(i, load_idx);
>
> sgs->group_load += load;
> - sgs->sum_nr_running += rq->nr_running;
> + sgs->sum_nr_running += rq->cfs.h_nr_running;
Yes this was one of the concerns I had around the usage of
rq->nr_running. Looks good to me.
> if (rq->nr_running > 1)
> *overload = true;
Reviewed-by: Preeti U Murthy
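The distinction under review can be modelled in a small userspace sketch (illustrative stand-ins for the kernel's structs, not kernel code): `rq->nr_running` counts every runnable task on a CPU, while `rq->cfs.h_nr_running` counts only the fair-class tasks, which are the only ones the CFS load balancer can migrate.

```c
/* Userspace model, not kernel code: field names mirror struct rq,
 * but the structs are illustrative stand-ins. */
struct cfs_rq { unsigned int h_nr_running; };
struct rq {
    unsigned int nr_running;   /* CFS + RT + deadline tasks */
    struct cfs_rq cfs;
};

/* Accumulate the migratable (CFS-only) task count for a sched group,
 * mirroring what the hunk above changes sum_nr_running to do. */
static unsigned int group_sum_nr_running(const struct rq *rqs, int n)
{
    unsigned int sum = 0;
    for (int i = 0; i < n; i++)
        sum += rqs[i].cfs.h_nr_running;
    return sum;
}
```

With an RT task on one runqueue, the two counts diverge: summing `nr_running` would overstate how many tasks the balancer can actually move.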
cpufreq drivers as long as they have this callback implemented
and irrespective of whether they are set_policy/target_index drivers.
The assumption is if the drivers find the GOV_STOP path to be a suitable
way of implementing what they want to do with the freq of the cpu
going offline, they will not implement the stop CPU callback at all.
Signed-off-by: Preeti U Murthy
---
drivers/cpufreq/cpufreq.c |2 +-
gets hotplugged out.
Signed-off-by: Preeti U Murthy
---
drivers/cpufreq/powernv-cpufreq.c |9 +
1 file changed, 9 insertions(+)
diff --git a/drivers/cpufreq/powernv-cpufreq.c
b/drivers/cpufreq/powernv-cpufreq.c
index 379c083..5a628f1 100644
--- a/drivers/cpufreq/powernv-cpufreq.c
() to smp_call_function_single() in
Patch[2/2]
---
Preeti U Murthy (2):
cpufreq: Allow stop CPU callback to be used by all cpufreq drivers
powernv/cpufreq: Set the pstate of the last hotplugged out cpu in
policy->cpus to minimum
drivers/cpufreq/cpufreq.c |2 +-
drivers/cpufreq/powernv-cpufre
On 09/05/2014 12:37 PM, Viresh Kumar wrote:
> On 5 September 2014 12:31, Preeti U Murthy wrote:
>
>> + smp_call_function_any(policy->cpus, set_pstate, &freq_data, 1);
>
> We will surely have a single CPU alive at this point, so should we call
> this function on policy->cpu
as compared to nap/fastsleep
and traced the problem to the pstate of the core being kept at a high even
when the core is offline. This can keep the socket pstate high, thus burning
power unnecessarily. This patchset fixes this issue.
---
Preeti U Murthy (2):
cpufreq: Allow stop CPU callback
On 09/04/2014 02:46 PM, Viresh Kumar wrote:
> On 4 September 2014 14:40, Preeti U Murthy wrote:
>> cpufreq: Allow stop CPU callback to be used by all cpufreq drivers
>>
>> Commit 367dc4aa introduced the stop CPU callback for intel_pstate
>> drivers. During the CPU_DOWN_PREPARE
On 09/03/2014 05:14 PM, Vincent Guittot wrote:
> On 3 September 2014 11:11, Preeti U Murthy wrote:
>> On 09/01/2014 02:15 PM, Vincent Guittot wrote:
>>> On 30 August 2014 19:50, Preeti U Murthy wrote:
>>>> Hi Vincent,
>>>>> index 18db43e..60ae1ce 100644
Or is it true that the usage of rq->nr_running in
the above places is incorrect?
Thanks
Regards
Preeti U Murthy
On 09/01/2014 02:15 PM, Vincent Guittot wrote:
> On 30 August 2014 19:50, Preeti U Murthy wrote:
>> Hi Vincent,
>>> index 18db43e..60ae1ce 100644
>>> --- a/kernel/sched/fair.c
>>> +++ b/kernel/sched/fair.c
>>> @@ -6049,6 +6049,14 @@ static bool update_sd_pick_busiest(struct lb_env
On 09/01/2014 01:35 PM, Vincent Guittot wrote:
> On 30 August 2014 19:07, Preeti U Murthy wrote:
>> Hi Vincent,
>>
>> On 08/26/2014 04:36 PM, Vincent Guittot wrote:
>>> capacity_orig is only changed for system with a SMT sched_domain level in
>>> order
I think I had asked
/*
> * We may be recently in ticked or tickless idle mode. At the first
> @@ -7388,38 +7410,45 @@ static inline int nohz_kick_needed(struct rq *rq)
>* balancing.
>*/
> if (likely(!atomic_read(&nohz.nr_cpus)))
> - return 0;
> + return false;
- unsigned long weight = sd->span_weight;
> unsigned long capacity = SCHED_CAPACITY_SCALE;
> struct sched_group *sdg = sd->groups;
>
> - if ((sd->flags & SD_SHARE_CPUCAPACITY) && weight > 1) {
> - if (sched_feat(ARCH_CAPACITY))
Aren't you missing
more places in
load balancing.
To cite examples: The above check says a cpu is overloaded when
rq->nr_running > 1. However if these tasks happen to be rt tasks, we
would anyway not be able to load balance. So while I was looking through
this patch, I noticed this and wanted to cross verify if we are checking
rq->nr_running on purpose in some places in load balancing; another
example being in nohz_kick_needed().
Regards
Preeti U Murthy
this check above? I understand that it is not
crucial, but that would also mean removing ARCH_CAPACITY sched_feat
altogether, wouldn't it?
Regards
Preeti U Murthy
- capacity *= arch_scale_smt_capacity(sd, cpu);
- else
- capacity
+ return false;
if (time_before(now, nohz.next_balance))
- return 0;
+ return false;
if (rq->nr_running >= 2)
Will this check ^^ not catch those cases which this patch is targeting?
Regards
Preeti U Murthy
- goto need_kick
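The decision being debated can be modelled in a short userspace sketch (illustrative, not the kernel's `nohz_kick_needed()`; the wrap-safe comparison mimics the kernel's `time_before()`): after the int to bool conversion, a CPU kicks nohz idle balancing only once its balance interval has expired and it has at least two runnable tasks.

```c
#include <stdbool.h>

/* Userspace sketch of the kick decision (illustrative names). */
static bool kick_needed(unsigned int nr_running,
                        unsigned long now, unsigned long next_balance)
{
    /* same wrap-safe comparison as the kernel's time_before() */
    if ((long)(now - next_balance) < 0)
        return false;
    return nr_running >= 2;
}
```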
this routine. Either this or
cpufreq_suspend() should be called in the reboot path generically. The
latter might not be an enticing option for other platforms.
Regards
Preeti U Murthy
>
> Now the deal is how do we move to nominal frequency on reboot..
> @Rafael: Any suggestions? How do we ensure that governors
> are stopped on these notifiers
On 08/18/2014 09:09 PM, Nicolas Pitre wrote:
> On Mon, 11 Aug 2014, Preeti U Murthy wrote:
>
>> As a first step towards improving the power awareness of the scheduler,
>> this patch enables a "dumb" state where all power management is turned off.
>> Whatever additionally we put into the kernel for cpu
On 08/18/2014 09:24 PM, Nicolas Pitre wrote:
> On Mon, 11 Aug 2014, Preeti U Murthy wrote:
>
>> The goal of the power aware scheduling design is to integrate all
>> policy, metrics and averaging into the scheduler. Today the
>> cpu power management is fragmented and hence inconsistent.
>> As a first step
y of load balancing?
Regards
Preeti U Murthy
>
>
> -#define SD_SIBLING_INIT (struct sched_domain) { \
> - .min_interval = 1, \
> - .max_interval = 2,
From: Alex Shi
Packing tasks among such domains can't save power; it just loses
performance. So no power balance on them.
Signed-off-by: Alex Shi
[Added CONFIG_SCHED_POWER switch to enable this patch]
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c |7 ---
1 file changed, 4
this time power friendly
LB chance, do nothing.
With this patch, the worst case for power scheduling -- kbuild, gets
similar performance/watts value among different policy.
Signed-off-by: Alex Shi
[Added CONFIG_SCHED_POWER switch to enable this patch]
Signed-off-by: Preeti U Murthy
.
Signed-off-by: Alex Shi
[Added CONFIG_SCHED_POWER switch to enable this patch]
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c | 126 +++
1 file changed, 125 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched
of group_min
Signed-off-by: Alex Shi
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c |4
1 file changed, 4 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index fd93eaf..6d40aa3 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4597,6 +4597,10
at the time, that is the power balance hope so.
Signed-off-by: Alex Shi
[Added CONFIG_SCHED_POWER switch to enable this patch]
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c | 51 +--
1 file changed, 49 insertions(+), 2 deletions
[Added CONFIG_SCHED_POWER switch to enable this patch]
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c |8
1 file changed, 8 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e7a677e..f9b2a21 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5372,6 +5372,11 @@ enum
an idlest cpu in leader group.
Morten Rasmussen caught a typo bug, and PeterZ reminded us to consider
rt_util. Thank you!
Inspired-by: Vincent Guittot
Signed-off-by: Alex Shi
[Added CONFIG_SCHED_POWER switch to enable this patch]
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c |
[Added CONFIG_SCHED_POWER switch to enable this patch]
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c | 33 ++---
1 file changed, 26 insertions(+), 7 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e993f1c..3db77e8 100644
--- a/kernel/sched/fai
an idlest cpu from the busiest while still has
utilization group, if the system is using power aware policy and
has such group.
Signed-off-by: Alex Shi
[Added CONFIG_SCHED_POWER switch to enable this patch]
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c | 117
Then the rq utilization is sum of rt util and cfs
util.
Signed-off-by: Alex Shi
[Added CONFIG_SCHED_POWER switch to enable this patch]
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c | 47 +++
1 file changed, 47 insertions(+)
diff --git a/kernel/sched
The 'sysctl_sched_burst_threshold' used for wakeup burst, set it as
double of sysctl_sched_migration_cost.
Signed-off-by: Alex Shi
[Added CONFIG_SCHED_POWER switch to enable this patch]
Signed-off-by: Preeti U Murthy
---
include/linux/sched/sysctl.h |3 +++
kernel/sched/fair.c |4
kernel/sysct
From: Alex Shi
Power aware fork/exec/wake balancing needs both of structs in incoming
patches. So move ahead before it.
Signed-off-by: Alex Shi
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c | 89 ++-
1 file changed, 45 insertions
[Added CONFIG_SCHED_POWER switch to enable this patch]
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c |9 +
1 file changed, 9 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 681ad06..3d6d081 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
i
[Added CONFIG_SCHED_POWER switch to enable this patch]
Signed-off-by: Preeti U Murthy
---
kernel/sched/debug.c |3 +++
kernel/sched/fair.c | 15 +++
kernel/sched/sched.h |9 +
3 files changed, 27 insertions(+)
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 627b3c3
sched balance policy is 'powersaving'.
User can change the policy by the command 'echo':
echo performance >
/sys/devices/system/cpu/sched_balance_policy/current_sched_balance_policy
Signed-off-by: Alex Shi
[Added CONFIG_SCHED_POWER switch to enable this patch]
Signed-off-by: Preeti U Murthy
in the group is full, power oriented.
The incoming patches will enable powersaving scheduling in CFS.
Signed-off-by: Alex Shi
[Added CONFIG_SCHED_POWER switch to enable this patch]
Signed-off-by: Preeti U Murthy
---
kernel/sched/fair.c |5 +
kernel/sched/sched.h |7 +++
2 files changed, 12
to fill up the topology
levels with appropriate cpuidle state information while they discover
it themselves.
Signed-off-by: Preeti U Murthy
---
drivers/cpuidle/cpuidle-powernv.c |8
include/linux/sched.h |3 +++
2 files changed, 11 insertions(+)
diff --git a/drivers
as the logic used
by the menu governor. However going ahead the heuristics will be tuned and
improved upon with metrics better known to the scheduler.
Note: cpufrequency is still left disabled when CONFIG_SCHED_POWER is selected.
Signed-off-by: Preeti U Murthy
---
drivers/cpuidle/Kconfig
powersavings.
This will enable us to benchmark and optimize the power aware scheduler
from scratch. If we are to benchmark it against the performance of the
existing design, we will get sufficiently distracted by the performance
numbers and get steered away from a sane design.
Signed-off-by: Preeti U Murthy
ling
sched: add new members of sd_lb_stats
sched: power aware load balance
sched: lazy power balance
sched: don't do power balance on share cpu power domain
Preeti U Murthy (3):
sched/power: Remove cpu idle state selection and cpu frequency tuning
sched/power: Move idle s
Signed-off-by: Preeti U Murthy pre...@linux.vnet.ibm.com
---
Documentation/ABI/testing/sysfs-devices-system-cpu | 23 +++
kernel/sched/fair.c| 69
2 files changed, 92 insertions(+)
diff --git a/Documentation/ABI/testing/sysfs-devices-system
401 - 500 of 1248 matches