k_to_cores(const
> struct cpumask *threads)
>
> static inline int cpu_nr_cores(void)
> {
> - return NR_CPUS >> threads_shift;
> + return nr_cpu_ids >> threads_shift;
> }
Thanks for the patch!
Reviewed-by: Preeti U. Murthy
>
> static inline cpuma
balance_interval can be as
large as 2*sd_weight. This should ensure that load balancing across
large scheduling domains is not carried out too often. nohz idle load
balancing may therefore not go through the entire scheduling domain
hierarchy for each CPU. This will cut down on the time spent as well.
Hi Tejun, Peter,
On 10/09/2014 06:36 PM, Tejun Heo wrote:
> On Thu, Oct 09, 2014 at 01:50:52PM +0530, Preeti U Murthy wrote:
>> However what remains to be answered is that the V2 of cgroup design -
>> the default hierarchy, tracks hotplug operations for children cgroups as
>
On 04/02/2015 11:29 AM, Jason Low wrote:
> On Wed, 2015-04-01 at 18:04 +0100, Morten Rasmussen wrote:
>> On Wed, Apr 01, 2015 at 07:49:56AM +0100, Preeti U Murthy wrote:
>
>>> I am sorry I don't quite get this. Can you please elaborate?
>>
>>
On 04/02/2015 04:12 PM, Ingo Molnar wrote:
>
> * Preeti U Murthy wrote:
>
>> It was found when doing a hotplug stress test on POWER, that the machine
>> either hit softlockups or rcu_sched stall warnings. The issue was
>> traced to commit 7cba160ad789a powernv/cp
On 04/02/2015 05:01 PM, Ingo Molnar wrote:
>
> * Preeti U Murthy wrote:
>
>> On 04/02/2015 04:12 PM, Ingo Molnar wrote:
>>>
>>> * Preeti U Murthy wrote:
>>>
>>>> It was found when doing a hotplug stress test on POWER, that the machine
put in place a proper
> namespace for all these callbacks, to make them easy to find and
> change: hotplug_cpu__*() or so, which in this case would turn into
> hotplug_cpu__tick_pull() or so?
>
>> That way at least its clear wtf happens when.
>
> Okay. I'll resurrect the fix with
On 04/02/2015 08:00 PM, tip-bot for Preeti U Murthy wrote:
> Commit-ID: 345527b1edce8df719e0884500c76832a18211c3
> Gitweb: http://git.kernel.org/tip/345527b1edce8df719e0884500c76832a18211c3
> Author: Preeti U Murthy
> AuthorDate: Mon, 30 Mar 2015 14:59:19 +0530
> Committe
On 04/03/2015 04:20 PM, Ingo Molnar wrote:
>
> * Preeti U Murthy wrote:
>
>> On 04/02/2015 08:00 PM, tip-bot for Preeti U Murthy wrote:
>>> Commit-ID: 345527b1edce8df719e0884500c76832a18211c3
>>> Gitweb:
>>> http://git.kernel.org/tip/345527b1e
non-idle.
As an aside, it is helpful to point out that the clock event device
programmed here is not a per-cpu clock device; it is a pseudo clock
device used by the broadcast framework alone. The per-cpu clock device
programming never goes through bc_set_next().
Signed-off-by: Preeti U
Hi Wanpeng, Jason,
On 03/27/2015 10:37 AM, Jason Low wrote:
> On Fri, 2015-03-27 at 10:12 +0800, Wanpeng Li wrote:
>> Hi Preeti,
>> On Thu, Mar 26, 2015 at 06:32:44PM +0530, Preeti U Murthy wrote:
>>>
>>> 1. An ILB CPU was chosen from the first numa domain to tri
Hi Morten,
On 03/27/2015 08:08 PM, Morten Rasmussen wrote:
> Hi Preeti,
>
> On Thu, Mar 26, 2015 at 01:02:44PM +0000, Preeti U Murthy wrote:
>> Fix this, by checking if a CPU was woken up to do nohz idle load
>> balancing, before it does load balancing upon itself. This way
need_resched())
> goto end;
>
> for_each_cpu(balance_cpu, nohz.idle_cpus_mask) {
If need_resched() becomes true between this point and the test within
the 'for' loop, you will end up with the original problem again. So the
patch does not completely solve the said problem.
c
> @@ -251,6 +251,12 @@ print_tickdevice(struct seq_file *m, struct tick_device
> *td, int cpu)
> SEQ_printf(m, "\n");
> }
>
> + if (dev->set_state_oneshot_stopped) {
> + SEQ_printf(m, " ones
100644
> --- a/kernel/time/tick-sched.c
> +++ b/kernel/time/tick-sched.c
> @@ -685,6 +685,9 @@ static ktime_t tick_nohz_stop_sched_tick(struct
> tick_sched *ts,
> if (unlikely(expires.tv64 == KTIME_MAX)) {
> if (ts->nohz_mode ==
break;
> } else {
> + /* Switchback to ONESHOT state */
> + if (likely(dev->state ==
> CLOCK_EVT_STATE_ONESHOT_STOPPED))
> + clockevents_set_state(dev,
> CLOCK_EVT_STATE_ONESHOT);
> +
> if (!tick_program_event(
> hrtimer_get_expires(&ts->sched_timer), 0))
> break;
>
Reviewed-by: Preeti U. Murthy
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
efore waking up more cpus and
> instead improve how additional cpus are kicked if they are needed.
It looks more sensible to do this in parallel. The scenario on POWER is
that tasks don't spread out across nodes until 10s after fork. This is
unforgivable, and we cannot afford the code to stay the way it is
an IPI and
is definitely more complex than this immediate fix.
Fixes:
http://linuxppc.10917.n7.nabble.com/offlining-cpus-breakage-td88619.html
Suggested-by: Thomas Gleixner
Signed-off-by: Preeti U. Murthy
[Changelog drawn from: https://lkml.org/lkml/2015/2/16/213]
---
Change from V1: https
On 03/30/2015 07:15 PM, Vincent Guittot wrote:
> On 26 March 2015 at 14:02, Preeti U Murthy wrote:
>> When a CPU is kicked to do nohz idle balancing, it wakes up to do load
>> balancing on itself, followed by load balancing on behalf of idle CPUs.
>> But it may end up wit
Hi Jason,
On 03/31/2015 12:25 AM, Jason Low wrote:
> Hi Preeti,
>
> I noticed that another commit 4a725627f21d converted the check in
> nohz_kick_needed() from idle_cpu() to rq->idle_balance, causing a
> potentially outdated value to be used if this cpu is able to
work to be done. So there are no redundant wakeups. Hence I see no
problem here.
The ILB CPU is woken up to do the nohz idle balancing, but with this
patch, may end up with no work for itself at the end of
nohz_idle_balance() and return to sleep. That is one wakeup for merely
doing idle load balancing, but thi
On 04/11/2015 02:05 PM, Peter Zijlstra wrote:
> On Fri, Apr 10, 2015 at 07:41:52PM +0530, Preeti U Murthy wrote:
>> The cpus_allowed and mems_allowed masks of a cpuset get overwritten
>> after each hotplug operation on the legacy hierarchy of cgroups so as to
>> remain in sync
ow resolution systems do not need
> accurate time for the expiry and the forwarding because everything
> happens tick aligned.
>
> So for !HIGHRES we have:
>
> static inline ktime_t hrtimer_cb_get_time(struct hrtimer *timer)
> {
> return timer->base->softirq_tim
ut the below patch and share the
results.
Regards
Preeti U Murthy
>
> Something like the following.
>
> ---
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index fdae26e..d636bf7 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
>
On 04/13/2015 12:31 PM, Peter Zijlstra wrote:
> On Sat, Apr 11, 2015 at 10:35:37AM +0200, Peter Zijlstra wrote:
>> On Fri, Apr 10, 2015 at 07:41:52PM +0530, Preeti U Murthy wrote:
>>> The cpus_allowed and mems_allowed masks of a cpuset get overwritten
>>> after each hotp
On 02/26/2015 11:01 AM, Preeti U Murthy wrote:
> On 02/23/2015 11:03 PM, Nicolas Pitre wrote:
>> On Mon, 23 Feb 2015, Nicolas Pitre wrote:
>>
>>> On Mon, 23 Feb 2015, Peter Zijlstra wrote:
>>>
>>>> The reported function that fails: bL_switcher_res
Hi Peter, Ingo, Thomas,
Can you please take a look at the conversation on this thread?
This fix is urgent.
Regards
Preeti U Murthy
On 03/02/2015 08:26 PM, Peter Zijlstra wrote:
> On Fri, Feb 27, 2015 at 02:19:05PM +0530, Preeti U Murthy wrote:
>> The problem reported in the
On 03/16/2015 08:26 PM, Peter Zijlstra wrote:
> On Thu, Mar 05, 2015 at 10:06:30AM +0530, Preeti U Murthy wrote:
>>
>> On 03/02/2015 08:23 PM, Peter Zijlstra wrote:
>>> On Thu, Feb 26, 2015 at 08:52:02AM +0530, Preeti U Murthy wrote:
>>>> The hrtimer mode of b
On 04/21/2015 05:23 PM, Thomas Gleixner wrote:
> On Mon, 20 Apr 2015, Preeti U Murthy wrote:
>
>> On 04/15/2015 02:38 AM, Thomas Gleixner wrote:
>>>> Now that we have the active_bases field in sync we can use it for
>>
>> This sentence appears a bit ambigu
* params[1] = chip_id
> + * params[2] = throttle_status
> + */
> OPAL_MSG_TYPE_MAX,
> };
Besides the above nit, the patch looks good.
Reviewed-by: Preeti U Murthy
>
>
pr_info("Pmax reduced due to %s on chip %x\n",
> + throttle_reason[reason], (int)chip_id);
> + } else {
> + throttled = false;
> + pr_info("%s on chip %x\n",
> +
On 04/15/2015 10:06 PM, Serge E. Hallyn wrote:
> On Wed, Apr 15, 2015 at 12:18:11PM -0400, Tejun Heo wrote:
>> On Wed, Apr 15, 2015 at 11:15:35AM -0500, Serge E. Hallyn wrote:
>>> The reason would be because it breaks "legacy" software. So that
>>> wou
_start() return value. Open code the
> logic which makes it readable as well.
>
> Signed-off-by: Thomas Gleixner
> Cc: Preeti U Murthy
> ---
> kernel/time/tick-broadcast-hrtimer.c | 8 +++++---
> 1 file changed, 5 insertions(+), 3 deletions(-)
>
> Index
t a/arch/powerpc/platforms/powernv/opal-wrappers.S
> b/arch/powerpc/platforms/powernv/opal-wrappers.S
> index a7ade94..bf15ead 100644
> --- a/arch/powerpc/platforms/powernv/opal-wrappers.S
> +++ b/arch/powerpc/platforms/powernv/opal-wrappers.S
> @@ -283,6 +283,7 @@ OPAL_CALL(opal_sensor_read,
!HIGHRES case its simply a constant.
>>
>>Export the variable, so we can simplify the usage sites.
>>
>>Signed-off-by: Thomas Gleixner
>>---
Reviewed-by: Preeti U Murthy
offset updates (clock_was_set()). Have a sequence cache in the
>>hrtimer cpu bases to evaluate whether the offsets must be updated or
>>not. This allows us later to avoid pointless cacheline pollution.
>>
>>Signed-off-by: Thomas Gleixner
>>Cc: John Stultz
Reviewed-
;more active clock bases are available and avoids touching the cache
>>lines of inactive clock bases.
>>
>>Signed-off-by: Thomas Gleixner
>>---
Regards
Preeti U Murthy
>> kernel/time/hrtimer.c | 17 -
>> 1 file changed, 8 insertions(+)
lling state
during suspend, this will cause an issue, won't it? This also gets me
wondering if polling state is an acceptable idle state during suspend,
given that the drivers with ARCH_HAS_CPU_RELAX permit entry into it
during suspend today. I would expect the cpus to be in a hardware
defined idle stat
On 05/27/2015 07:27 PM, Rafael J. Wysocki wrote:
> On Wed, May 27, 2015 at 2:25 PM, Daniel Lezcano
> wrote:
>> On 05/27/2015 01:31 PM, Preeti U Murthy wrote:
>>>
>>> On 05/27/2015 07:06 AM, Rafael J. Wysocki wrote:
>>>>
>>>> From: Rafael J. Wy
E_DRIVER_STATE_START - 1;
> + int i, ret = -ENXIO;
>
> - for (i = CPUIDLE_DRIVER_STATE_START; i < drv->state_count; i++) {
> + for (i = 0; i < drv->state_count; i++) {
> struct cpuidle_state *s = &drv->states[i];
> struct cpuidl
se if (!reason)
> + pr_info("OCC: Chip %u %s\n", (unsigned int)chip_id,
> + throttle_reason[reason]);
> + }
> + return 0;
> +}
> +
> +static struct notifier_block powernv_cpufreq_opal_nb = {
> + .notifier_call
k if Psafe_mode_active is set in PMSR. */
> next:
> - pmsr_lp = (s8)PMSR_LP(pmsr);
> - if ((pmsr_lp < powernv_pstate_info.min) ||
> - (pmsr & PMSR_PSAFE_ENABLE)) {
> + if (pmsr & PMSR_PSAFE_ENABLE) {
> thr
> @@ -414,6 +433,33 @@ static struct cpufreq_driver powernv_cpufreq_driver = {
> .attr = powernv_cpu_freq_attr,
What about the situation where although occ is active, this particular
chip has been throttled and we end up repeatedly reporting "pstate set
to safe" and &
for (i = 0; i < nr_chips; i++)
> + if (chips[i].id == chip_id)
> + schedule_work(&chips[i].throttle);
> }
Should we not do this only when we get unthrottled, so as to cross-verify
that it is indeed the case? In case of thrott
On 05/05/2015 11:36 AM, Shilpasri G Bhat wrote:
> Hi Preeti,
>
> On 05/05/2015 09:21 AM, Preeti U Murthy wrote:
>> Hi Shilpa,
>>
>> On 05/04/2015 02:24 PM, Shilpasri G Bhat wrote:
>>> The On-Chip-Controller(OCC) can throttle cpu frequency by reducing the
>
On 05/05/2015 12:03 PM, Shilpasri G Bhat wrote:
> Hi Preeti,
>
> On 05/05/2015 09:30 AM, Preeti U Murthy wrote:
>> Hi Shilpa,
>>
>> On 05/04/2015 02:24 PM, Shilpasri G Bhat wrote:
>>> Re-evaluate the chip's throttled state on receiving OCC_THROTTLE
>>&
;
> }
> @@ -545,6 +571,7 @@ static int init_chip_info(void)
> chips[i].throttled = false;
> cpumask_copy(&chips[i].mask, cpumask_of_node(chip[i]));
> INIT_WORK(&chips[i].throttle, powernv_cpufreq_work_fn);
> + chips[i].restore = false;
> }
>
> return 0;
>
Reviewed-by: Preeti U Murthy
to
output the broadcast masks for the range of nr_cpu_ids into
/proc/timer_list.
Signed-off-by: Preeti U Murthy
---
kernel/time/timer_list.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/kernel/time/timer_list.c b/kernel/time/timer_list.c
index c82b03c..1afc
_broadcast_exit();
> + if (entered_state == -EBUSY)
> + goto use_default;
>
> /*
>* Give the governor an opportunity to reflect on the outcome
> Index: linux-pm/drivers/cpuidle/cpuidle.c
> =
ate(struct cpuidle_d
> time_end = ktime_get();
> trace_cpu_idle_rcuidle(PWR_EVENT_EXIT, dev->cpu);
>
> + if (broadcast) {
> + if (WARN_ON_ONCE(!irqs_disabled()))
> + local_irq_disable();
> +
> + tick_broadcast_exit
active just because the timer is supposed to fire 5 minutes from now,
which is precisely what happens if we go the genpd way.
Hence I don't think we can trivially club timers with genpd unless we
have a way to power the timer PM domain down, depending on when it is
supposed to fire, in which case
it has no impact on the condition since tasks can migrate
> + * only from online cpus to other online cpus. Thus its safe
> + * to use raw_smp_processor_id.
> + */
> + TP_CONDITION(cpu_online(raw_smp_processor_id())),
>
> TP_STRUCT__entry(
> __field(un
On 01/22/2015 04:45 PM, Thomas Gleixner wrote:
> On Thu, 22 Jan 2015, Preeti U Murthy wrote:
>> On 01/21/2015 05:16 PM, Thomas Gleixner wrote:
>> How about when the cpu that is going offline receives a timer interrupt
>> just before setting its state to CPU_DEAD? That is
On 01/23/2015 10:29 AM, Michael Ellerman wrote:
> On Tue, 2015-20-01 at 11:26:49 UTC, Preeti U Murthy wrote:
>> @@ -177,34 +178,39 @@ static int powernv_add_idle_states(void)
>> return nr_idle_states;
>> }
>>
>> -idle_state_la
does not expose residency
values, use default values as a fallback mechanism. While at it, handle some
cleanups.
Signed-off-by: Preeti U Murthy
---
Changes from V1: https://lkml.org/lkml/2015/1/19/221
1. Used a better API for reading the DT property values.
2. Code cleanups
drivers/cpuidle
On 01/20/2015 11:39 AM, Michael Ellerman wrote:
> On Mon, 2015-19-01 at 10:26:48 UTC, Preeti U Murthy wrote:
>> Today if a cpu handling broadcasting of wakeups goes offline, it hands over
>
> It's *the* cpu handling broadcasting of wakeups right? ie. there's only ever
> one
it explicitly.
It fixes the bug reported here:
http://linuxppc.10917.n7.nabble.com/offlining-cpus-breakage-td88619.html
Signed-off-by: Preeti U Murthy
---
Changes from V1: https://lkml.org/lkml/2015/1/19/168
1. Modified the Changelog
kernel/time/clockevents.c|2 +-
kernel/time/tick-broadcast.c
it explicitly.
It fixes the bug reported here:
http://linuxppc.10917.n7.nabble.com/offlining-cpus-breakage-td88619.html
Signed-off-by: Preeti U Murthy
---
Changes from previous versions:
1. Modification to the changelog
2. Clarified the comments
kernel/time/clockevents.c|2 +-
kernel/time/tick
On 01/20/2015 04:51 PM, Thomas Gleixner wrote:
> On Mon, 19 Jan 2015, Preeti U Murthy wrote:
>> An idle cpu enters cpu_idle_poll() if it is set in the
>> tick_broadcast_force_mask.
>> This is so that it does not incur the overhead of entering idle states when
>> it is
On 01/20/2015 11:15 AM, Michael Ellerman wrote:
> On Mon, 2015-19-01 at 11:32:51 UTC, Preeti U Murthy wrote:
>> The device tree now exposes the residency values for different idle states.
>> Read
>> these values instead of calculating residency from the latency values. The
On 01/21/2015 03:26 PM, Thomas Gleixner wrote:
> On Tue, 20 Jan 2015, Preeti U Murthy wrote:
>> On 01/20/2015 04:51 PM, Thomas Gleixner wrote:
>>> On Mon, 19 Jan 2015, Preeti U Murthy wrote:
>>>> An idle cpu enters cpu_idle_poll() if it is set in the
or tick_check_broadcast_expired() returns false, without setting
the resched flag. So a cpu will be caught in cpu_idle_poll() needlessly,
thereby wasting power. Add an explicit check on cpu_idle_force_poll and
tick_check_broadcast_expired() to the exit condition of cpu_idle_poll()
to avoid this.
Signed-off-by: Preeti U
power. Hence exit the idle poll loop if the tick_broadcast_force_mask
gets cleared and enter idle states. Of course if the cpu has entered
cpu_idle_poll() on being asked to poll explicitly, it continues to poll
till it is asked to reschedule.
Signed-off-by: Preeti U Murthy
---
kernel/sched
phase so that it is visible to all cpus right after exiting
stop_machine, which is when they can re-enter idle. This ensures that
handover of the broadcast duty falls in place on offline, without having
to do it explicitly.
Signed-off-by: Preeti U Murthy
---
kernel/time/clockevents.c|2
does not expose residency
values, use default values as a fallback mechanism. While at it, clump
the common parts of device tree parsing into one chunk.
Signed-off-by: Preeti U Murthy
---
drivers/cpuidle/cpuidle-powernv.c | 39 -
1 file changed, 25 insertions
On 01/21/2015 05:16 PM, Thomas Gleixner wrote:
> On Tue, 20 Jan 2015, Preeti U Murthy wrote:
>> diff --git a/kernel/time/clockevents.c b/kernel/time/clockevents.c
>> index 5544990..f3907c9 100644
>> --- a/kernel/time/clockevents.c
>> +++ b/kernel/time/clockevents.c
the broadcast timer upon itself so as to seamlessly
continue both these operations.
It fixes the bug reported here:
http://linuxppc.10917.n7.nabble.com/offlining-cpus-breakage-td88619.html
Signed-off-by: Preeti U Murthy
---
Changes from V3: https://lkml.org/lkml/2015/1/20/236
1. Move handover of broadcast
does not expose residency
values, use default values as a fallback mechanism. While at it, use better
APIs to parse the powermgmt device tree node so as to avoid endianness
transformation.
Signed-off-by: Preeti U Murthy
---
Changes from V2: https://lkml.org/lkml/2015/1/27/1054
1. Used APIs
On 02/02/2015 12:09 PM, Michael Ellerman wrote:
> On Mon, 2015-02-02 at 10:40 +0530, Preeti U Murthy wrote:
>> The device tree now exposes the residency values for different idle states.
>> Read
>> these values instead of calculating residency from the latency values. The
does not expose residency
values, use default values as a fallback mechanism. While at it, use better
APIs to parse the powermgmt device tree node.
Signed-off-by: Preeti U Murthy
Acked-by: Stewart Smith
Acked-by: Michael Ellerman
---
Changes from the previous versions: https://lkml.org/lkml/2015/2
We currently read the information about idle states from the DT
so as to populate the cpuidle table. Use DT APIs that avoid
endianness conversions of the property values in the cpuidle driver.
Signed-off-by: Preeti U Murthy
---
This patch is based on top of the mainline
We currently read the information about idle states from the DT
so as to find out the cpu idle states supported by the platform.
Use DT APIs that avoid endianness conversions of the property values.
Signed-off-by: Preeti U Murthy
---
arch/powerpc/platforms/powernv
mer to force a scheduler tick to update the
jiffies. Since this happens on cpus in a package, all of them get soft
locked up.
Hope the above explanation makes sense.
Regards
Preeti U Murthy
On 12/12/2014 05:27 PM, Viresh Kumar wrote:
> Cc'ing Thomas as well..
>
> On 12 December 2014 at 01:1
On 12/15/2014 03:02 PM, Viresh Kumar wrote:
> On 15 December 2014 at 12:55, Preeti U Murthy
> wrote:
>> Hi Viresh,
>>
>> Let me explain why I think this is happening.
>>
>> 1. tick_nohz_irq_enter/exit() both get called *only if the cpu is idle*
>> and
rupt()) {
> /*
> * Prevent raise_softirq from needlessly waking up ksoftirqd
> * here, as softirq will be serviced on return from interrupt.
> @@ -363,7 +363,7 @@ static inline void tick_irq_exit(void)
> int cpu = smp_processor_id();
On 11/06/2014 05:57 PM, Daniel Lezcano wrote:
> On 11/06/2014 05:08 AM, Preeti U Murthy wrote:
>> On 11/05/2014 07:58 PM, Daniel Lezcano wrote:
>>> On 10/29/2014 03:01 AM, Preeti U Murthy wrote:
>>>> On 10/29/2014 12:29 AM, Daniel Lezcano wrote:
>>>>>
On 11/06/2014 07:12 PM, Daniel Lezcano wrote:
>
> Preeti,
>
> I am wondering if we aren't going to a false debate.
>
> If the latency_req is 0, we should just poll and not enter in any idle
> state even if one has zero exit latency. With a zero latency req, we
cpu_idle_poll(void)
> +{
> + rcu_idle_enter();
> + trace_cpu_idle_rcuidle(0, smp_processor_id());
> + arch_cpu_idle_poll();
> + trace_cpu_idle_rcuidle(PWR_EVENT_EXIT, smp_processor_id());
> + rcu_idle_exit();
> + return 1;
> +}
> +
> /**
> * cpuidle_idle_ca
ces the latency constraint specified
> externally, so one more step to the cpuidle/scheduler integration.
>
> Signed-off-by: Daniel Lezcano
> Acked-by: Nicolas Pitre
> Acked-by: Peter Zijlstra (Intel)
> Reviewed-by: Len Brown
> ---
Reviewed-by: Preeti U Murthy
Regards
or an opportunity to reflect on the outcome
>*/
> - cpuidle_reflect(dev, entered_state);
> + if (entered_state >= 0)
> + cpuidle_reflect(dev, entered_state);
>
> exit_idle:
> __current_set_polling();
>
Reviewed-by: Preeti U. Murthy
t; This patch does not change the current behavior.
>
> Signed-off-by: Daniel Lezcano
> Acked-by: Nicolas Pitre
> Reviewed-by: Len Brown
> ---
This patch looks good to me as well.
Reviewed-by: Preeti U. Murthy
checks on any debug parameters such as powersave_nap. We will then only
need to check for powersave_nap == 0 and return only if that is the
case. This check is still required since the user can disable all deep
idle states by setting powersave_nap to 0.
Regards
Preeti U Murthy
On 10/27/2014 06:56 PM
the logic of checking the
exit_latency, we thought it would be simpler to call into an arch
defined polling idle loop under the above circumstances. If that is no
better we could fall back to cpuidle_idle_loop().
Regards
Preeti U Murthy
. The power numbers have very little
variation between the runs with and without the patchset.
Thanks
Regards
Preeti U Murthy
On 11/25/2014 04:47 PM, Shreyas B. Prabhu wrote:
> Deep idle states like sleep and winkle are per core idle states. A core
> enters these states only when all the threads
re of frequency scaling
each time, and there is no need for explicit synchronization between the
policy cpus to do this.
Regards
Preeti U Murthy
On 11/05/2014 07:58 PM, Daniel Lezcano wrote:
> On 10/29/2014 03:01 AM, Preeti U Murthy wrote:
>> On 10/29/2014 12:29 AM, Daniel Lezcano wrote:
>>> On 10/28/2014 04:51 AM, Preeti Murthy wrote:
>>>> Hi Daniel,
>>>>
>>>> On Thu, Oct 23, 2
idle task was there
in the first place in the below code paths. It would help if you could
clarify this in the changelog as well.
>
> So lets remove it.
>
> Cc: Christoph Lameter
> Cc: Ingo Molnar
> Cc; John Stultz
> Cc: Peter Zijlstra
> Cc: Preeti U Murthy
> Cc: Rik v
x cpu timers? These call sites
seem to be concerned specifically with waking up nohz_full cpus as far
as I can see. IOW there is no scheduling ipi that we can fall back on in
these paths.
> careful review of resched_curr() callers.
>
Regards
Preeti U Murthy
at least on powerpc, after handling an interrupt we will
call irq_exit() and reevaluate starting of ticks. So in our case, even if
scheduler_ipi() callers do not call irq_exit(), it will be called after
handling the reschedule interrupt.
Regards
Preeti U Murthy
>
man Khandual
> Cc: Stephane Eranian
> Cc: Preeti U Murthy
> Cc: Ingo Molnar
> Cc: Peter Zijlstra
> Signed-off-by: Madhavan Srinivasan
> ---
> +static void nest_change_cpu_context(int old_cpu, int new_cpu)
> +{
> + int i;
> +
> + if
; not to preempt the currently running task to switch to
> it yet, but we will want to preempt the currently running
> task at a later point in time?
+1. This is not taken care of either, as far as I can see.
Regards
Preeti U Murthy
>
on the hrtimer
mode of broadcast in periodic mode. This patch takes care of doing this
on powerpc. The cpus would not have entered into such deep cpuidle
states in periodic mode on powerpc anyway. So there is no loss here.
Signed-off-by: Preeti U Murthy
---
drivers/cpuidle/cpuidle-powernv.c | 15
e CPUs which are not Full Dynticks
> in FULL_NOHZ configured systems. It will not bring about functional
> changes if NOHZ_FULL is not configured, because is_housekeeping_cpu()
> always returns true in CONFIG_NO_HZ_FULL=n.
>
> Signed-off-by: Vatika Harlalka
> ---
Reviewed-by: Preet
hout TICK_ONESHOT, the
>> machine will hang.
>
> OK, which -stable? All of them or any specific series?
This needs to go into stable/linux-3.19.y,
stable/linux-4.0.y, stable/linux-4.1.y.
Thanks
Regards
Preeti U Murthy
>
> Rafael
>
On 06/01/2015 12:49 PM, Viresh Kumar wrote:
> On 01-06-15, 01:40, Preeti U Murthy wrote:
>
> I have to mention that this is somewhat inspired by:
>
> https://git.linaro.org/people/viresh.kumar/linux.git/commit/1e37f1d6ae12f5896e4e216f986762c3050129a5
>
> and I was waitin
On 06/02/2015 11:09 AM, Viresh Kumar wrote:
> On 02-06-15, 11:01, Preeti U Murthy wrote:
>> How will a policy lock help here at all, when cpus from multiple
>> policies are calling into __cpufreq_governor()? How will a policy lock
>> serialize their entry into cpufreq_gov
On 06/02/2015 11:41 AM, Viresh Kumar wrote:
> On 02-06-15, 11:33, Preeti U Murthy wrote:
>> No, dbs_data is a governor wide data structure and not a policy wide
>
> Yeah, that's the common part which I was referring to. But normally
> its just read for policies in START/STOP, th