2013/4/12 Peter Zijlstra :
> On Fri, 2013-04-12 at 12:50 +0200, Peter Zijlstra wrote:
>
>> I'll try and dig through the rest of your email later.. sorry for
>> being a tad slow etc.
>
>
> So at thread_group_cputimer() we initialize the cputimer->cputime state
> by using thread_group_cputime() wh
2013/4/17 Olivier Langlois :
> Move the call to stop_process_timers() in order to:
>
> 1. Catch the exceptional case where it would be
> started without arming any timers in posix_cpu_timer_set()
Oh I see now. fastpath_timer_check() sees the sig->cputimer.running
but returns 0 because the
2013/4/19 Frederic Weisbecker :
>> if (!task_cputime_zero(&tsk->cputime_expires)) {
>> struct task_cputime task_sample = {
>> - .utime = utime,
>> - .stime = stime,
>>
IPIs, is a stub
that will be implemented when we get the tick stop/restart infrastructure
in.
Signed-off-by: Frederic Weisbecker
Cc: Chris Metcalf
Cc: Christoph Lameter
Cc: Geoff Levand
Cc: Gilad Ben Yossef
Cc: Hakan Akkan
Cc: Ingo Molnar
Cc: Kevin Hilman
Cc: Li Zhong
Cc: Oleg Nesterov
Cc
modification.
Signed-off-by: Frederic Weisbecker
Cc: Chris Metcalf
Cc: Christoph Lameter
Cc: Geoff Levand
Cc: Gilad Ben Yossef
Cc: Hakan Akkan
Cc: Ingo Molnar
Cc: Kevin Hilman
Cc: Li Zhong
Cc: Oleg Nesterov
Cc: Paul E. McKenney
Cc: Paul Gortmaker
Cc: Peter Zijlstra
Cc: Steven Rostedt
Cc: Thomas
Add a new helper that the full dynticks infrastructure can
call in order to know if it can safely stop the tick from
the posix CPU timers subsystem's point of view.
Signed-off-by: Frederic Weisbecker
Cc: Chris Metcalf
Cc: Christoph Lameter
Cc: Geoff Levand
Cc: Gilad Ben Yossef
Cc: Hakan
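The predicate this patch describes can be modeled in a few lines of plain C. The sketch below is a userspace illustration only: the struct, its fields, and the function signature are assumptions for the example, not the kernel's real task_cputime machinery.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative model only: the real kernel check inspects
 * tsk->cputime_expires and the signal struct's expiry fields. */
struct task_sketch {
    bool thread_timer_armed;   /* per-thread posix CPU timer */
    bool process_timer_armed;  /* process-wide posix CPU timer */
};

/* The tick drives posix CPU timer expiry, so the full dynticks
 * engine may only stop it when the current task has no armed
 * posix CPU timer. */
static bool posix_cpu_timers_can_stop_tick(const struct task_sketch *tsk)
{
    return !tsk->thread_timer_armed && !tsk->process_timer_armed;
}
```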
found while discussing a patch with Olivier Langlois.
As a bonus, it also provides the other side of posix cpu timers handling
in dynticks with the new helper to check before stopping the tick.
Thanks.
Frederic Weisbecker (3):
nohz: New APIs to re-evaluate the tick on full dynticks CPUs
Kick the current CPU's tick by sending it a self IPI when
an event is queued on the rotation list and it is the first
element inserted. This makes sure that perf_event_task_tick()
works on full dynticks CPUs.
Signed-off-by: Frederic Weisbecker
Cc: Chris Metcalf
Cc: Christoph Lameter
Cc:
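The "kick only on the first insertion" idea above reduces to tracking the empty-to-non-empty transition. A minimal sketch, under assumed names (the kernel chains list_head entries in the perf CPU context rather than keeping a counter):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical per-CPU rotation list model. */
struct rotation_list_sketch {
    size_t nr_events;
};

/* Only the empty -> non-empty transition warrants a kick: the
 * caller sends a self-IPI when this returns true, so an already
 * populated list is not re-kicked on every insertion. */
static bool rotation_list_add(struct rotation_list_sketch *rl)
{
    bool first = (rl->nr_events == 0);
    rl->nr_events++;
    return first;
}
```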
subsystems providing *_can_stop_tick()
helpers suggested by Peter Zijlstra a while ago).
Signed-off-by: Frederic Weisbecker
Cc: Chris Metcalf
Cc: Christoph Lameter
Cc: Geoff Levand
Cc: Gilad Ben Yossef
Cc: Hakan Akkan
Cc: Ingo Molnar
Cc: Kevin Hilman
Cc: Li Zhong
Cc: Oleg Nesterov
Cc: Paul
a self IPI to avoid messing
up with any current lock scenario.
Signed-off-by: Frederic Weisbecker
Cc: Chris Metcalf
Cc: Christoph Lameter
Cc: Geoff Levand
Cc: Gilad Ben Yossef
Cc: Hakan Akkan
Cc: Ingo Molnar
Cc: Kevin Hilman
Cc: Li Zhong
Cc: Oleg Nesterov
Cc: Paul E. McKenney
Cc: Paul
duler IPI.
(Reusing the scheduler IPI rather than implementing
a new IPI was suggested by Peter Zijlstra a while ago)
Signed-off-by: Frederic Weisbecker
Cc: Chris Metcalf
Cc: Christoph Lameter
Cc: Geoff Levand
Cc: Gilad Ben Yossef
Cc: Hakan Akkan
Cc: Ingo Molnar
Cc: Kevin Hilman
Cc: Li Zhon
Eventually try to disable tick on irq exit, now that the
fundamental infrastructure is in place.
Signed-off-by: Frederic Weisbecker
Cc: Chris Metcalf
Cc: Christoph Lameter
Cc: Geoff Levand
Cc: Gilad Ben Yossef
Cc: Hakan Akkan
Cc: Ingo Molnar
Cc: Kevin Hilman
Cc: Li Zhong
Cc: Oleg
Provide a new helper that helps full dynticks CPUs avoid
stopping their tick when there are events in the local
rotation list.
This way we make sure that perf_event_task_tick() is serviced
on demand.
Signed-off-by: Frederic Weisbecker
Cc: Chris Metcalf
Cc: Christoph Lameter
Cc
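As a sketch of the check this changelog describes (illustrative names and a plain counter standing in for the real perf CPU context state):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical per-CPU perf state. */
struct perf_cpu_sketch {
    size_t nr_rotation_events;
};

/* The tick may only stop when the local rotation list is empty;
 * otherwise perf_event_task_tick() would never be serviced and
 * pending events would never rotate. */
static bool perf_event_can_stop_tick(const struct perf_cpu_sketch *pc)
{
    return pc->nr_rotation_events == 0;
}
```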
e.g.:
a CPU that runs a SCHED_FIFO task doesn't need to maintain
fairness against local pending tasks of the fair class.
But keep things simple for now.
Signed-off-by: Frederic Weisbecker
Cc: Chris Metcalf
Cc: Christoph Lameter
Cc: Geoff Levand
Cc: Gilad Ben Yossef
Cc: Hakan Akkan
Cc: I
infrastructure that performs the tick dependency
checks on irq exit and shut it down if these checks show that we
can do it safely.
Signed-off-by: Frederic Weisbecker
Cc: Chris Metcalf
Cc: Christoph Lameter
Cc: Geoff Levand
Cc: Gilad Ben Yossef
Cc: Hakan Akkan
Cc: Ingo Molnar
Cc: Kevin Hilman
Cc
Provide a new helper to be called from the full dynticks engine
before stopping the tick in order to make sure we don't stop
it when there is more than one task running on the CPU.
This way we make sure that the tick stays alive to maintain
fairness.
Signed-off-by: Frederic Weisbecker
Cc:
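The scheduler-side condition described above boils down to a single comparison. A sketch under an assumed signature (the real helper would read the runqueue's nr_running itself rather than take it as a parameter):

```c
#include <assert.h>
#include <stdbool.h>

/* With more than one runnable task on the CPU, the tick must keep
 * firing so the fair class can preempt and share the CPU; with at
 * most one task, fairness needs no periodic intervention. */
static bool sched_can_stop_tick(unsigned int nr_running)
{
    return nr_running <= 1;
}
```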
nce() usually called from the tick).
I hope we can handle these things progressively in the long run.
Thanks.
---
Frederic Weisbecker (10):
posix_timers: Fix pre-condition to stop the tick on full dynticks
perf: Kick full dynticks CPU if events rotation is needed
perf: New helper to preven
The test that checks whether a CPU can stop its tick from the
posix CPU timers angle was mistakenly inverted.
What we want is to prevent the tick from being stopped as long
as the current CPU's task runs a posix CPU timer.
Fix this.
Signed-off-by: Frederic Weisbecker
Cc: Chris Metcalf
Cc: Chri
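The bug class is easy to see in miniature. The sketch below (hypothetical names, not the kernel's code) contrasts the inverted predicate with the intended one:

```c
#include <assert.h>
#include <stdbool.h>

/* Intended semantics: the tick may stop only while no posix CPU
 * timer is armed on the current task. */
static bool can_stop_tick_fixed(bool timer_armed)
{
    return !timer_armed;
}

/* The inverted form of the same test: it allows stopping the tick
 * exactly when a timer is armed, which is the failure described
 * in the changelog above. */
static bool can_stop_tick_buggy(bool timer_armed)
{
    return timer_armed;
}
```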
from users who don't know how to diagnose why the
tick didn't stop with their settings.
Thanks.
---
Frederic Weisbecker (2):
nohz: Select wide RCU nocb for full dynticks
nohz: Add basic tracing
include/trace/events/timer.h | 21 +
kernel/time/Kconfig
full dynticks range.
Suggested-by: Christoph Lameter
Suggested-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
Cc: Chris Metcalf
Cc: Christoph Lameter
Cc: Geoff Levand
Cc: Gilad Ben Yossef
Cc: Hakan Akkan
Cc: Ingo Molnar
Cc: Kevin Hilman
Cc: Li Zhong
Cc: Oleg Nesterov
Cc: Paul E
It's not obvious why the full dynticks subsystem
doesn't always stop the tick: whether this is due to kthreads,
posix timers, perf events, etc.
These new tracepoints are here to help the user diagnose
the failures and test this feature.
Signed-off-by: Frederic Weisbecker
test CONFIG_NO_HZ_FULL, let's disable the
watchdog by default at boot time when full dynticks is enabled.
The user can still enable it later at runtime using
proc or sysctl.
Reported-by: Steven Rostedt
Suggested-by: Peter Zijlstra
Signed-off-by: Frederic Weisbecker
Cc: Steven Rostedt
Cc: P
, let's instead only register the watchdog
threads when they are needed. This way we don't need to think about
hotplug operations and we don't burden CPU onlining when the watchdog
is simply disabled.
Suggested-by: Srivatsa S. Bhat
Signed-off-by: Frederic Weisbecker
Cc: Sriva
tick.
For those who want to check through git:
git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
timers/core-preview
Thanks,
Frederic
---
Frederic Weisbecker (5):
sched: Disable lb_bias feature for full dynticks
watchdog: Register / unregiste
implements the other side: restart the tick from the
IPI if we need to report a quiescent state.
NOTE: we can probably do better and rather act from the IPI without
restarting the tick.
Signed-off-by: Frederic Weisbecker
Cc: Steven Rostedt
Cc: Paul E. McKenney
Cc: Ingo Molnar
Cc: Thomas Gl
Building full dynticks now implies that all CPUs are forced
into RCU nocb mode through CONFIG_RCU_NOCB_CPU_ALL.
The dynamic check has become useless.
Signed-off-by: Frederic Weisbecker
Cc: Steven Rostedt
Cc: Paul E. McKenney
Cc: Ingo Molnar
Cc: Andrew Morton
Cc: Thomas Gleixner
Cc: Peter
machine cannot support it.
Signed-off-by: Steven Rostedt
Cc: Paul E. McKenney
Cc: Ingo Molnar
Cc: Andrew Morton
Cc: Thomas Gleixner
Cc: H. Peter Anvin
Cc: Peter Zijlstra
Cc: Borislav Petkov
Cc: Li Zhong
Signed-off-by: Frederic Weisbecker
---
kernel/time/tick-sched.c |5 +
1 files
ently the only user of the decayed
load records.
The first load index that represents the current runqueue load weight
is still maintained and usable.
Signed-off-by: Frederic Weisbecker
Cc: Ingo Molnar
Cc: Li Zhong
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Steven Rostedt
Cc: Thomas Gleixne
Hi all,
I've been using a homemade test for full dynticks all this time and I regret
I haven't shared it much sooner. I've been asked for such a tool many times,
and getting full dynticks working correctly is often not a piece of cake.
Here it is, a very basic test that runs a userspace loop for
On Wed, Jun 12, 2013 at 01:03:16PM -0400, Don Zickus wrote:
> On Wed, Jun 12, 2013 at 04:02:36PM +0200, Frederic Weisbecker wrote:
> > When the watchdog runs, it prevents the full dynticks
> > CPUs from stopping their tick because the hard lockup
> > detector uses perf events
On Sun, Jun 09, 2013 at 08:34:10PM +0200, Martin Steigerwald wrote:
> Am Samstag, 8. Juni 2013, 22:34:44 schrieb Martin Steigerwald:
> > Am Freitag, 24. Mai 2013, 13:03:18 schrieb Martin Steigerwald:
> > > Hi!
> > >
> > > With 3.10-rc2 I see fan always or almost always on, even during extended
> >
On Sat, Jun 01, 2013 at 09:45:26PM +0200, Oleg Nesterov wrote:
> Hello.
>
> Cleanups, on top of
>
> [PATCH 0/2]: WARN_ONCE in arch/x86/kernel/hw_breakpoint.c
So this series doesn't have the fix for the warning?
>
> series.
>
> Oleg.
>
> kernel/events/hw_breakpoint.c | 91
>
tch this triggers the same problem/WARN_ON(), after
> the patch it correctly fails with -ENOSPC.
>
> Reported-by: Vince Weaver
> Signed-off-by: Oleg Nesterov
> Cc:
Looks good, thanks!
Acked-by: Frederic Weisbecker
> ---
> kernel/events/hw_breakpoint.c |2 +-
>
On Thu, Jun 13, 2013 at 10:02:07AM -0400, Don Zickus wrote:
> On Thu, Jun 13, 2013 at 03:10:59PM +0200, Frederic Weisbecker wrote:
> > On Wed, Jun 12, 2013 at 01:03:16PM -0400, Don Zickus wrote:
> > > On Wed, Jun 12, 2013 at 04:02:36PM +0200, Frederic Weisbecker wrote:
> &g
On Thu, Jun 13, 2013 at 10:45:15AM -0400, Don Zickus wrote:
> On Thu, Jun 13, 2013 at 04:22:12PM +0200, Frederic Weisbecker wrote:
> > On Thu, Jun 13, 2013 at 10:02:07AM -0400, Don Zickus wrote:
> > > On Thu, Jun 13, 2013 at 03:10:59PM +0200, Frederic Weisbecker wrote:
> >
2013/6/13 Oleg Nesterov :
> On 06/13, Frederic Weisbecker wrote:
>>
>> On Sat, Jun 01, 2013 at 09:45:26PM +0200, Oleg Nesterov wrote:
>> > Hello.
>> >
>> > Cleanups, on top of
>> >
>> > [PATCH 0/2]: WARN_ONCE in arch/x86/kernel/hw_br
ers the same WARN_ONCE("Can't find any breakpoint slot") in
> arch_install_hw_breakpoint().
>
> Signed-off-by: Oleg Nesterov
> Cc:
Acked-by: Frederic Weisbecker
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vg
> Change toggle_bp_slot() to always call list_add/list_del after
> toggle_bp_task_slot(). This way old_idx is task_bp_pinned() and
> this entry should be decremented, new_idx is +/-weight and we
> need to increment this element. The code/logic looks obvious.
>
> Signed-off-by: O
arg.
>
> Signed-off-by: Oleg Nesterov
Acked-by: Frederic Weisbecker
the code and avoid the
> code duplication.
>
> Signed-off-by: Oleg Nesterov
Acked-by: Frederic Weisbecker
On Tue, Jun 04, 2013 at 10:21:06AM +0200, Vincent Guittot wrote:
> On 4 June 2013 00:48, Frederic Weisbecker wrote:
> > On Thu, May 30, 2013 at 05:23:05PM +0200, Vincent Guittot wrote:
> >> I have faced a sequence where the Idle Load Balance was sometime not
> >>
On Tue, Jun 04, 2013 at 11:36:11AM +0200, Peter Zijlstra wrote:
>
> The best I can seem to come up with is something like the below; but I think
> its ghastly. Surely we can do something saner with that bit.
>
> Having to clear it at 3 different places is just wrong.
We could clear the flag earl
happens when a user disables perf
> function tracing or other dynamically allocated function tracers, but it
> allows us to continue to debug RCU and context tracking with function
> tracing.
>
> Signed-off-by: Steven Rostedt
Acked-by: Frederic Weisbecker
If ftrace were to use rcu
On Tue, Jun 04, 2013 at 01:11:47PM +0200, Vincent Guittot wrote:
> On 4 June 2013 12:26, Frederic Weisbecker wrote:
> > On Tue, Jun 04, 2013 at 11:36:11AM +0200, Peter Zijlstra wrote:
> >>
> >> The best I can seem to come up with is something like the below; but I
On Tue, Jun 04, 2013 at 01:15:10PM +0200, Peter Zijlstra wrote:
> On Tue, Jun 04, 2013 at 12:26:22PM +0200, Frederic Weisbecker wrote:
> > @@ -1393,8 +1392,12 @@ static void sched_ttwu_pending(void)
> >
> > void scheduler_ipi(void)
> > {
> > - if (
tion in case user_exit() is traced
> + * and the tracer calls preempt_enable_notrace() causing
> + * an infinite recursion.
> + */
> + preempt_disable_notrace();
> + prev_ctx = exception_enter();
> + preempt_enable_no_resched_notrace();
> +
> + preempt_schedule();
>
On Fri, May 31, 2013 at 09:30:18PM -0400, Steven Rostedt wrote:
> diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c
> index 65349f0..15c9f2e 100644
> --- a/kernel/context_tracking.c
> +++ b/kernel/context_tracking.c
> @@ -71,6 +71,44 @@ void user_enter(void)
> local_irq_resto
On Tue, Jun 04, 2013 at 08:16:29AM -0400, Steven Rostedt wrote:
> On Tue, 2013-06-04 at 14:09 +0200, Frederic Weisbecker wrote:
> >
> > > +/**
> > > + * preempt_schedule_context - preempt_schedule called by tracing
> > > + *
> > > + * The tracing
On Tue, Jun 04, 2013 at 08:11:21AM -0400, Steven Rostedt wrote:
> On Tue, 2013-06-04 at 13:03 +0200, Frederic Weisbecker wrote:
>
> > If ftrace were to use rcu_dereference_sched() instead of
> > rcu_dereference_raw(), I guess
> > the issue would have been detected before
On Tue, Jun 04, 2013 at 01:48:47PM +0200, Vincent Guittot wrote:
> On 4 June 2013 13:19, Frederic Weisbecker wrote:
> > On Tue, Jun 04, 2013 at 01:11:47PM +0200, Vincent Guittot wrote:
> >> On 4 June 2013 12:26, Frederic Weisbecker wrote:
> >> > On Tue, Jun 04, 2
On Tue, Jun 04, 2013 at 05:29:39PM +0200, Vincent Guittot wrote:
> On 4 June 2013 16:44, Frederic Weisbecker wrote:
> > On Tue, Jun 04, 2013 at 01:48:47PM +0200, Vincent Guittot wrote:
> >> On 4 June 2013 13:19, Frederic Weisbecker wrote:
> >> > On Tue, Jun 04, 201
2013/5/21 Srivatsa S. Bhat :
> On 05/20/2013 09:31 PM, Frederic Weisbecker wrote:
>> When the watchdog code is boot-disabled by the user, for example
>> through the 'nmi_watchdog=0' boot option, the setup() callback of
>> the watchdog kthread requests to park the t
On Thu, Jun 06, 2013 at 11:31:36AM -0400, Dave Jones wrote:
> On Tue, May 14, 2013 at 03:21:07AM +0200, Frederic Weisbecker wrote:
> > On Thu, May 09, 2013 at 05:10:26PM -0400, Dave Jones wrote:
> > > On Thu, May 09, 2013 at 11:02:08PM +0200, Frederic Weisbecker wrote:
>
per_cpu(*cpu_events, cpu).
>
> Signed-off-by: Oleg Nesterov
Acked-by: Frederic Weisbecker
On Tue, Jun 18, 2013 at 12:36:32PM +0200, Peter Zijlstra wrote:
> On Thu, Jun 13, 2013 at 10:02:07AM -0400, Don Zickus wrote:
> > On Thu, Jun 13, 2013 at 03:10:59PM +0200, Frederic Weisbecker wrote:
> > > On Wed, Jun 12, 2013 at 01:03:16PM -0400, Don Zickus wrote:
> > >
pack the info into a single struct.
But I'm not sure why you think using per-cpu is a problem. It's not
only intended for optimized local use, it's also convenient for
allocation and de-allocation, or for static definitions. I'm not sure
why bootmem would make more sense.
Oth
ta_exec and we need
> consistency, but (3) need exact current time (aka cpu clock time) because
> an expires should be "now + timeout" by definition.
>
> This patch distinguishes between two kinds of "now".
>
> Cc: Olivier Langlois
> Cc: Thomas Gleixner
> So, 64bit can avoid holding rq lock when add_delta is false and
> delta_exec is 0.
>
> Cc: Olivier Langlois
> Cc: Thomas Gleixner
> Cc: Frederic Weisbecker
> Cc: Ingo Molnar
> Suggested-by: Paul Turner
> Acked-by: Peter Zijlstra
> Signed-off-by: KOSAKI Motohiro
> ---
On Tue, Jun 18, 2013 at 08:15:20AM -0700, Paul E. McKenney wrote:
> On Tue, Jun 18, 2013 at 02:20:35PM +, Christoph Lameter wrote:
> > On Wed, 12 Jun 2013, Frederic Weisbecker wrote:
> >
> > > Here it is, a very basic test that runs a userspace loop for ten seconds
On Tue, Jun 18, 2013 at 04:42:25PM +0200, Oleg Nesterov wrote:
> On 06/18, Frederic Weisbecker wrote:
> >
> > On Sun, Jun 02, 2013 at 09:50:57PM +0200, Oleg Nesterov wrote:
> > > This patch simply moves all per-cpu variables into the new single
> > > per-cpu "
On Tue, Jun 18, 2013 at 11:17:41AM -0400, KOSAKI Motohiro wrote:
> >> +#ifdef CONFIG_64BIT
> >> + /*
> >> + * 64-bit doesn't need locks to atomically read a 64bit value. So we
> >> + * have two optimization chances, 1) when caller doesn't need
> >> + * delta_exec and 2) when the
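The quoted comment's argument is that an aligned 64-bit load is a single atomic instruction on LP64 targets, so a reader that needs no delta accounting can skip the lock entirely. An illustrative sketch (hypothetical helper, not the kernel's code; the real fallback path would take the rq lock):

```c
#include <assert.h>
#include <stdint.h>

/* Lock-free snapshot of a 64-bit runtime counter where the
 * architecture permits it. */
static uint64_t read_sum_exec_runtime(const uint64_t *sum, int need_delta)
{
#if UINTPTR_MAX == 0xffffffffffffffffULL
    if (!need_delta)
        return *sum;   /* aligned 64-bit load: atomic, no lock */
#endif
    /* 32-bit target, or a consistent delta required: a real
     * implementation would take the runqueue lock around this
     * read to avoid torn values. */
    return *sum;
}
```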
On Tue, Jun 18, 2013 at 11:17:33AM -0700, Paul E. McKenney wrote:
> On Tue, Jun 18, 2013 at 06:22:57PM +0200, Frederic Weisbecker wrote:
> > On Tue, Jun 18, 2013 at 08:15:20AM -0700, Paul E. McKenney wrote:
> > > On Tue, Jun 18, 2013 at 02:20:35PM +, Christoph Lameter wrote:
> is disabled (scheduler_tick_max_deferment() returns KTIME_MAX.)
>
> Cc: Frederic Weisbecker
> Signed-off-by: Kevin Hilman
This looks like a useful thing but I wonder if a debugfs file would
be more appropriate than sysctl.
The scheduler tick max deferment is supposed to be a temporary
hack so we probab
d on 32-bit ARM platform when extending the max
> deferment value.
>
> Cc: Frederic Weisbecker
> Signed-off-by: Kevin Hilman
Right, if we make it tunable we need that patch.
Thanks!
Acked-by: Frederic Weisbecker
> ---
> kernel/sched/core.c | 2 +-
> 1 file changed, 1 i
On Tue, Jun 18, 2013 at 11:12:17AM -0400, KOSAKI Motohiro wrote:
> On Tue, Jun 18, 2013 at 10:20 AM, Frederic Weisbecker
> wrote:
> > On Sun, May 26, 2013 at 05:35:44PM -0400, kosaki.motoh...@gmail.com wrote:
> >> From: KOSAKI Motohiro
> >>
> >> Curr
work on it. But since I'll be off next week, I prefer to have
at least a working temporary solution before the next merge window.
Thanks,
Frederic
---
Frederic Weisbecker (4):
watchdog: Register / unregister watchdog kthreads on sysctl control
watchdog: Rename conf
Us.
Anyway at least this patchset can help start a discussion.
Those who want to play can fetch from:
git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
sched/core
Thanks,
Frederic
---
Frederic Weisbecker (4):
sched: Disable lb_bias featu
preempt_schedule() and preempt_schedule_context() open
code their preemptability checks.
Use the standard API instead for consolidation.
Signed-off-by: Frederic Weisbecker
Cc: Ingo Molnar
Cc: Li Zhong
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Steven Rostedt
Cc: Thomas Gleixner
Cc
Gather the common code that computes the pending idle cpu load
to decay.
Signed-off-by: Frederic Weisbecker
Cc: Ingo Molnar
Cc: Li Zhong
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Steven Rostedt
Cc: Thomas Gleixner
Cc: Borislav Petkov
Cc: Alex Shi
Cc: Paul Turner
Cc: Mike Galbraith
Cc
it is currently the only user of the decayed
load records.
The first load index that represents the current runqueue load weight
is still maintained and usable.
Signed-off-by: Frederic Weisbecker
Cc: Ingo Molnar
Cc: Li Zhong
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Steven Rostedt
Cc: T
Now that the decaying cpu load stat indexes used by LB_BIAS
are ignored in full dynticks mode, let's conditionally build
that code to optimize the off case.
Signed-off-by: Frederic Weisbecker
Cc: Ingo Molnar
Cc: Li Zhong
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Steven Rostedt
Cc: T
Reported-by: Steven Rostedt
Signed-off-by: Frederic Weisbecker
Cc: Jiri Bohac
Cc: Steven Rostedt
Cc: Paul E. McKenney
Cc: Ingo Molnar
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Borislav Petkov
---
kernel/time/tick-broadcast.c | 11 ---
1 files changed, 8 insertions(+), 3 deletions(-)
Thanks,
Frederic
---
Frederic Weisbecker (4):
vtime: Use consistent clocks among nohz accounting
watchdog: Boot-disable by default on full dynticks
kvm: Move guest entry/exit APIs to context_tracking
nohz: Prevent broadcast source from stealing full dynticks timekeeping
duty
Li Zho
On Mon, May 13, 2013 at 04:03:13PM +0800, Li Zhong wrote:
> On Mon, 2013-05-13 at 15:51 +1000, Benjamin Herrenschmidt wrote:
> > On Mon, 2013-05-13 at 13:21 +0800, Li Zhong wrote:
> > > These patches try to support context tracking for Power arch, beginning
> > > with
> > > 64-bit pSeries. The cod
On Mon, May 13, 2013 at 06:59:23PM +1000, Benjamin Herrenschmidt wrote:
> On Mon, 2013-05-13 at 16:03 +0800, Li Zhong wrote:
> >
> > To my understanding, it is used to enable RCU user extended quiescent
> > state, so RCU on that cpu doesn't need scheduler ticks. And together
> > with some other co
':
> kernel/sched/fair.c:2159:13: warning: unused variable 'rq' [-Wunused-variable]
>
> Signed-off-by: Kamalesh Babulal
Thanks!
Acked-by: Frederic Weisbecker
On Thu, May 30, 2013 at 03:57:17PM +0200, Thomas Gleixner wrote:
> On Wed, 29 May 2013, Frederic Weisbecker wrote:
>
> > The timekeeping duty is currently assigned to the CPU that
> > handles the tick broadcast clock device by the time it is set in
> > one shot mode.
>
get, let's simply remove it.
Signed-off-by: Jiri Bohac
Reported-by: Steven Rostedt
Acked-by: Thomas Gleixner
Cc: Steven Rostedt
Cc: Thomas Gleixner
Cc: Paul E. McKenney
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Borislav Petkov
Signed-off-by: Frederic Weisbecker
---
kernel/time/tick-bro
On Wed, May 29, 2013 at 06:39:39PM +0200, Frederic Weisbecker wrote:
> Ingo,
>
> Please pull the timers/urgent-for-tip branch that can be found at:
>
> git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
> timers/urgent-for-tip
Please rather pull
On Fri, May 31, 2013 at 05:43:25AM -0700, tip-bot for Frederic Weisbecker wrote:
> Commit-ID: 81e747d2ccfa9faa2b10507dca9d4b42796c561e
> Gitweb: http://git.kernel.org/tip/81e747d2ccfa9faa2b10507dca9d4b42796c561e
> Author: Frederic Weisbecker
> AuthorDate: Tue, 28 May 2013 15
On Fri, May 31, 2013 at 02:50:18PM +0200, Ingo Molnar wrote:
>
> * Frederic Weisbecker wrote:
>
> > On Fri, May 31, 2013 at 05:43:25AM -0700, tip-bot for Frederic Weisbecker
> > wrote:
> > > Commit-ID: 81e747d2ccfa9faa2b10507dca9d4b42796c561e
> > > Gitwe
On Thu, May 30, 2013 at 03:59:41PM -0400, Steven Rostedt wrote:
> [ Peter and Frederic, can you give me ACKs on this? Thanks ]
>
> Dave Jones hit the following bug report:
>
> ===
> [ INFO: suspicious RCU usage. ]
> 3.10.0-rc2+ #1 Not tainted
>
On Fri, May 31, 2013 at 11:18:48AM -0400, Steven Rostedt wrote:
> On Fri, 2013-05-31 at 15:43 +0200, Frederic Weisbecker wrote:
>
> > > +void __sched notrace preempt_schedule_context(void)
> > > +{
> > > + struct thread_info *ti = current_thread_info()
f your change anyway because we still want to avoid
the double count, so: Acked-by: Frederic Weisbecker
Thanks.
' would be useful, and
> accepting handler names would be useful as well.
How about we define fine-grained contexts on top of perf events themselves?
Like we could tell perf to count a task's instructions only after
tracepoint:irq_entry is hit and stop counting when tracepoint:irq_ex
On Mon, Jun 03, 2013 at 11:47:17AM +0200, Stefan Seyfried wrote:
> Am 20.05.2013 18:01, schrieb Frederic Weisbecker:
> > While computing the cputime delta of dynticks CPUs,
> > we are mixing up clocks of different natures:
>
> [...]
>
> > As a consequence, some s
On Mon, Jun 03, 2013 at 09:51:37PM +0200, Stefan Seyfried wrote:
> Am 03.06.2013 21:48, schrieb Frederic Weisbecker:
> > On Mon, Jun 03, 2013 at 11:47:17AM +0200, Stefan Seyfried wrote:
> >> FWIW:
> >> Tested-by: Stefan Seyfried
> >>
> >> This pat
On Thu, May 30, 2013 at 05:23:05PM +0200, Vincent Guittot wrote:
> I have faced a sequence where the Idle Load Balance was sometime not
> triggered for a while on my platform.
>
> CPU 0 and CPU 1 are running tasks and CPU 2 is idle
>
> CPU 1 kicks the Idle Load Balance
> CPU 1 selects CPU 2 as t
On Fri, Jun 28, 2013 at 01:10:21PM -0700, Paul E. McKenney wrote:
> /*
> + * Unconditionally force exit from full system-idle state. This is
> + * invoked when a normal CPU exits idle, but must be called separately
> + * for the timekeeping CPU (tick_do_timer_cpu). The reason for this
> + * is t
On Fri, Jun 28, 2013 at 01:10:21PM -0700, Paul E. McKenney wrote:
> +
> +/*
> + * Check to see if the system is fully idle, other than the timekeeping CPU.
> + * The caller must have disabled interrupts.
> + */
> +bool rcu_sys_is_idle(void)
> +{
> + static struct rcu_sysidle_head rsh;
> + i
On Mon, Jul 01, 2013 at 11:10:40AM -0700, Paul E. McKenney wrote:
> On Mon, Jul 01, 2013 at 06:35:31PM +0200, Frederic Weisbecker wrote:
> > What makes sure that we are not reading a stale value of rdtp->dynticks_idle
> > in the following scenario:
> >
> > CPU 0
On Fri, Jun 28, 2013 at 01:10:21PM -0700, Paul E. McKenney wrote:
> +/*
> + * Check to see if the system is fully idle, other than the timekeeping CPU.
> + * The caller must have disabled interrupts.
> + */
> +bool rcu_sys_is_idle(void)
Where is this function called? I can't find any caller in the
Cleaning up the posix cpu timers on task exit shares some common code
among timer list types, most notably the list traversal and expiry time
update.
Unify this in a common helper.
Signed-off-by: Frederic Weisbecker
Cc: Stanislaw Gruszka
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Ingo Molnar
significant chunk. They already missed a few releases.
This pile includes a good part of these. Kosaki also has some pending patches
that we are still discussing a bit. But I'll try to take care of those as well.
Thanks,
Frederic
---
Frederic Weisbecker (6):
posix_cpu_timer: consol
ner
Cc: Ingo Molnar
Cc: Peter Zijlstra
Acked-by: Peter Zijlstra
Signed-off-by: Olivier Langlois
Signed-off-by: KOSAKI Motohiro
Signed-off-by: Frederic Weisbecker
---
kernel/sched/stats.h | 39 ---
1 files changed, 36 insertions(+), 3 deletions(-)
diff
$ ./posix_cpu_timers
6 2278074
After the patch:
$ ./posix_cpu_timers
8 1158766
Before the patch, the elapsed time got two more seconds spuriously accounted.
Signed-off-by: Frederic Weisbecker
Cc: Stanislaw Gruszka
Cc: Thomas Gleixner
Cc: Peter Zijlstr
breakages while hacking
on this subsystem.
Signed-off-by: Frederic Weisbecker
Cc: Stanislaw Gruszka
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Steven Rostedt
Cc: KOSAKI Motohiro
Cc: Olivier Langlois
Signed-off-by: Andrew Morton
---
tools/testing/selftests/Makefile |1 +
tools
Consolidate the common code amongst per thread and per process timers list
on tick time.
List traversal, expiry check and subsequent updates can be shared in a
common helper.
Signed-off-by: Frederic Weisbecker
Cc: Stanislaw Gruszka
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc
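The consolidation described above can be sketched as one helper that walks a timer list, fires expired entries, and reports the earliest remaining expiry. Everything here is a simplified userspace model under assumed names and a flat array in place of the kernel's linked lists:

```c
#include <assert.h>
#include <stddef.h>

struct timer_sketch {
    unsigned long long expires;   /* 0 = disarmed */
    int fired;
};

/* Shared traversal: expire what is due at 'now', return the
 * earliest remaining expiry (0 if none remain armed). The same
 * helper can then serve both the per-thread and the per-process
 * timer lists. */
static unsigned long long
check_timers_list(struct timer_sketch *timers, size_t n,
                  unsigned long long now)
{
    unsigned long long next = 0;

    for (size_t i = 0; i < n; i++) {
        if (!timers[i].expires)
            continue;
        if (timers[i].expires <= now) {
            timers[i].fired = 1;          /* expired: fire it */
            timers[i].expires = 0;
        } else if (!next || timers[i].expires < next) {
            next = timers[i].expires;     /* track earliest */
        }
    }
    return next;
}
```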
. cputime_t can always fit
into it.
Signed-off-by: Frederic Weisbecker
Cc: Stanislaw Gruszka
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Oleg Nesterov
Cc: KOSAKI Motohiro
Cc: Olivier Langlois
Signed-off-by: Andrew Morton
---
include/linux/posix-timers.h | 16 ++-
kernel
Reported-by: Chen Gang
Signed-off-by: Frederic Weisbecker
Cc: Stanislaw Gruszka
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Oleg Nesterov
Cc: Chen Gang
Cc: KOSAKI Motohiro
Cc: Olivier Langlois
Signed-off-by: Andrew Morton
---
kernel/posix-cpu-timers.c |1 +
1 files ch