__do_softirq()
* more comments
* introduction of a longer-term solution via a new arch symbol for archs to tell about irq_exit() stack coverage.
Thanks.
Frederic Weisbecker (7):
irq: Force hardirq exit's softirq processing on its own stack
irq: Consolidate do_softirq() arch over
t one step further and generalize that debug check to
any softirq processing.
Signed-off-by: Frederic Weisbecker
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Ingo Molnar
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: H. Peter Anvin
Cc: Linus Torvalds
Cc: Paul Mackerras
Cc: James Hogan
Cc:
there.
x86-32 is not concerned because it only runs the irq handler on
the irq stack.
Signed-off-by: Frederic Weisbecker
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Ingo Molnar
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: H. Peter Anvin
Cc: Linus Torvalds
Cc: Paul Mackerras
Cc: James
defined when irq_exit() runs on the irq stack. That way
we can spare a stack switch on irq processing and all the cache
issues that come along.
Signed-off-by: Frederic Weisbecker
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Ingo Molnar
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: H. P
For clarity, comment the various stack choices for softirq
processing, whether we execute them from ksoftirqd or
local_irq_enable() calls.
Their use on irq_exit() is already commented.
Signed-off-by: Frederic Weisbecker
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Ingo Molnar
Cc
Before processing softirqs on hardirq exit, we already
do the check for pending softirqs while hardirqs are
guaranteed to be disabled.
So we can take a shortcut and safely jump to the arch
specific implementation directly.
Signed-off-by: Frederic Weisbecker
Cc: Benjamin Herrenschmidt
Cc: Paul
switch.
Signed-off-by: Frederic Weisbecker
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Ingo Molnar
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: H. Peter Anvin
Cc: Linus Torvalds
Cc: Paul Mackerras
Cc: James Hogan
Cc: James E.J. Bottomley
Cc: Helge Deller
Cc: Martin Schwidefsky
Cc
On Sat, Sep 08, 2012 at 07:55:16AM -0400, Steven Rostedt wrote:
> According to Steven R. there is no reason left to not support
> function tracing for the perf core. This makes it easier to debug
> perf.
>
> Don't remove -pg for the x86 and generic perf core.
Actually, perf can use function traci
Hi,
More cleanups for the cputime code.
Tested on x86 and build-tested only on ia64, powerpc and s390.
This is pullable from:
git://github.com/fweisbec/linux-dynticks.git
cputime/cleanups (based on tip:sched/core)
Frederic Weisbecker (6):
cputime: Use a proper subsystem
e
want to find out the context we account to from generic code.
This also makes it clearer which subsystem these APIs
refer to.
Signed-off-by: Frederic Weisbecker
Cc: Tony Luck
Cc: Fenghua Yu
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Heiko Carstens
Cc: Martin Schwidefsky
Cc
Factorize the code that accounts user time into a
single function to avoid code duplication.
Signed-off-by: Frederic Weisbecker
Cc: Tony Luck
Cc: Fenghua Yu
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Martin Schwidefsky
Cc: Heiko Carstens
Cc: Ingo Molnar
Cc: Thomas Gleixner
Cc
This debloats the general config menu a bit and makes these
config options easier to find.
Signed-off-by: Frederic Weisbecker
Cc: Tony Luck
Cc: Fenghua Yu
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Martin Schwidefsky
Cc: Heiko Carstens
Cc: Ingo Molnar
Cc: Thomas Gleixner
Cc: Peter
generic code to decide when to call which API.
Archs that have their own meaning of idle time, such as s390
that only considers the time spent in CPU low power mode as idle
time, can just override vtime_account().
Signed-off-by: Frederic Weisbecker
Cc: Tony Luck
Cc: Fenghua Yu
Cc: Benjamin
.
Signed-off-by: Frederic Weisbecker
Cc: Tony Luck
Cc: Fenghua Yu
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Martin Schwidefsky
Cc: Heiko Carstens
Cc: Ingo Molnar
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Russell King
---
arch/Kconfig | 6 ++
arch/x86/Kconfig | 12
To avoid code duplication.
Signed-off-by: Frederic Weisbecker
Cc: Tony Luck
Cc: Fenghua Yu
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Martin Schwidefsky
Cc: Heiko Carstens
Cc: Ingo Molnar
Cc: Thomas Gleixner
Cc: Peter Zijlstra
---
arch/ia64/kernel/time.c | 11 +++
1
On Mon, Sep 10, 2012 at 09:43:13PM +0200, Frederic Weisbecker wrote:
> There is no known reason for this option to be unavailable on other
> archs than x86. They just need to call enable_sched_clock_irqtime()
> if they have a sufficiently fine-grained clock to make it work.
>
>
On Thu, Sep 06, 2012 at 07:13:11PM +0200, Peter Zijlstra wrote:
> On Thu, 2012-09-06 at 19:02 +0200, Peter Zijlstra wrote:
> > On Thu, 2012-08-30 at 14:05 -0700, Paul E. McKenney wrote:
> > > From: Frederic Weisbecker
> > >
> > > When exceptions or irq are abo
On Thu, Sep 06, 2012 at 06:52:44PM +0200, Peter Zijlstra wrote:
> On Thu, 2012-08-30 at 14:05 -0700, Paul E. McKenney wrote:
> > From: Frederic Weisbecker
> >
> > When an exception or an irq exits, and we are going to resume into
> > interrupted kernel code, the low le
On Tue, Sep 04, 2012 at 05:46:19PM -0700, Josh Triplett wrote:
> > It actually does depend on SMP. There has to be at least one CPU taking
> > scheduling-clock interrupts in order to keep time computation accurate,
> > so a de-facto UP system cannot adaptive-dynticks its sole CPU.
>
> Ah. That s
On Fri, Aug 31, 2012 at 04:59:10PM -0700, Josh Triplett wrote:
> On Thu, Aug 30, 2012 at 02:05:25PM -0700, Paul E. McKenney wrote:
> > From: Frederic Weisbecker
> >
> > Add syscall slow path hooks to notify syscall entry
> > and exit on CPUs that want to support
On Sun, Aug 26, 2012 at 11:21:37AM +0200, Tobias Klausmann wrote:
> Hi!
>
> On Sat, 25 Aug 2012, Paul E. McKenney wrote:
> > Both Alpha patches should apply as-is back to 3.3, and should also fix
> > the problem. Could you please check this on the versions of interest?
>
> I just now tried them
On Sat, Aug 25, 2012 at 02:19:14AM +0100, Ben Hutchings wrote:
> On Fri, 2012-08-24 at 14:26 -0700, Paul E. McKenney wrote:
> > On Thu, Aug 23, 2012 at 04:58:24PM +0200, Frederic Weisbecker wrote:
> > > Hi,
> > >
> > > Changes since v1:
> > >
>
2012/10/5 Paul E. McKenney :
> On Thu, Oct 04, 2012 at 07:31:50AM -0700, Paul E. McKenney wrote:
>> On Thu, Oct 04, 2012 at 02:55:39AM +0100, Matthew Garrett wrote:
>> > On Wed, Oct 03, 2012 at 01:03:14PM -0700, Paul E. McKenney wrote:
>> >
>> > > That has not proven sufficient for me in the past,
good reason. vtime_account_system() OTOH is a no-op in
this config option.
A further optimization might be to introduce a vtime_account_guest()
that directly calls account_guest_time().
Signed-off-by: Frederic Weisbecker
Cc: Tony Luck
Cc: Fenghua Yu
Cc: Benjamin Herrenschmidt
Cc: Paul
mize irq time accounting
as well in the future.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Thomas Gleixner
---
include/linux/hardirq.h | 82 +++
include/linux/kernel_stat.h | 9 -
kernel/softirq.c | 6
chset.
That, for sure, will inspire for even more cputime optimizations/cleanups.
Thanks.
PS: tested on x86 and ppc64 (checked reliability of times and /proc/stat).
But only build-tested on s390 and ia64.
Frederic Weisbecker (3):
kvm: Directly account vtime to system on guest switch
cp
_irq_*()
this call is pointless to CONFIG_IRQ_TIME_ACCOUNTING.
To fix the confusion, change vtime_account() to irqtime_account_irq()
in CONFIG_IRQ_TIME_ACCOUNTING. This way we ensure future account_vtime()
calls won't waste cycles in the irqtime APIs.
Signed-off-by: Frederic Weisbecker
It's only there to call rcu_user_hooks_switch(). Let's
just call rcu_user_hooks_switch() directly; we don't need this
function in the middle.
Signed-off-by: Frederic Weisbecker
Cc: Josh Triplett
Cc: Peter Zijlstra
Cc: Richard Weinberger
Signed-off-by: Paul E. McKenney
---
Discourage distros from enabling CONFIG_RCU_USER_QS
because it brings overhead with no benefit yet.
It's not a useful feature on its own until we can
fully run an adaptive tickless kernel.
Signed-off-by: Frederic Weisbecker
---
init/Kconfig | 12
1 files changed, 12 inser
2012/11/28 Hakan Akkan :
> +static int check_drop_timer_duty(int cpu)
> +{
> + int curr_handler, prev_handler, new_handler;
> + int nrepeat = -1;
> + bool drop_recheck;
> +
> +repeat:
> + WARN_ON_ONCE(++nrepeat > 1);
> + drop_recheck = false;
> + curr_handler = c
2012/12/2 Stephen Rothwell :
> Well, these are a bit late (I expected Linus to release v3.7 today), but
> since Ingo has not piped in over the weekend, I have added them from today
> after the tip tree merge.
Yeah sorry to submit that so late. Those branches are in pending pull
requests to the -ti
2012/10/16 Tejun Heo :
> Hey, Frederic.
>
> On Mon, Oct 08, 2012 at 02:48:58PM +0200, Frederic Weisbecker wrote:
>> Yeah I missed this one.
>> Now the whole cgroup_attach_task() is clusteracy without the
>
> Clusteracy?
>
>> threadgroup lock anyway:
>>
2012/10/18 Tejun Heo :
> Hello, Frederic.
>
> On Thu, Oct 18, 2012 at 04:50:59PM +0200, Frederic Weisbecker wrote:
>> Ah right I was confused. Hmm, indeed we have a race here on
>> cgroup_fork(). How about using css_try_get() in cgroup_fork() and
>> refetch the parent
2012/10/12 Frederic Weisbecker :
> Hi,
>
> So here is a proposition on what we can do to make printk
> correctly working on a tickless CPU.
>
> Although it's targeted to be part of the adaptive tickless
> implementation, it's pretty standalone and generic and also
>
2012/10/19 Tejun Heo :
> On Fri, Oct 19, 2012 at 09:35:26AM -0400, Frederic Weisbecker wrote:
>> 2012/10/18 Tejun Heo :
>> > From d935a5d6832a264ce52f4257e176f4f96cbaf048 Mon Sep 17 00:00:00 2001
>> > From: Tejun Heo
>> > Date: Thu, 18 Oct 2012 17:40:30
Most of the time, x86 can trigger self-IPIs. Tell
irq work subsystem about it.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Andrew Morton
Cc: Steven Rostedt
Cc: Paul Gortmaker
---
arch/x86/include/asm/irq_work.h | 4
1 files changed
We need some quick way to check if the CPU has stopped
its tick. This will be useful to implement the printk tick
using the irq work subsystem.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Andrew Morton
Cc: Steven Rostedt
Cc: Paul Gortmaker
irq work is supposed to work everywhere because of the irq work
hook in the generic timer tick function.
I might be missing something though...
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Andrew Morton
Cc: Steven Rostedt
Cc: Paul Gortmaker
t the printk tick using irq work.
This subsystem takes care of the timer tick state and can
fix up accordingly.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Andrew Morton
Cc: Steven Rostedt
Cc: Paul Gortmaker
---
include/linux/printk.h |
If we enqueue a work while in dyntick idle mode and the arch doesn't
have self-IPI support, we may not find an opportunity to run the work
for a while.
In this case, exit the idle loop to re-evaluate irq_work_needs_cpu()
and restart the tick.
Signed-off-by: Frederic Weisbecker
Cc:
to avoid IPI storm
when we have lots of enqueuing of non-urgent works like the klogd wakeup
in a short period of time, so this keeps the old printk_tick behaviour.
It also teaches irq_work to handle nohz mode.
Warning: only compile tested in x86 for now.
Frederic Weisbecker (8):
irq_work: Move
periods of
time.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Andrew Morton
Cc: Steven Rostedt
Cc: Paul Gortmaker
---
arch/x86/kernel/cpu/mcheck/mce.c | 2 +-
arch/x86/kvm/pmu.c | 2 +-
drivers/acpi
This prepares us to make printk work on nohz CPUs
using irq work.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Andrew Morton
Cc: Steven Rostedt
Cc: Paul Gortmaker
---
arch/alpha/include/asm/irq_work.h | 5 -
arch/alpha/kernel
This optimization doesn't matter much. But it prepares the
arch headers for the new API we need in order to detect
when the arch can trigger self-IPIs to implement the irq work.
This is necessary later to make printk work on nohz CPUs.
Signed-off-by: Frederic Weisbecker
Cc:
2012/10/20 Joe Perches :
> On Sat, 2012-10-20 at 12:22 -0400, Frederic Weisbecker wrote:
>> lets implement the printk tick using irq work.
>
> Hi Frederic.
>
> Can you redo this change please against -next in a few days?
>
> Andrew Morton picked up this series,
> ht
2012/10/19 Tejun Heo :
> Hello, Frederic.
>
> On Fri, Oct 19, 2012 at 03:44:20PM -0400, Frederic Weisbecker wrote:
>> > For -stable, I think it's better to revert. If you want to remove
>> > task_lock, let's do it for 3.8.
>>
>> I don't
2012/10/20 Frederic Weisbecker :
> 2012/10/19 Tejun Heo :
>> Hello, Frederic.
>>
>> On Fri, Oct 19, 2012 at 03:44:20PM -0400, Frederic Weisbecker wrote:
>>> > For -stable, I think it's better to revert. If you want to remove
>>> > task_lock, let
2012/10/21 Tejun Heo :
> Hello, Frederic.
>
> On Sat, Oct 20, 2012 at 02:21:43PM -0400, Frederic Weisbecker wrote:
>> CPU 0
>> CPU 1
>>
>> cgroup_task_migrate {
>> task_lock(p)
>> rcu_assign_pointer(tsk
2012/12/3 Alex Shi :
> It is impossible to miss a task allowed cpu in a eligible group.
>
> And since find_idlest_group only return a different group which
> excludes old cpu, it's also impossible to find a new cpu same as old
> cpu.
Is it possible for weighted_cpuload() to return ULONG_MAX? If
2012/12/3 Alex Shi :
> There is 4 situations in the function:
> 1, no task allowed group;
> so min_load = ULONG_MAX, this_load = 0, idlest = NULL
> 2, only local group task allowed;
> so min_load = ULONG_MAX, this_load assigned, idlest = NULL
> 3, only non-local task group allowed;
2012/12/7 Alex Shi :
> On 12/07/2012 01:50 AM, Frederic Weisbecker wrote:
>> 2012/12/3 Alex Shi :
>>> It is impossible to miss a task allowed cpu in a eligible group.
>>>
>>> And since find_idlest_group only return a different group which
>>> excludes
2012/12/7 Alex Shi :
> On 12/07/2012 08:56 AM, Frederic Weisbecker wrote:
>> 2012/12/3 Alex Shi :
>>> There is 4 situations in the function:
>>> 1, no task allowed group;
>>> so min_load = ULONG_MAX, this_load = 0, idlest = NULL
>>> 2, only loca
the merge
>> window about to open up. Your patience is appreciated.
>
> I think it'd be easier for a single downstream
> maintainer to coordinate these patch sets sequencing.
>
> You or Andrew might be better than you and Andrew.
>
> There is a small patch to printk th
2012/12/8 Ingo Molnar :
>
> * Frederic Weisbecker wrote:
>
>> Ingo,
>>
>> Please pull the printk support in dynticks mode patches that can
>> be found at:
>>
>> git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
>> tags/
anges are available in the git repository at:
>> > git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git
>> > rcu/next
>> >
>> > Thanx, Paul
>> >
>> > >
>> > Frederic Weisbecker (1):
>> > context
2012/11/13 Paul E. McKenney :
> Hello!
>
> I know of people using TINY_RCU, TREE_RCU, and TREE_PREEMPT_RCU, but I
> have not heard of anyone using TINY_PREEMPT_RCU for whom TREE_PREEMPT_RCU
> was not a viable option (in contrast, the people running Linux on
> tiny-memmory systems typically use TINY
2012/11/13 Josh Triplett :
> On Tue, Nov 13, 2012 at 02:12:27AM +0100, Frederic Weisbecker wrote:
>> 2012/11/13 Paul E. McKenney :
>> > Hello!
>> >
>> > I know of people using TINY_RCU, TREE_RCU, and TREE_PREEMPT_RCU, but I
>> > have not he
specific
case so that we can remove the underscore prefix on other
vtime functions.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Thomas Gleixner
Cc: Steven Rostedt
Cc: Paul Gortmaker
Cc: Tony Luck
Cc: Fenghua Yu
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc
All vtime implementations just flush the user time on process
tick. Consolidate that in generic code by calling a user time
accounting helper. This avoids an indirect call in ia64 and
prepares to also consolidate vtime context switch code.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc
vtime_account() is only called from irq entry. irqs
are always disabled at this point so we can safely
remove the irq disabling guards on that function.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Thomas Gleixner
Cc: Steven Rostedt
Cc: Paul Gortmaker
Cc: Tony
g its own implementation.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Thomas Gleixner
Cc: Steven Rostedt
Cc: Paul Gortmaker
Cc: Tony Luck
Cc: Fenghua Yu
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Martin Schwidefsky
Cc: Heiko Carstens
---
arch/ia64/in
Hi,
While working on full dynticks, I realized some more cleanups needed to be
done. Here it is. If no comments arise, I'll send a pull request to Ingo
in a week.
Thanks.
Frederic Weisbecker (4):
vtime: Remove the underscore prefix invasion
vtime: Explicitly account pending user ti
2012/11/14 Steven Rostedt :
> On Wed, 2012-11-14 at 17:26 +0100, Frederic Weisbecker wrote:
>> Prepending irq-unsafe vtime APIs with underscores was actually
>> a bad idea as the result is a big mess in the API namespace that
>> is even waiting to be further extended. Also
2012/11/14 Steven Rostedt :
> On Wed, 2012-11-14 at 17:26 +0100, Frederic Weisbecker wrote:
>> vtime_account() is only called from irq entry. irqs
>> are always disabled at this point so we can safely
>> remove the irq disabling guards on that function.
>>
>> S
es the ad-hoc printk_tick()/printk_needs_cpu()
hooks and makes it work even in dynticks mode.
Signed-off-by: Frederic Weisbecker
----
Frederic Weisbecker (7):
irq_work: Fix racy IRQ_WORK_BUSY flag setting
irq_work: Fix racy c
ng the expected ordering.
Changelog-heavily-inspired-by: Steven Rostedt
Signed-off-by: Frederic Weisbecker
Acked-by: Steven Rostedt
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Thomas Gleixner
Cc: Andrew Morton
Cc: Paul Gortmaker
Cc: Anish Kumar
---
kernel/irq_work.c | 5 -
1 files change
We need some quick way to check if the CPU has stopped
its tick. This will be useful to implement the printk tick
using the irq work subsystem.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Andrew Morton
Cc: Steven Rostedt
Cc: Paul Gortmaker
Don't stop the tick if we have pending irq works on the
queue, otherwise if the arch can't raise self-IPIs, we may not
find an opportunity to execute the pending works for a while.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Andrew
by speculating about the value we wish to be
in the work->flags but we only make any conclusion after the value
returned by the cmpxchg() call that either claims the work or lets
the current owner handle the pending work for us.
Changelog-heavily-inspired-by: Steven Rostedt
Signed-off-by: Fr
t the printk tick using a lazy irq work.
This subsystem takes care of the timer tick state and can
fix up accordingly.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Andrew Morton
Cc: Steven Rostedt
Cc: Paul Gortmaker
---
include/linux/printk.h |
e.
This is going to be a benefit for non-urgent enqueuers
(like printk in the future) that may prefer not to raise
an IPI storm in case of frequent enqueuing on short periods
of time.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Andrew Morton
Cc:
irq work can run on any arch even without IPI
support because of the hook in update_process_times().
So let's remove HAVE_IRQ_WORK because it doesn't reflect
any backend requirement.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Andrew M
2012/11/15 Steven Rostedt :
> On Wed, 2012-11-14 at 21:37 +0100, Frederic Weisbecker wrote:
>> diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
>> index f249e8c..822d757 100644
>> --- a/kernel/time/tick-sched.c
>> +++ b/kernel/time/tick-sched.c
>> @
2012/11/15 Frederic Weisbecker :
> CPU that offlines                    CPU offlining
> -----------------                    -------------
> cpu_down() {
>     __stop_machine(take_cpu_down)
>                                      take_cpu_down() {
>                                          __cpu_disable
2012/10/29 Steven Rostedt :
> On Mon, 2012-10-29 at 14:28 +0100, Frederic Weisbecker wrote:
>> On irq work initialization, let the user choose to define it
>> as "lazy" or not. "Lazy" means that we don't want to send
>> an IPI (provided the arch c
ng the expected ordering.
Changelog-heavily-inspired-by: Steven Rostedt
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Thomas Gleixner
Cc: Andrew Morton
Cc: Steven Rostedt
Cc: Paul Gortmaker
Cc: Anish Kumar
---
kernel/irq_work.c | 5 -
1 files changed, 4 i
e.
This is going to be a benefit for non-urgent enqueuers
(like printk in the future) that may prefer not to raise
an IPI storm in case of frequent enqueuing on short periods
of time.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Andrew Morton
Cc:
scm/linux/kernel/git/frederic/linux-dynticks.git
nohz/printk-v4
Thanks.
Frederic Weisbecker (7):
irq_work: Fix racy IRQ_WORK_BUSY flag setting
irq_work: Fix racy check on work pending flag
irq_work: Remove CONFIG_HAVE_IRQ_WORK
nohz: Add API to check tick state
irq_work: Don't sto
irq work can run on any arch even without IPI
support because of the hook in update_process_times().
So let's remove HAVE_IRQ_WORK because it doesn't reflect
any backend requirement.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Andrew M
t the printk tick using a lazy irq work.
This subsystem takes care of the timer tick state and can
fix up accordingly.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Andrew Morton
Cc: Steven Rostedt
Cc: Paul Gortmaker
---
include/linux/printk.h |
Don't stop the tick if we have pending irq works on the
queue, otherwise if the arch can't raise self-IPIs, we may not
find an opportunity to execute the pending works for a while.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Andrew
We need some quick way to check if the CPU has stopped
its tick. This will be useful to implement the printk tick
using the irq work subsystem.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Andrew Morton
Cc: Steven Rostedt
Cc: Paul Gortmaker
by speculating about the value we wish to be
in the work->flags but we only make any conclusion after the value
returned by the cmpxchg() call that either claims the work or lets
the current owner handle the pending work for us.
Changelog-heavily-inspired-by: Steven Rostedt
Signed-off-by: Fre
2012/12/26 Li Zhong :
> On Thu, 2012-12-20 at 19:32 +0100, Frederic Weisbecker wrote:
>> diff --git a/init/Kconfig b/init/Kconfig
>> index 60579d6..a64b3e8 100644
>> --- a/init/Kconfig
>> +++ b/init/Kconfig
>> @@ -340,7 +340,9 @@ config TICK_CPU_ACCOUNTING
2012/12/26 Namhyung Kim :
> Hi Frederic,
>
> On Thu, 20 Dec 2012 19:33:07 +0100, Frederic Weisbecker wrote:
>> When a CPU is in full dynticks mode, try to switch
>> it to nohz mode from the interrupt exit path if it is
>> running a single non-idle task.
>>
>>
plan to answer you but for now I'm just
a bit backlogged due to holidays.
Happy new year!
---
Frederic Weisbecker (35):
irq_work: Fix racy IRQ_WORK_BUSY flag setting
irq_work: Fix racy check on work pending flag
irq_work: Remove CONFIG_HAVE_IRQ_WORK
nohz: Add
true native
virtual based cputime accounting which hooks into low level code and uses
a CPU hardware clock. Precision is not the goal of this though.
- There is probably more overhead than a native virtual based cputime
accounting. But this relies on hooks that are already set anyway.
Signed-off-by: Fre
This is in preparation for the full dynticks feature. While
remotely reading the cputime of a task running in a full
dynticks CPU, we'll need to do some extra-computation. This
way we can account the time it spent tickless in userspace
since its last cputime snapshot.
Signed-off-by: Fre
z
CPU running. But let's use this KISS solution for now.
Signed-off-by: Frederic Weisbecker
Cc: Alessio Igor Bogani
Cc: Andrew Morton
Cc: Chris Metcalf
Cc: Christoph Lameter
Cc: Geoff Levand
Cc: Gilad Ben Yossef
Cc: Hakan Akkan
Cc: Ingo Molnar
Cc: Paul E. McKenney
Cc: Paul Gortmaker
When a CPU in full dynticks mode doesn't respond to complete
a grace period, issue it a specific IPI so that it restarts
the tick and chases a quiescent state.
Signed-off-by: Frederic Weisbecker
Cc: Alessio Igor Bogani
Cc: Andrew Morton
Cc: Chris Metcalf
Cc: Christoph Lameter
Cc:
Because the sched_class::put_prev_task() callbacks of the rt and fair
classes refer to the rq clock to update their runtime
statistics. A CPU running in tickless mode may carry a stale value.
We need to update it there.
Signed-off-by: Frederic Weisbecker
Cc: Alessio Igor Bogani
Cc: Andrew
manually in case the CPU runs
tickless because ttwu_do_wakeup() calls check_preempt_wakeup().
Signed-off-by: Frederic Weisbecker
Cc: Alessio Igor Bogani
Cc: Andrew Morton
Cc: Chris Metcalf
Cc: Christoph Lameter
Cc: Geoff Levand
Cc: Gilad Ben Yossef
Cc: Hakan Akkan
Cc: Ingo Molnar
Cc: Paul E
s_cpu is in dyntick-idle mode?
Signed-off-by: Frederic Weisbecker
Cc: Alessio Igor Bogani
Cc: Andrew Morton
Cc: Chris Metcalf
Cc: Christoph Lameter
Cc: Geoff Levand
Cc: Gilad Ben Yossef
Cc: Hakan Akkan
Cc: Ingo Molnar
Cc: Paul E. McKenney
Cc: Paul Gortmaker
Cc: Peter Zijlstra
Cc: S
On a full dynticks CPU, we want the RCU callbacks to be
offloaded to another CPU, otherwise we need to keep
the tick to wait for the grace period completion.
Ensure the full dynticks CPU is also an rcu_nocb one.
Signed-off-by: Frederic Weisbecker
Cc: Alessio Igor Bogani
Cc: Andrew Morton
Cc
igned-off-by: Frederic Weisbecker
Cc: Alessio Igor Bogani
Cc: Andrew Morton
Cc: Chris Metcalf
Cc: Christoph Lameter
Cc: Geoff Levand
Cc: Gilad Ben Yossef
Cc: Hakan Akkan
Cc: Ingo Molnar
Cc: Paul E. McKenney
Cc: Paul Gortmaker
Cc: Peter Zijlstra
Cc: Steven Rostedt
Cc: Thomas Gle
for local callbacks?
Signed-off-by: Frederic Weisbecker
Cc: Alessio Igor Bogani
Cc: Andrew Morton
Cc: Chris Metcalf
Cc: Christoph Lameter
Cc: Geoff Levand
Cc: Gilad Ben Yossef
Cc: Hakan Akkan
Cc: Ingo Molnar
Cc: Paul E. McKenney
Cc: Paul Gortmaker
Cc: Peter Zijlstra
Cc: Steven Rostedt
ot even aware of any out of tree
user.
Let's remove it.
Signed-off-by: Frederic Weisbecker
Cc: Alessio Igor Bogani
Cc: Andrew Morton
Cc: Chris Metcalf
Cc: Christoph Lameter
Cc: Geoff Levand
Cc: Gilad Ben Yossef
Cc: Hakan Akkan
Cc: Ingo Molnar
Cc: Paul E. McKenney
Cc: Paul Gortmake
until we provide a way for the user to tune that
policy. A CPU mask affinity for non-pinned timers could be such
a solution.
Original-patch-by: Thomas Gleixner
Signed-off-by: Frederic Weisbecker
Cc: Alessio Igor Bogani
Cc: Andrew Morton
Cc: Chris Metcalf
Cc: Christoph Lameter
Cc: Geoff Levand
Not for merge, just for debugging.
Signed-off-by: Frederic Weisbecker
Cc: Alessio Igor Bogani
Cc: Andrew Morton
Cc: Chris Metcalf
Cc: Christoph Lameter
Cc: Geoff Levand
Cc: Gilad Ben Yossef
Cc: Hakan Akkan
Cc: Ingo Molnar
Cc: Paul E. McKenney
Cc: Paul Gortmaker
Cc: Peter Zijlstra
Cc
]
CHECKME: OTOH we don't want to handle a locally started
grace period, this should be offloaded for rcu_nocb CPUs.
What we want is to be kicked if we stay dynticks in the kernel
for too long (ie: to report a quiescent state).
rcu_pending() is perhaps an overkill just for that.
Signed-o