heir calls are
ignored when preempt= isn't passed.
Signed-off-by: Peter Zijlstra (Intel)
Cc: Thomas Gleixner
Cc: Mel Gorman
Cc: Ingo Molnar
Cc: Michal Hocko
Cc: Paul E. McKenney
[branch might_resched() directly to __cond_resched(), only define static
calls when PREEMPT_DYNAMIC]
Signed-o
h provided wrapper, if any.
Signed-off-by: Peter Zijlstra (Intel)
Cc: Thomas Gleixner
Cc: Mel Gorman
Cc: Ingo Molnar
Cc: Michal Hocko
Cc: Paul E. McKenney
[only define static calls when PREEMPT_DYNAMIC, make it less dependent
on x86 with __preempt_schedule_func()]
Signed-off-by: Frederic Weisbecker
Molnar
Cc: Michal Hocko
Cc: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
---
arch/x86/kernel/static_call.c | 17 +++--
include/linux/static_call.h | 2 ++
kernel/static_call.c | 5 +
3 files changed, 22 insertions(+), 2 deletions(-)
diff --git a/arch/x86
Molnar
Cc: Michal Hocko
Cc: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
---
include/linux/static_call.h | 23 ---
include/linux/static_call_types.h | 29 +
2 files changed, 29 insertions(+), 23 deletions(-)
diff --git a/include/linux
() /
__preempt_schedule_notrace_function()).
Suggested-by: Peter Zijlstra
Signed-off-by: Michal Hocko
Cc: Peter Zijlstra (Intel)
Cc: Thomas Gleixner
Cc: Mel Gorman
Cc: Ingo Molnar
Cc: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
---
.../admin-guide/kernel-parameters.txt | 7 +++
arch/Kconfig
. But functions returning an actual value
don't have an equivalent yet.
Provide DEFINE_STATIC_CALL_RET0() to solve this situation.
Signed-off-by: Frederic Weisbecker
Cc: Thomas Gleixner
Cc: Mel Gorman
Cc: Ingo Molnar
Cc: Michal Hocko
Cc: Paul E. McKenney
Cc: Peter Zijlstra (Intel)
---
include
some static_call declarations to the type headers
Frederic Weisbecker (1):
static_call: Provide DEFINE_STATIC_CALL_RET0()
Michal Hocko (1):
preempt: Introduce CONFIG_PREEMPT_DYNAMIC
Documentation/admin-guide/kernel-parameters.txt | 7 ++
arch/Kconfig
On Tue, Nov 10, 2020 at 11:48:33AM +0100, Peter Zijlstra wrote:
> On Tue, Nov 10, 2020 at 11:39:09AM +0100, Peter Zijlstra wrote:
> > Subject: static_call: EXPORT_STATIC_CALL_TRAMP()
> > From: Peter Zijlstra
> > Date: Tue Nov 10 11:37:48 CET 2020
> >
> > For when we want to allow modules to call
On Mon, Jan 11, 2021 at 01:25:59PM +0100, Peter Zijlstra wrote:
> On Sat, Jan 09, 2021 at 03:05:34AM +0100, Frederic Weisbecker wrote:
> > The idle loop has several need_resched() checks that make sure we don't
> > miss a rescheduling request. This means that any wake up perfor
On Mon, Jan 11, 2021 at 01:08:08PM +0100, Peter Zijlstra wrote:
> On Sat, Jan 09, 2021 at 03:05:33AM +0100, Frederic Weisbecker wrote:
> > Following the idle loop model, cleanly check for pending rcuog wakeup
> > before the last rescheduling point on resuming to user mode. This
On Mon, Jan 11, 2021 at 01:04:24PM +0100, Peter Zijlstra wrote:
> > +static DEFINE_PER_CPU(struct irq_work, late_wakeup_work) =
> > + IRQ_WORK_INIT(late_wakeup_func);
> > +
> > /**
> > * rcu_user_enter - inform RCU that we are resuming userspace.
> > *
> > @@ -692,9 +704,17 @@ noinstr void r
On Sun, Jan 10, 2021 at 09:13:18PM -0800, Paul E. McKenney wrote:
> On Mon, Jan 11, 2021 at 01:40:14AM +0100, Frederic Weisbecker wrote:
> > On Sat, Jan 09, 2021 at 03:05:33AM +0100, Frederic Weisbecker wrote:
> > > Following the idle loop model, cleanly check for pending rcuog w
On Sat, Jan 09, 2021 at 03:05:33AM +0100, Frederic Weisbecker wrote:
> Following the idle loop model, cleanly check for pending rcuog wakeup
> before the last rescheduling point on resuming to user mode. This
> way we can avoid doing it from rcu_user_enter() with the last resort
> s
On Sat, Jan 09, 2021 at 10:03:33AM +0100, Greg KH wrote:
> On Sat, Jan 09, 2021 at 03:05:29AM +0100, Frederic Weisbecker wrote:
> > Signed-off-by: Frederic Weisbecker
> > Cc: Paul E. McKenney
> > Cc: Rafael J. Wysocki
> > Cc: Peter Zijlstra
> > Cc: Tho
Following the idle loop model, cleanly check for pending rcuog wakeup
before the last rescheduling point on resuming to user mode. This
way we can avoid doing it from rcu_user_enter() with the last-resort
self-IPI hack that enforces rescheduling.
Signed-off-by: Frederic Weisbecker
Cc: Peter
The last rescheduling opportunity while resuming to user is in
exit_to_user_mode_loop(). This means that any wake up performed on
the local runqueue after this point is going to have its rescheduling
silently ignored.
Perform sanity checks to report these situations.
Signed-off-by: Frederic
Enqueuing a local timer after the tick has been stopped will result in
the timer being ignored until the next random interrupt.
Perform sanity checks to report these situations.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Paul E. McKenney
Cc
h rcu kthreads awaken from rcu_idle_enter() for example.
Perform sanity checks to report these situations.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Paul E. McKenney
Cc: Rafael J. Wysocki
---
include/linux/sched.h | 11 +++
kernel/
explicit
need_resched() check upon resume.
Reported-by: Paul E. McKenney
Fixes: 96d3fd0d315a (rcu: Break call_rcu() deadlock involving scheduler and
perf)
Cc: sta...@vger.kernel.org
Cc: Rafael J. Wysocki
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Ingo Molnar
Signed-off-by: Frederic Weisbecker
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Rafael J. Wysocki
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Ingo Molnar
---
kernel/rcu/tree.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 40e5e3dd253e..fef90c467670 100644
--- a
scheduler and
perf)
Cc: sta...@vger.kernel.org
Cc: Rafael J. Wysocki
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Ingo Molnar
Signed-off-by: Frederic Weisbecker
---
include/linux/rcupdate.h | 2 ++
kernel/rcu/tree.c| 3 ---
kernel/rcu/tree_plugin.h | 5 +
kernel/sched/idle.c | 3
Deferred wakeup of rcuog kthreads upon RCU idle mode entry is going to
be handled differently depending on whether it is initiated by idle, user or
guest. Prepare by pulling that control up to rcu_eqs_enter() callers.
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Rafael J. Wysocki
Cc: Peter
to happen
again.
Only lightly tested so far.
git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
sched/idle-v3
HEAD: d95fc510e804a5c4658a823ff12d9caba1d906c7
Thanks,
Frederic
---
Frederic Weisbecker (8):
rcu: Remove superfluous rdp fetch
rcu: Pull
On Tue, Jan 05, 2021 at 03:25:10PM -0800, Paul E. McKenney wrote:
> On Tue, Jan 05, 2021 at 10:55:03AM +0100, Peter Zijlstra wrote:
> > On Mon, Jan 04, 2021 at 04:20:55PM +0100, Frederic Weisbecker wrote:
> > > Entering RCU idle mode may cause a deferred wake up of an RCU NOC
On Tue, Jan 05, 2021 at 10:55:03AM +0100, Peter Zijlstra wrote:
> On Mon, Jan 04, 2021 at 04:20:55PM +0100, Frederic Weisbecker wrote:
> > Entering RCU idle mode may cause a deferred wake up of an RCU NOCB_GP
> > kthread (rcuog) to be serviced.
> >
> > Usually a wake
: Ingo Molnar
Signed-off-by: Frederic Weisbecker
---
arch/arm/mach-imx/cpuidle-imx6q.c | 8 +++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/arch/arm/mach-imx/cpuidle-imx6q.c
b/arch/arm/mach-imx/cpuidle-imx6q.c
index 094337dc1bc7..1115f4dc6d1d 100644
--- a/arch/arm/mach-imx
(ACPI: processor: Take over RCU-idle for C3-BM idle)
Cc: sta...@vger.kernel.org
Cc: Len Brown
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Ingo Molnar
Signed-off-by: Frederic Weisbecker
---
drivers/acpi/processor_idle.c | 10 --
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/drivers
: Paul E. McKenney
Reviewed-by: Rafael J. Wysocki
Fixes: 96d3fd0d315a (rcu: Break call_rcu() deadlock involving scheduler and
perf)
Cc: sta...@vger.kernel.org
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Ingo Molnar
Signed-off-by: Frederic Weisbecker
---
kernel/sched/idle.c | 18
ixner
Cc: Ingo Molnar
Signed-off-by: Frederic Weisbecker
---
drivers/cpuidle/cpuidle.c | 33 +
1 file changed, 25 insertions(+), 8 deletions(-)
diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
index ef2ea1b12cd8..4cc1ba49ce05 100644
--- a/dr
: c246718af0112c8624ec9c46a85bf0ef1562e050
Thanks,
Frederic
---
Frederic Weisbecker (4):
sched/idle: Fix missing need_resched() check after rcu_idle_enter()
cpuidle: Fix missing need_resched() check after rcu_idle_enter()
ARM: imx6q: Fix missing need_resched() check after
On Mon, Jan 04, 2021 at 11:37:36AM +1100, Stephen Rothwell wrote:
> Hi all,
>
> After merging the rcu tree, today's linux-next build (arm
> multi_v7_defconfig) failed like this:
>
> arch/arm/mach-imx/cpuidle-imx6q.c: In function 'imx6q_enter_wait':
> arch/arm/mach-imx/cpuidle-imx6q.c:32:7: error:
On Tue, Dec 29, 2020 at 02:12:31PM +, Qais Yousef wrote:
> On 12/29/20 14:41, Frederic Weisbecker wrote:
> > > > -void vtime_account_irq(struct task_struct *tsk)
> > > > +void vtime_account_irq(struct task_struct *tsk, unsigned int offset)
> > > &g
On Wed, Dec 16, 2020 at 08:59:30AM -0800, Paul E. McKenney wrote:
> On Fri, Nov 13, 2020 at 01:13:15PM +0100, Frederic Weisbecker wrote:
> >
> > Frederic Weisbecker (19):
> > rcu/nocb: Turn enabled/offload states into a common flag
> > rcu/nocb: Provide basi
On Mon, Dec 28, 2020 at 02:15:29AM +, Qais Yousef wrote:
> Hi Frederic
>
> On 12/02/20 12:57, Frederic Weisbecker wrote:
> > @@ -66,9 +68,9 @@ void irqtime_account_irq(struct task_struct *curr)
> > * in that case, so as not to confuse scheduler with a special task
&
On Mon, Dec 28, 2020 at 11:47:48AM +0800, chenshiyan wrote:
> From: "shiyan.csy"
>
> exit nohz idle before invoking softirq, or it may miss
> some ticks during softirq.
>
> Signed-off-by: Shiyan Chen
> ---
> kernel/softirq.c | 9 +++--
> 1 file changed, 7 insertions(+), 2 deletions(-)
>
>
. McKenney
Fixes: 96d3fd0d315a (rcu: Break call_rcu() deadlock involving scheduler and
perf)
Cc: sta...@vger.kernel.org
Cc: Peter Zijlstra
Cc: Rafael J. Wysocki
Cc: Thomas Gleixner
Cc: Ingo Molnar
Signed-off-by: Frederic Weisbecker
---
kernel/sched/idle.c | 18 --
1 file changed, 12
Molnar
Signed-off-by: Frederic Weisbecker
---
arch/arm/mach-imx/cpuidle-imx6q.c | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/arch/arm/mach-imx/cpuidle-imx6q.c
b/arch/arm/mach-imx/cpuidle-imx6q.c
index 094337dc1bc7..31a60d257d3d 100644
--- a/arch/arm/mach-imx/cpuidle
C3-BM idle)
Cc: sta...@vger.kernel.org
Cc: Len Brown
Cc: Peter Zijlstra
Cc: Rafael J. Wysocki
Cc: Thomas Gleixner
Cc: Ingo Molnar
Signed-off-by: Frederic Weisbecker
---
drivers/acpi/processor_idle.c | 10 --
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/drivers
Ingo Molnar
Signed-off-by: Frederic Weisbecker
---
drivers/cpuidle/cpuidle.c | 33 +
1 file changed, 25 insertions(+), 8 deletions(-)
diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
index ef2ea1b12cd8..4cc1ba49ce05 100644
--- a/drivers/cp
b15d235
Thanks,
Frederic
---
Frederic Weisbecker (4):
sched/idle: Fix missing need_resched() check after rcu_idle_enter()
cpuidle: Fix missing need_resched() check after rcu_idle_enter()
ARM: imx6q: Fix missing need_resched() check after rcu_idle_enter()
ACPI:
On Thu, Dec 17, 2020 at 02:51:58PM +0800, Yunfeng Ye wrote:
>
>
> On 2020/12/15 22:47, Frederic Weisbecker wrote:
> > On Tue, Dec 15, 2020 at 08:06:34PM +0800, Yunfeng Ye wrote:
> >> The idle_exittime field of tick_sched is used to record the time when
> >> the i
On Tue, Dec 15, 2020 at 09:04:07PM -0800, Paul E. McKenney wrote:
> Hello, Frederic,
>
> Are you seeing rcutorture writer stalls? Please see attached for an
> example from testing, search for "Call Trace". I am running an overnight
> test, which should get me some idea of frequency. My thought
On Fri, Nov 13, 2020 at 01:13:32PM +0100, Frederic Weisbecker wrote:
> RCU needs to check if the cpu hotplug lock is held, in the middle of
> other conditions to check the sanity of RCU-nocb. Provide a helper for
> that.
>
> Signed-off-by: Frederic Weisbecker
> Cc: Paul E. Mc
On Tue, Dec 15, 2020 at 08:06:34PM +0800, Yunfeng Ye wrote:
> The idle_exittime field of tick_sched is used to record the time when
> the idle state was left. but currently the idle_exittime is updated in
> the function tick_nohz_restart_sched_tick(), which is not always in idle
> state when nohz_f
The following commit has been merged into the core/rcu branch of tip:
Commit-ID: e3771c850d3b9349b48449c9a91c98944a08650c
Gitweb:
https://git.kernel.org/tip/e3771c850d3b9349b48449c9a91c98944a08650c
Author:Frederic Weisbecker
AuthorDate:Mon, 21 Sep 2020 14:43:40 +02:00
On Sat, Dec 12, 2020 at 01:16:12AM +0100, Thomas Gleixner wrote:
> On Fri, Dec 11 2020 at 23:21, Frederic Weisbecker wrote:
> > On Sun, Dec 06, 2020 at 10:12:54PM +0100, Thomas Gleixner wrote:
> >> tick_handover_do_timer() which is invoked when a CPU is unplugged has a
>
Signed-off-by: Thomas Gleixner
Acked-by: Frederic Weisbecker
Thanks!
On Sun, Dec 06, 2020 at 10:12:54PM +0100, Thomas Gleixner wrote:
> tick_handover_do_timer() which is invoked when a CPU is unplugged has a
> check for cpumask_first(cpu_online_mask) when it tries to hand over the
> tick update duty.
>
> Checking the result of cpumask_first() there is pointless bec
he timekeeping stale until I realized that stop_machine() is running at that
time. Might be worth adding a comment about that.
Also, why not just set it to TICK_DO_TIMER_NONE and be done with it? Perhaps
to avoid having all the CPUs compete and contend on the jiffies update after
stop_machine()?
If so:
Reviewed-by: Frederic Weisbecker
Thanks.
_ONCE()
> with smp_load_acquire() / smp_store_release().
>
> On 32bit problem #2 is addressed by protecting the quick check with the
> jiffies sequence counter. The load and stores can be plain because the
> sequence count mechanics provides the required barriers already.
>
> Signed-off-by: Thomas Gleixner
Looks very good! Thanks!
Reviewed-by: Frederic Weisbecker
On Thu, Dec 10, 2020 at 04:46:38PM -0800, Paul E. McKenney wrote:
> > diff --git a/kernel/softirq.c b/kernel/softirq.c
> > index 09229ad82209..7d558cb7a037 100644
> > --- a/kernel/softirq.c
> > +++ b/kernel/softirq.c
> > @@ -650,7 +650,9 @@ static void run_ksoftirqd(unsigned int cpu)
> >
On Thu, Dec 10, 2020 at 01:17:56PM -0800, Paul E. McKenney wrote:
> And please see attached. Lots of output, in fact, enough that it
> was still dumping when the second instance happened.
Thanks!
So the issue is that ksoftirqd is parked on CPU down with vectors
still pending. Either:
1) Ksoftir
Hi,
On Wed, Nov 18, 2020 at 09:52:18AM -0800, Paul E. McKenney wrote:
> Hello, Frederic,
>
> Here is the last few months' pile of warnings from rcutorture runs.
>
> Thanx, Paul
>
> [ 255.098527] NOHZ tick-stop error: Non-RCU local softirq w
On Wed, Nov 18, 2020 at 09:54:20AM -0800, Paul E. McKenney wrote:
> On Wed, Nov 18, 2020 at 09:52:18AM -0800, Paul E. McKenney wrote:
> > Hello, Frederic,
> >
> > Here is the last few months' pile of warnings from rcutorture runs.
>
> And this time with scenario names. ;-)
>
>
On Tue, Dec 08, 2020 at 10:24:09AM -0800, Paul E. McKenney wrote:
> > It reduces the code scope running with BH disabled.
> > Also narrowing down helps to understand what it actually protects.
>
> I thought that you would call out unnecessarily delaying other softirq
> handlers. ;-)
>
> But if s
On Tue, Dec 08, 2020 at 09:19:27AM -0800, Paul E. McKenney wrote:
> On Tue, Dec 08, 2020 at 04:54:57PM +0100, Frederic Weisbecker wrote:
> > On Tue, Dec 08, 2020 at 06:58:10AM -0800, Paul E. McKenney wrote:
> > > Hello, Frederic,
> > >
> > > Boqun just asked
On Tue, Dec 08, 2020 at 06:58:10AM -0800, Paul E. McKenney wrote:
> Hello, Frederic,
>
> Boqun just asked if RCU callbacks ran in BH-disabled context to avoid
> concurrent execution of the same callback. Of course, this raises the
> question of whether a self-posting callback can have two instanc
Hi Boqun Feng,
On Tue, Dec 08, 2020 at 10:41:31AM +0800, Boqun Feng wrote:
> Hi Frederic,
>
> On Fri, Nov 13, 2020 at 01:13:15PM +0100, Frederic Weisbecker wrote:
> > This keeps growing up. Rest assured, most of it is debug code and sanity
> > checks.
> >
> >
ding warnings, which would happen when the task which holds
> + * softirq_ctrl::lock was the only running task on the CPU and blocks on
> + * some other lock.
> + */
> +bool local_bh_blocked(void)
> +{
> + return this_cpu_read(softirq_ctrl.cnt) != 0;
__this_cpu_read()
Reviewed-by: Frederic Weisbecker
local_unlock(&softirq_ctrl.lock);
> + }
> +}
> +
> +static inline bool should_wake_ksoftirqd(void)
> +{
> + return !this_cpu_read(softirq_ctrl.cnt);
And that too.
Other than these boring details:
Reviewed-by: Frederic Weisbecker
Thanks.
On Fri, Dec 04, 2020 at 06:01:55PM +0100, Thomas Gleixner wrote:
> +void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
> +{
> + unsigned long flags;
> + int newcnt;
> +
> + WARN_ON_ONCE(in_hardirq());
> +
> + /* First entry of a task into a BH disabled section? */
> +
in one go.
>
> Signed-off-by: Thomas Gleixner
Reviewed-by: Frederic Weisbecker
%gs:0x0(%rip),%eax
> - 9a2: 25 ff ff ff 7f    and    $0x7fffffff,%eax
> + 9a2: 25 00 ff ff 00    and    $0x00ffff00,%eax
>
> Reported-by: Sebastian Andrzej Siewior
> Signed-off-by: Thomas Gleixner
Reviewed-by: Frederic Weisbecker
Also I'm seei
local lock can be preempted so it's required to
> keep track of the nest count per task.
>
> Add a RT only counter to task struct and adjust the relevant macros in
> preempt.h.
>
> Signed-off-by: Thomas Gleixner
Reviewed-by: Frederic Weisbecker
On Mon, Dec 07, 2020 at 01:25:13PM +0100, Peter Zijlstra wrote:
> On Mon, Dec 07, 2020 at 02:10:13AM +0100, Frederic Weisbecker wrote:
> > On Sun, Dec 06, 2020 at 10:40:07PM +0100, Thomas Gleixner wrote:
> > > syzbot reported KCSAN data races vs. timer_base::timer_running b
On Mon, Dec 07, 2020 at 01:57:25AM +0100, Thomas Gleixner wrote:
> On Mon, Dec 07 2020 at 01:23, Frederic Weisbecker wrote:
> >> --- a/kernel/sched/cputime.c
> >> +++ b/kernel/sched/cputime.c
> >> @@ -60,7 +60,7 @@ void irqtime_account_irq(struct task_str
On Sun, Dec 06, 2020 at 10:40:07PM +0100, Thomas Gleixner wrote:
> syzbot reported KCSAN data races vs. timer_base::timer_running being set to
> NULL without holding base::lock in expire_timers().
>
> This looks innocent and most reads are clearly not problematic but for a
> non-RT kernel it's com
On Fri, Dec 04, 2020 at 06:01:53PM +0100, Thomas Gleixner wrote:
> vtime_account_irq and irqtime_account_irq() base checks on preempt_count()
> which fails on RT because preempt_count() does not contain the softirq
> accounting which is separate on RT.
>
> These checks do not need the full preempt
The following commit has been merged into the irq/core branch of tip:
Commit-ID: 2b91ec9f551b56751cde48792f1c0a1130358844
Gitweb:
https://git.kernel.org/tip/2b91ec9f551b56751cde48792f1c0a1130358844
Author:Frederic Weisbecker
AuthorDate:Wed, 02 Dec 2020 12:57:29 +01:00
The following commit has been merged into the irq/core branch of tip:
Commit-ID: d3759e7184f8f6187e62f8c4e7dcb1f6c47c075a
Gitweb:
https://git.kernel.org/tip/d3759e7184f8f6187e62f8c4e7dcb1f6c47c075a
Author:Frederic Weisbecker
AuthorDate:Wed, 02 Dec 2020 12:57:31 +01:00
The following commit has been merged into the irq/core branch of tip:
Commit-ID: 8a6a5920d3286eb0eae9f36a4ec4fc9df511eccb
Gitweb:
https://git.kernel.org/tip/8a6a5920d3286eb0eae9f36a4ec4fc9df511eccb
Author:Frederic Weisbecker
AuthorDate:Wed, 02 Dec 2020 12:57:30 +01:00
The following commit has been merged into the irq/core branch of tip:
Commit-ID: 7197688b2006357da75a014e0a76be89ca9c2d46
Gitweb:
https://git.kernel.org/tip/7197688b2006357da75a014e0a76be89ca9c2d46
Author:Frederic Weisbecker
AuthorDate:Wed, 02 Dec 2020 12:57:28 +01:00
The following commit has been merged into the irq/core branch of tip:
Commit-ID: d14ce74f1fb376ccbbc0b05ded477ada51253729
Gitweb:
https://git.kernel.org/tip/d14ce74f1fb376ccbbc0b05ded477ada51253729
Author:Frederic Weisbecker
AuthorDate:Wed, 02 Dec 2020 12:57:32 +01:00
destination
dispatch decision to the core code and leave only the actual per-index
cputime accounting to the architecture.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Tony Luck
Cc: Fenghua Yu
Cc: Michael Ellerman
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Heiko Carstens
nting aware of architectures that have
their own way of accounting idle time and convert s390 to use it.
This prepares s390 to get involved in further consolidations of IRQ
time accounting.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Tony Luck
Cc: Fenghua Yu
Cc: Michael Ellerma
patch
code to handle the extra offset.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Tony Luck
Cc: Fenghua Yu
Cc: Michael Ellerman
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Heiko Carstens
Cc: Vasily Gorbik
Cc: Christian Borntraeger
---
include/linux/hardirq.h
ksoftirqd instead of the hardirq bottom half.
Also tick_irq_enter() then becomes appropriately covered by lockdep.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Tony Luck
Cc: Fenghua Yu
Cc: Michael Ellerman
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Heiko Carstens
Cc: Vasily
irq/core-v3
HEAD: 24a2d6c76759bd4496cbdcd365012c821a984eec
Thanks,
Frederic
---
Frederic Weisbecker (5):
sched/cputime: Remove symbol exports from IRQ time accounting
s390/vtime: Use the generic IRQ entry accounting
sched/vtime: Consolidate IRQ time accounting
i
account_irq_enter_time() and account_irq_exit_time() are not called
from modules. EXPORT_SYMBOL_GPL() can be safely removed from the IRQ
cputime accounting functions called from there.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Tony Luck
Cc: Fenghua Yu
Cc: Michael Ellerman
Cc
On Tue, Dec 01, 2020 at 02:34:49PM +0100, Thomas Gleixner wrote:
> On Tue, Dec 01 2020 at 12:40, Frederic Weisbecker wrote:
> > On Tue, Dec 01, 2020 at 12:33:26PM +0100, Thomas Gleixner wrote:
> >> > /*
> >> > * We do not account for softirq time from ksof
On Tue, Dec 01, 2020 at 12:33:26PM +0100, Thomas Gleixner wrote:
> On Tue, Dec 01 2020 at 10:20, Peter Zijlstra wrote:
> > On Tue, Dec 01, 2020 at 01:12:25AM +0100, Frederic Weisbecker wrote:
> > Why not something like:
> >
> > void irqtime_account_irq(struct task_struct
On Tue, Dec 01, 2020 at 10:20:11AM +0100, Peter Zijlstra wrote:
> On Tue, Dec 01, 2020 at 01:12:25AM +0100, Frederic Weisbecker wrote:
> > +static s64 irqtime_get_delta(struct irqtime *irqtime)
> > {
> > + int cpu = smp_processor_id();
> > s64 delta;
> >
patch
code to handle the extra offset.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Tony Luck
Cc: Fenghua Yu
Cc: Michael Ellerman
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Heiko Carstens
Cc: Vasily Gorbik
Cc: Christian Borntraeger
---
include/linux/hardirq.h
ksoftirqd instead of the hardirq bottom half.
Also tick_irq_enter() then becomes appropriately covered by lockdep.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Tony Luck
Cc: Fenghua Yu
Cc: Michael Ellerman
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Heiko Carstens
Cc: Vasily
destination
dispatch decision to the core code and leave only the actual per-index
cputime accounting to the architecture.
For now only ia64 and powerpc are handled. s390 will need a slightly
different treatment.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Tony Luck
Cc: Fenghua Yu
account_irq_enter_time() and account_irq_exit_time() are not called
from modules. EXPORT_SYMBOL_GPL() can be safely removed from the IRQ
cputime accounting functions called from there.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Tony Luck
Cc: Fenghua Yu
Cc: Michael Ellerman
Cc
-dynticks.git
irq/core
HEAD: 2e6155a86dba7c53d080d58284ef5c65f487bef0
Frederic Weisbecker (5):
sched/cputime: Remove symbol exports from IRQ time accounting
sched/vtime: Consolidate IRQ time accounting
s390/vtime: Convert to consolidated IRQ time accounting
irqtime: Move irqtime entry
dated IRQ time accounting is easy:
just keep the current behaviour and redirect generic idle time
accounting to system time accounting.
This removes the need to maintain an ad-hoc implementation of the cputime
dispatch decision.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Tony Luc
On Mon, Nov 23, 2020 at 09:22:08PM +0800, Yunfeng Ye wrote:
> In realtime scenarios, the "nohz_full" parameter is configured. Tick
> interference is not expected when there is only one realtime thread.
> But when the idle thread is switched to the realtime thread, the tick
> timer is restarted alwa
ksoftirqd instead of the hardirq bottom half.
Also tick_irq_enter() then becomes appropriately covered by lockdep.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Tony Luck
Cc: Fenghua Yu
Cc: Michael Ellerman
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Heiko Carstens
Cc: Vasily
destination
dispatch decision to the core code and leave only the actual per-index
cputime accounting to the architecture.
For now only ia64 and powerpc are handled. s390 will need a slightly
different treatment.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Tony Luck
Cc: Fenghua Yu
x27;ll need to check
that thoroughly).
git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
irq/core
HEAD: 9502ee20aed8bb847176e1d7d83ccd0625430744
Frederic Weisbecker (4):
sched/vtime: Consolidate IRQ time accounting
s390/vtime: Convert to consolidated IRQ time accountin
dated IRQ time accounting is easy:
just keep the current behaviour and redirect generic idle time
accounting to system time accounting.
This removes the need to maintain an ad-hoc implementation of the cputime
dispatch decision.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Tony Luc
patch
code to handle the extra offset.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Tony Luck
Cc: Fenghua Yu
Cc: Michael Ellerman
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Heiko Carstens
Cc: Vasily Gorbik
Cc: Christian Borntraeger
---
include/linux/hardirq.h
On Tue, Nov 24, 2020 at 01:06:15AM +0100, Thomas Gleixner wrote:
> On Tue, Nov 24 2020 at 00:58, Frederic Weisbecker wrote:
> > On Mon, Nov 23, 2020 at 08:27:33PM +0100, Thomas Gleixner wrote:
> >> On Mon, Nov 23 2020 at 14:44, Frederic Weisbecker wrote:
> >> > On F
On Mon, Nov 23, 2020 at 08:27:33PM +0100, Thomas Gleixner wrote:
> On Mon, Nov 23 2020 at 14:44, Frederic Weisbecker wrote:
> > On Fri, Nov 13, 2020 at 03:02:21PM +0100, Thomas Gleixner wrote:
> >> + /*
> >> + * Adjust softirq count to SOFTIRQ_OFFSET which makes
&g
On Mon, Nov 23, 2020 at 10:39:34PM +, Alex Belits wrote:
>
> On Mon, 2020-11-23 at 23:29 +0100, Frederic Weisbecker wrote:
> > External Email
> >
> > ---
> > ---
> > On Mon, Nov 23, 2020 at
On Mon, Nov 23, 2020 at 05:58:42PM +, Alex Belits wrote:
> From: Yuri Norov
>
> Make sure that kick_all_cpus_sync() does not call CPUs that are running
> isolated tasks.
>
> Signed-off-by: Yuri Norov
> [abel...@marvell.com: use safe task_isolation_cpumask() implementation]
> Signed-off-by:
Hi Alex,
On Mon, Nov 23, 2020 at 05:58:22PM +, Alex Belits wrote:
> From: Yuri Norov
>
> For nohz_full CPUs the desirable behavior is to receive interrupts
> generated by tick_nohz_full_kick_cpu(). But for hard isolation it's
> obviously not desirable because it breaks isolation.
>
> This p
On Mon, Nov 23, 2020 at 08:27:33PM +0100, Thomas Gleixner wrote:
> On Mon, Nov 23 2020 at 14:44, Frederic Weisbecker wrote:
> > On Fri, Nov 13, 2020 at 03:02:21PM +0100, Thomas Gleixner wrote:
> >> + /*
> >> + * Adjust softirq count to SOFTIRQ_OFFSET which makes
&g