On Mon, Dec 28, 2020 at 11:47:48AM +0800, chenshiyan wrote:
> From: "shiyan.csy"
>
> exit nohz idle before invoking softirq, or it may miss
> some ticks during softirq.
>
> Signed-off-by: Shiyan Chen
> ---
> kernel/softirq.c | 9 +++--
> 1 file changed, 7 insertions(+), 2 deletions(-)
>
>
. McKenney
Fixes: 96d3fd0d315a (rcu: Break call_rcu() deadlock involving scheduler and perf)
Cc: sta...@vger.kernel.org
Cc: Peter Zijlstra
Cc: Rafael J. Wysocki
Cc: Thomas Gleixner
Cc: Ingo Molnar
Signed-off-by: Frederic Weisbecker
---
kernel/sched/idle.c | 18 --
1 file changed, 12
Molnar
Signed-off-by: Frederic Weisbecker
---
arch/arm/mach-imx/cpuidle-imx6q.c | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/arch/arm/mach-imx/cpuidle-imx6q.c b/arch/arm/mach-imx/cpuidle-imx6q.c
index 094337dc1bc7..31a60d257d3d 100644
--- a/arch/arm/mach-imx/cpuidle
idle)
Cc: sta...@vger.kernel.org
Cc: Len Brown
Cc: Peter Zijlstra
Cc: Rafael J. Wysocki
Cc: Thomas Gleixner
Cc: Ingo Molnar
Signed-off-by: Frederic Weisbecker
---
drivers/acpi/processor_idle.c | 10 --
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/drivers/acpi
Molnar
Signed-off-by: Frederic Weisbecker
---
drivers/cpuidle/cpuidle.c | 33 +
1 file changed, 25 insertions(+), 8 deletions(-)
diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
index ef2ea1b12cd8..4cc1ba49ce05 100644
--- a/drivers/cpuidle
Thanks,
Frederic
---
Frederic Weisbecker (4):
sched/idle: Fix missing need_resched() check after rcu_idle_enter()
cpuidle: Fix missing need_resched() check after rcu_idle_enter()
ARM: imx6q: Fix missing need_resched() check after rcu_idle_enter()
ACPI: processor: Fix
On Thu, Dec 17, 2020 at 02:51:58PM +0800, Yunfeng Ye wrote:
>
>
> On 2020/12/15 22:47, Frederic Weisbecker wrote:
> > On Tue, Dec 15, 2020 at 08:06:34PM +0800, Yunfeng Ye wrote:
> >> The idle_exittime field of tick_sched is used to record the time when
> >> the i
On Tue, Dec 15, 2020 at 09:04:07PM -0800, Paul E. McKenney wrote:
> Hello, Frederic,
>
> Are you seeing rcutorture writer stalls? Please see attached for an
> example from testing, search for "Call Trace". I am running an overnight
> test, which should get me some idea of frequency. My thought
On Fri, Nov 13, 2020 at 01:13:32PM +0100, Frederic Weisbecker wrote:
> RCU needs to check if the cpu hotplug lock is held, in the middle of
> other conditions to check the sanity of RCU-nocb. Provide a helper for
> that.
>
> Signed-off-by: Frederic Weisbecker
> Cc: Paul E. Mc
On Tue, Dec 15, 2020 at 08:06:34PM +0800, Yunfeng Ye wrote:
> The idle_exittime field of tick_sched is used to record the time when
> the idle state was left. But currently the idle_exittime is updated in
> the function tick_nohz_restart_sched_tick(), which is not always in idle
> state when
The following commit has been merged into the core/rcu branch of tip:
Commit-ID: e3771c850d3b9349b48449c9a91c98944a08650c
Gitweb: https://git.kernel.org/tip/e3771c850d3b9349b48449c9a91c98944a08650c
Author: Frederic Weisbecker
AuthorDate: Mon, 21 Sep 2020 14:43:40 +02:00
On Sat, Dec 12, 2020 at 01:16:12AM +0100, Thomas Gleixner wrote:
> On Fri, Dec 11 2020 at 23:21, Frederic Weisbecker wrote:
> > On Sun, Dec 06, 2020 at 10:12:54PM +0100, Thomas Gleixner wrote:
> >> tick_handover_do_timer() which is invoked when a CPU is unplugged has a
>
Signed-off-by: Thomas Gleixner
Acked-by: Frederic Weisbecker
Thanks!
On Sun, Dec 06, 2020 at 10:12:54PM +0100, Thomas Gleixner wrote:
> tick_handover_do_timer() which is invoked when a CPU is unplugged has a
> check for cpumask_first(cpu_online_mask) when it tries to hand over the
> tick update duty.
>
> Checking the result of cpumask_first() there is pointless
he timekeeping stale until I realized that stop_machine() is running at that
time. Might be worth adding a comment about that.
Also why not just set it to TICK_DO_TIMER_NONE and be done with it? Perhaps
to avoid having all the CPUs compete and contend on the jiffies update after
stop machine?
If so:
Reviewed-by: Frederic Weisbecker
Thanks.
WRITE_ONCE()
> with smp_load_acquire() / smp_store_release().
>
> On 32bit problem #2 is addressed by protecting the quick check with the
> jiffies sequence counter. The load and stores can be plain because the
> sequence count mechanics provides the required barriers already.
>
> Signed-off-by: Thomas Gleixner
Looks very good! Thanks!
Reviewed-by: Frederic Weisbecker
On Thu, Dec 10, 2020 at 04:46:38PM -0800, Paul E. McKenney wrote:
> > diff --git a/kernel/softirq.c b/kernel/softirq.c
> > index 09229ad82209..7d558cb7a037 100644
> > --- a/kernel/softirq.c
> > +++ b/kernel/softirq.c
> > @@ -650,7 +650,9 @@ static void run_ksoftirqd(unsigned int cpu)
> >
On Thu, Dec 10, 2020 at 01:17:56PM -0800, Paul E. McKenney wrote:
> And please see attached. Lots of output, in fact, enough that it
> was still dumping when the second instance happened.
Thanks!
So the issue is that ksoftirqd is parked on CPU down with vectors
still pending. Either:
1)
Hi,
On Wed, Nov 18, 2020 at 09:52:18AM -0800, Paul E. McKenney wrote:
> Hello, Frederic,
>
> Here is the last few months' pile of warnings from rcutorture runs.
>
> Thanx, Paul
>
> [ 255.098527] NOHZ tick-stop error: Non-RCU local softirq
On Wed, Nov 18, 2020 at 09:54:20AM -0800, Paul E. McKenney wrote:
> On Wed, Nov 18, 2020 at 09:52:18AM -0800, Paul E. McKenney wrote:
> > Hello, Frederic,
> >
> > Here is the last few months' pile of warnings from rcutorture runs.
>
> And this time with scenario names. ;-)
>
>
On Tue, Dec 08, 2020 at 10:24:09AM -0800, Paul E. McKenney wrote:
> > It reduces the code scope running with BH disabled.
> > Also narrowing down helps to understand what it actually protects.
>
> I thought that you would call out unnecessarily delaying other softirq
> handlers. ;-)
>
> But if
On Tue, Dec 08, 2020 at 09:19:27AM -0800, Paul E. McKenney wrote:
> On Tue, Dec 08, 2020 at 04:54:57PM +0100, Frederic Weisbecker wrote:
> > On Tue, Dec 08, 2020 at 06:58:10AM -0800, Paul E. McKenney wrote:
> > > Hello, Frederic,
> > >
> > > Boqun just asked
On Tue, Dec 08, 2020 at 06:58:10AM -0800, Paul E. McKenney wrote:
> Hello, Frederic,
>
> Boqun just asked if RCU callbacks ran in BH-disabled context to avoid
> concurrent execution of the same callback. Of course, this raises the
> question of whether a self-posting callback can have two
Hi Boqun Feng,
On Tue, Dec 08, 2020 at 10:41:31AM +0800, Boqun Feng wrote:
> Hi Frederic,
>
> On Fri, Nov 13, 2020 at 01:13:15PM +0100, Frederic Weisbecker wrote:
> > This keeps growing. Rest assured, most of it is debug code and sanity
> > checks.
> >
> >
pending warnings, which would happen when the task which holds
> + * softirq_ctrl::lock was the only running task on the CPU and blocks on
> + * some other lock.
> + */
> +bool local_bh_blocked(void)
> +{
> + return this_cpu_read(softirq_ctrl.cnt) != 0;
__this_cpu_read()
Reviewed-by: Frederic Weisbecker
> + local_unlock(&softirq_ctrl.lock);
> + }
> +}
> +
> +static inline bool should_wake_ksoftirqd(void)
> +{
> + return !this_cpu_read(softirq_ctrl.cnt);
And that too.
Other than these boring details:
Reviewed-by: Frederic Weisbecker
Thanks.
On Fri, Dec 04, 2020 at 06:01:55PM +0100, Thomas Gleixner wrote:
> +void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
> +{
> + unsigned long flags;
> + int newcnt;
> +
> + WARN_ON_ONCE(in_hardirq());
> +
> + /* First entry of a task into a BH disabled section? */
> +
in one go.
>
> Signed-off-by: Thomas Gleixner
Reviewed-by: Frederic Weisbecker
%gs:0x0(%rip),%eax
> - 9a2: 25 ff ff ff 7f  and $0x7fffffff,%eax
> + 9a2: 25 00 ff ff 00  and $0x00ffff00,%eax
>
> Reported-by: Sebastian Andrzej Siewior
> Signed-off-by: Thomas Gleixner
Reviewed-by: Frederic Weisbecker
Also I'm seeing
local lock can be preempted so it's required to
> keep track of the nest count per task.
>
> Add a RT only counter to task struct and adjust the relevant macros in
> preempt.h.
>
> Signed-off-by: Thomas Gleixner
Reviewed-by: Frederic Weisbecker
On Mon, Dec 07, 2020 at 01:25:13PM +0100, Peter Zijlstra wrote:
> On Mon, Dec 07, 2020 at 02:10:13AM +0100, Frederic Weisbecker wrote:
> > On Sun, Dec 06, 2020 at 10:40:07PM +0100, Thomas Gleixner wrote:
> > > syzbot reported KCSAN data races vs. timer_base::timer
On Mon, Dec 07, 2020 at 01:57:25AM +0100, Thomas Gleixner wrote:
> On Mon, Dec 07 2020 at 01:23, Frederic Weisbecker wrote:
> >> --- a/kernel/sched/cputime.c
> >> +++ b/kernel/sched/cputime.c
> >> @@ -60,7 +60,7 @@ void irqtime_account_irq(struct task_str
On Sun, Dec 06, 2020 at 10:40:07PM +0100, Thomas Gleixner wrote:
> syzbot reported KCSAN data races vs. timer_base::timer_running being set to
> NULL without holding base::lock in expire_timers().
>
> This looks innocent and most reads are clearly not problematic but for a
> non-RT kernel it's
On Fri, Dec 04, 2020 at 06:01:53PM +0100, Thomas Gleixner wrote:
> vtime_account_irq() and irqtime_account_irq() base checks on preempt_count()
> which fails on RT because preempt_count() does not contain the softirq
> accounting which is separate on RT.
>
> These checks do not need the full
The following commit has been merged into the irq/core branch of tip:
Commit-ID: 2b91ec9f551b56751cde48792f1c0a1130358844
Gitweb: https://git.kernel.org/tip/2b91ec9f551b56751cde48792f1c0a1130358844
Author: Frederic Weisbecker
AuthorDate: Wed, 02 Dec 2020 12:57:29 +01:00
The following commit has been merged into the irq/core branch of tip:
Commit-ID: d3759e7184f8f6187e62f8c4e7dcb1f6c47c075a
Gitweb: https://git.kernel.org/tip/d3759e7184f8f6187e62f8c4e7dcb1f6c47c075a
Author: Frederic Weisbecker
AuthorDate: Wed, 02 Dec 2020 12:57:31 +01:00
The following commit has been merged into the irq/core branch of tip:
Commit-ID: 8a6a5920d3286eb0eae9f36a4ec4fc9df511eccb
Gitweb: https://git.kernel.org/tip/8a6a5920d3286eb0eae9f36a4ec4fc9df511eccb
Author: Frederic Weisbecker
AuthorDate: Wed, 02 Dec 2020 12:57:30 +01:00
The following commit has been merged into the irq/core branch of tip:
Commit-ID: 7197688b2006357da75a014e0a76be89ca9c2d46
Gitweb: https://git.kernel.org/tip/7197688b2006357da75a014e0a76be89ca9c2d46
Author: Frederic Weisbecker
AuthorDate: Wed, 02 Dec 2020 12:57:28 +01:00
The following commit has been merged into the irq/core branch of tip:
Commit-ID: d14ce74f1fb376ccbbc0b05ded477ada51253729
Gitweb: https://git.kernel.org/tip/d14ce74f1fb376ccbbc0b05ded477ada51253729
Author: Frederic Weisbecker
AuthorDate: Wed, 02 Dec 2020 12:57:32 +01:00
destination
dispatch decision to the core code and leave only the actual per-index
cputime accounting to the architecture.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Tony Luck
Cc: Fenghua Yu
Cc: Michael Ellerman
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Heiko Carstens
aware of architectures that have
their own way of accounting idle time and convert s390 to use it.
This prepares s390 to get involved in further consolidations of IRQ
time accounting.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Tony Luck
Cc: Fenghua Yu
Cc: Michael Ellerman
Cc
code to handle the extra offset.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Tony Luck
Cc: Fenghua Yu
Cc: Michael Ellerman
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Heiko Carstens
Cc: Vasily Gorbik
Cc: Christian Borntraeger
---
include/linux/hardirq.h | 4
ksoftirqd instead of the hardirq bottom half.
Also tick_irq_enter() then becomes appropriately covered by lockdep.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Tony Luck
Cc: Fenghua Yu
Cc: Michael Ellerman
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Heiko Carstens
Cc: Vasily
irq/core-v3
HEAD: 24a2d6c76759bd4496cbdcd365012c821a984eec
Thanks,
Frederic
---
Frederic Weisbecker (5):
sched/cputime: Remove symbol exports from IRQ time accounting
s390/vtime: Use the generic IRQ entry accounting
sched/vtime: Consolidate IRQ time accounting
i
account_irq_enter_time() and account_irq_exit_time() are not called
from modules. EXPORT_SYMBOL_GPL() can be safely removed from the IRQ
cputime accounting functions called from there.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Tony Luck
Cc: Fenghua Yu
Cc: Michael Ellerman
Cc
On Tue, Dec 01, 2020 at 02:34:49PM +0100, Thomas Gleixner wrote:
> On Tue, Dec 01 2020 at 12:40, Frederic Weisbecker wrote:
> > On Tue, Dec 01, 2020 at 12:33:26PM +0100, Thomas Gleixner wrote:
> >> > /*
> >> > * We do not account for softirq time from k
On Tue, Dec 01, 2020 at 12:33:26PM +0100, Thomas Gleixner wrote:
> On Tue, Dec 01 2020 at 10:20, Peter Zijlstra wrote:
> > On Tue, Dec 01, 2020 at 01:12:25AM +0100, Frederic Weisbecker wrote:
> > Why not something like:
> >
> > void irqtime_account_irq(struct task_struct
On Tue, Dec 01, 2020 at 10:20:11AM +0100, Peter Zijlstra wrote:
> On Tue, Dec 01, 2020 at 01:12:25AM +0100, Frederic Weisbecker wrote:
> > +static s64 irqtime_get_delta(struct irqtime *irqtime)
> > {
> > + int cpu = smp_processor_id();
> > s64 delta;
> >
code to handle the extra offset.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Tony Luck
Cc: Fenghua Yu
Cc: Michael Ellerman
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Heiko Carstens
Cc: Vasily Gorbik
Cc: Christian Borntraeger
---
include/linux/hardirq.h | 4
ksoftirqd instead of the hardirq bottom half.
Also tick_irq_enter() then becomes appropriately covered by lockdep.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Tony Luck
Cc: Fenghua Yu
Cc: Michael Ellerman
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Heiko Carstens
Cc: Vasily
destination
dispatch decision to the core code and leave only the actual per-index
cputime accounting to the architecture.
For now only ia64 and powerpc are handled. s390 will need a slightly
different treatment.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Tony Luck
Cc: Fenghua Yu
account_irq_enter_time() and account_irq_exit_time() are not called
from modules. EXPORT_SYMBOL_GPL() can be safely removed from the IRQ
cputime accounting functions called from there.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Tony Luck
Cc: Fenghua Yu
Cc: Michael Ellerman
Cc
-dynticks.git
irq/core
HEAD: 2e6155a86dba7c53d080d58284ef5c65f487bef0
Frederic Weisbecker (5):
sched/cputime: Remove symbol exports from IRQ time accounting
sched/vtime: Consolidate IRQ time accounting
s390/vtime: Convert to consolidated IRQ time accounting
irqtime: Move irqtime entry
IRQ time accounting is easy:
just keep the current behaviour and redirect generic idle time
accounting to system time accounting.
This removes the need to maintain an ad-hoc implementation of cputime
dispatch decision.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Tony Luck
Cc
On Mon, Nov 23, 2020 at 09:22:08PM +0800, Yunfeng Ye wrote:
> In realtime scenarios, the "nohz_full" parameter is configured. Tick
> interference is not expected when there is only one realtime thread.
> But when the idle thread is switched to the realtime thread, the tick
> timer is restarted
ksoftirqd instead of the hardirq bottom half.
Also tick_irq_enter() then becomes appropriately covered by lockdep.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Tony Luck
Cc: Fenghua Yu
Cc: Michael Ellerman
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Heiko Carstens
Cc: Vasily
destination
dispatch decision to the core code and leave only the actual per-index
cputime accounting to the architecture.
For now only ia64 and powerpc are handled. s390 will need a slightly
different treatment.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Tony Luck
Cc: Fenghua Yu
that thoroughly).
git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
irq/core
HEAD: 9502ee20aed8bb847176e1d7d83ccd0625430744
Frederic Weisbecker (4):
sched/vtime: Consolidate IRQ time accounting
s390/vtime: Convert to consolidated IRQ time accounting
irqtime: Move irqtime
IRQ time accounting is easy:
just keep the current behaviour and redirect generic idle time
accounting to system time accounting.
This removes the need to maintain an ad-hoc implementation of cputime
dispatch decision.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Tony Luck
Cc
code to handle the extra offset.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Tony Luck
Cc: Fenghua Yu
Cc: Michael Ellerman
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Heiko Carstens
Cc: Vasily Gorbik
Cc: Christian Borntraeger
---
include/linux/hardirq.h | 4
On Tue, Nov 24, 2020 at 01:06:15AM +0100, Thomas Gleixner wrote:
> On Tue, Nov 24 2020 at 00:58, Frederic Weisbecker wrote:
> > On Mon, Nov 23, 2020 at 08:27:33PM +0100, Thomas Gleixner wrote:
> >> On Mon, Nov 23 2020 at 14:44, Frederic Weisbecker wrote:
> >> > On F
On Mon, Nov 23, 2020 at 08:27:33PM +0100, Thomas Gleixner wrote:
> On Mon, Nov 23 2020 at 14:44, Frederic Weisbecker wrote:
> > On Fri, Nov 13, 2020 at 03:02:21PM +0100, Thomas Gleixner wrote:
> >> + /*
> >> + * Adjust softirq count to SOFTIRQ_OFFSET which makes
>
On Mon, Nov 23, 2020 at 10:39:34PM +, Alex Belits wrote:
>
> On Mon, 2020-11-23 at 23:29 +0100, Frederic Weisbecker wrote:
> > External Email
> >
> > ---
> > ---
> > On Mon, Nov 23, 2020 at
On Mon, Nov 23, 2020 at 05:58:42PM +, Alex Belits wrote:
> From: Yuri Norov
>
> Make sure that kick_all_cpus_sync() does not call CPUs that are running
> isolated tasks.
>
> Signed-off-by: Yuri Norov
> [abel...@marvell.com: use safe task_isolation_cpumask() implementation]
> Signed-off-by:
Hi Alex,
On Mon, Nov 23, 2020 at 05:58:22PM +, Alex Belits wrote:
> From: Yuri Norov
>
> For nohz_full CPUs the desirable behavior is to receive interrupts
> generated by tick_nohz_full_kick_cpu(). But for hard isolation it's
> obviously not desirable because it breaks isolation.
>
> This
On Mon, Nov 23, 2020 at 08:27:33PM +0100, Thomas Gleixner wrote:
> On Mon, Nov 23 2020 at 14:44, Frederic Weisbecker wrote:
> > On Fri, Nov 13, 2020 at 03:02:21PM +0100, Thomas Gleixner wrote:
> >> + /*
> >> + * Adjust softirq count to SOFTIRQ_OFFSET which makes
>
On Fri, Nov 13, 2020 at 03:02:21PM +0100, Thomas Gleixner wrote:
> +void __local_bh_enable_ip(unsigned long ip, unsigned int cnt)
> +{
> + bool preempt_on = preemptible();
> + unsigned long flags;
> + u32 pending;
> + int curcnt;
> +
> + WARN_ON_ONCE(in_irq());
> +
The following commit has been merged into the core/entry branch of tip:
Commit-ID: 6775de4984ea83ce39f19a40c09f8813d7423831
Gitweb: https://git.kernel.org/tip/6775de4984ea83ce39f19a40c09f8813d7423831
Author: Frederic Weisbecker
AuthorDate: Tue, 17 Nov 2020 16:16:36 +01:00
The following commit has been merged into the core/entry branch of tip:
Commit-ID: 9f68b5b74c48761bcbd7d90cf1426049bdbaabb7
Gitweb: https://git.kernel.org/tip/9f68b5b74c48761bcbd7d90cf1426049bdbaabb7
Author: Frederic Weisbecker
AuthorDate: Tue, 17 Nov 2020 16:16:35 +01:00
The following commit has been merged into the core/entry branch of tip:
Commit-ID: 179a9cf79212bb3b96fb69a314583189cd863c5b
Gitweb: https://git.kernel.org/tip/179a9cf79212bb3b96fb69a314583189cd863c5b
Author: Frederic Weisbecker
AuthorDate: Tue, 17 Nov 2020 16:16:34 +01:00
The following commit has been merged into the core/entry branch of tip:
Commit-ID: 83c2da2e605c73aafcc02df04b2dbf1ccbfc24c0
Gitweb: https://git.kernel.org/tip/83c2da2e605c73aafcc02df04b2dbf1ccbfc24c0
Author: Frederic Weisbecker
AuthorDate: Tue, 17 Nov 2020 16:16:33 +01:00
The following commit has been merged into the core/entry branch of tip:
Commit-ID: d1f250e2205eca9f1264f8e2d3a41fcf38f92d91
Gitweb: https://git.kernel.org/tip/d1f250e2205eca9f1264f8e2d3a41fcf38f92d91
Author: Frederic Weisbecker
AuthorDate: Tue, 17 Nov 2020 16:16:37 +01:00
On Fri, Nov 13, 2020 at 03:02:21PM +0100, Thomas Gleixner wrote:
> +void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
> +{
> + unsigned long flags;
> + int newcnt;
> +
> + WARN_ON_ONCE(in_hardirq());
> +
> + /* First entry of a task into a BH disabled section? */
> +
On Thu, Nov 19, 2020 at 01:59:03PM -0800, Paul E. McKenney wrote:
> On Thu, Nov 19, 2020 at 01:30:24AM +0100, Frederic Weisbecker wrote:
> > The implementation expects `lscpu` to have a "CPU: " line, for example:
> >
> > CPU(s): 8
> >
> > Bu
On Thu, Nov 19, 2020 at 07:34:13PM +0100, Thomas Gleixner wrote:
> On Thu, Nov 19 2020 at 13:18, Frederic Weisbecker wrote:
> > On Fri, Nov 13, 2020 at 03:02:19PM +0100, Thomas Gleixner wrote:
> >> RT requires the softirq to be preemptible and uses a per CPU local lock to
> &
patch at least:
Reviewed-by: Frederic Weisbecker
On Fri, Nov 13, 2020 at 03:02:19PM +0100, Thomas Gleixner wrote:
> RT requires the softirq to be preemptible and uses a per CPU local lock to
> protect BH disabled sections and softirq processing. Therefore RT cannot
> use the preempt counter to keep track of BH disabled/serving.
>
> Add a RT
following warning (still with the local taste):
kvm-test-1-run.sh: ligne 138 : test: : nombre entier attendu comme expression
(in English: "line 138: test: : integer expected as expression")
Just use a command whose output every language agrees with.
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Josh Triplett
Cc: Steven Rostedt
C
On Wed, Nov 18, 2020 at 03:05:03PM +0100, Peter Zijlstra wrote:
> On Wed, Nov 18, 2020 at 02:48:26PM +0100, Frederic Weisbecker wrote:
> > On Wed, Nov 18, 2020 at 08:39:47AM +0100, Peter Zijlstra wrote:
> > > On Tue, Nov 17, 2020 at 04:16:32PM +0100, Frederic Weisbecker wrote
On Wed, Nov 18, 2020 at 08:39:47AM +0100, Peter Zijlstra wrote:
> On Tue, Nov 17, 2020 at 04:16:32PM +0100, Frederic Weisbecker wrote:
> > Frederic Weisbecker (5):
> > context_tracking: Introduce HAVE_CONTEXT_TRACKING_OFFSTACK
> > context_tracking: Don't implement
()
anymore and has therefore earned CONFIG_HAVE_CONTEXT_TRACKING_OFFSTACK.
Signed-off-by: Frederic Weisbecker
Cc: Marcelo Tosatti
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Phil Auld
Cc: Thomas Gleixner
---
arch/x86/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/Kconfig b
tracking state had to be saved on the task stack
and set back to CONTEXT_KERNEL temporarily in order to safely switch to
another task.
Only a few archs use it now (namely sparc64 and powerpc64) and those
implementing HAVE_CONTEXT_TRACKING_OFFSTACK definitely can't rely on it.
Signed-off-by: Frederic
removed and we can now get rid of these workarounds
in this architecture.
Create a Kconfig feature to express this achievement.
Signed-off-by: Frederic Weisbecker
Cc: Marcelo Tosatti
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Phil Auld
Cc: Thomas Gleixner
---
arch/Kconfig | 17
.
Signed-off-by: Frederic Weisbecker
Cc: Marcelo Tosatti
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Phil Auld
Cc: Thomas Gleixner
---
kernel/sched/core.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d2003a7d5ab5..c23d7cb5aee3 100644
/frederic/linux-dynticks.git
core/isolation-v3
HEAD: b358a96584150feacc20d7d10410fd1b7c7c19fe
Thanks,
Frederic
---
Frederic Weisbecker (5):
context_tracking: Introduce HAVE_CONTEXT_TRACKING_OFFSTACK
context_tracking: Don't implement exception_enter/exit
CONFIG_HAVE_CONTEXT_TRACKING_OFFSTACK and those can
therefore afford not to implement this hack.
Signed-off-by: Frederic Weisbecker
Cc: Marcelo Tosatti
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Phil Auld
Cc: Thomas Gleixner
---
include/linux/context_tracking.h | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Josh Triplett
Cc: Steven Rostedt
Cc: Mathieu Desnoyers
Cc: Lai Jiangshan
Cc: Joel Fernandes
Cc: Neeraj Upadhyay
Cc: Thomas Gleixner
Cc: Boqun Feng
---
kernel/rcu/tree.c | 12 +---
1 file changed, 9
will be to wait for all pending callbacks
to be processed before completing a CPU down operation.
Suggested-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Josh Triplett
Cc: Steven Rostedt
Cc: Mathieu Desnoyers
Cc: Lai Jiangshan
Cc: Joel Fernandes
Cc
Gather the segcblist properties in a common map to avoid spreading
booleans in the structure. And this prepares for the offloaded state to
be mutable on runtime.
Inspired-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Josh Triplett
Cc: Steven Rostedt
Cc
("rcutorture: Test runtime toggling of CPUs' callback offloading") should
be moved on top of this pile and include this fixup.
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Josh Triplett
Cc: Steven Rostedt
Cc: Mathieu Desnoyers
Cc: Lai Jiangshan
Cc: Joel Fernandes
stop processing the callbacks locally.
Ordering must be carefully enforced so that the callbacks that used to
be processed locally without locking must have their latest updates
visible by the time they get processed by the kthreads.
Inspired-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
during these intermediate
states. Some pieces there may still be necessary.
Inspired-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Josh Triplett
Cc: Steven Rostedt
Cc: Mathieu Desnoyers
Cc: Lai Jiangshan
Cc: Joel Fernandes
Cc: Neeraj Upadhyay
Cc: Thomas
RCU needs to check if the cpu hotplug lock is held, in the middle of
other conditions to check the sanity of RCU-nocb. Provide a helper for
that.
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Josh Triplett
Cc: Steven Rostedt
Cc: Mathieu Desnoyers
Cc: Lai Jiangshan
Cc: Joel
Make sure the nocb timer can't fire anymore before we reach the final
de-offload state. Spuriously waking up the GP kthread is no big deal but
we must prevent the timer callback from executing without nocb locking.
Inspired-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
Cc: Paul E
("rcutorture: Test runtime toggling of CPUs' callback offloading") should
be moved on top of this pile and include this fixup.
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Josh Triplett
Cc: Steven Rostedt
Cc: Mathieu Desnoyers
Cc: Lai Jiangshan
Cc: Joel Fernandes
Add periodic toggling of 7 CPUs over 8 every second in order to test
NOCB toggle code. Choose TREE01 for that as it's already testing nocb.
Inspired-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Josh Triplett
Cc: Steven Rostedt
Cc: Mathieu Desnoyers
Cc
Make sure to handle the pending bypass queue before we switch to the
final de-offload state. We'll have to be careful and later set
SEGCBLIST_SOFTIRQ_ONLY before re-enabling IRQs again, or new bypass
callbacks could be queued in the meantime.
Inspired-by: Paul E. McKenney
Signed-off-by: Frederic
are also safe. NOCB kthreads and timers have
their own means of synchronization against the offloaded state updaters.
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Josh Triplett
Cc: Steven Rostedt
Cc: Mathieu Desnoyers
Cc: Lai Jiangshan
Cc: Joel Fernandes
Cc: Neeraj Upadhyay
RCU needs to check if the current code is running a specific timer
callback, in the middle of other conditions to check the sanity of
RCU-nocb. Provide a helper for that.
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Josh Triplett
Cc: Steven Rostedt
Cc: Mathieu Desnoyers
Cc
rcu_do_batch() will be callable concurrently by softirqs and offloaded
processing. So make sure we actually call cond_resched() only from the
offloaded context.
Inspired-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Josh Triplett
Cc: Steven Rostedt
Cc