[PATCH 02/19] rcu/nocb: Provide basic callback offloading state machine bits

2020-11-13 Thread Frederic Weisbecker
for the state machine that will carry up all the steps to enforce correctness while serving callbacks processing all along. Inspired-by: Paul E. McKenney Signed-off-by: Frederic Weisbecker Cc: Paul E. McKenney Cc: Josh Triplett Cc: Steven Rostedt Cc: Mathieu Desnoyers Cc: Lai Jiangshan Cc: Joel

[PATCH 10/19] rcu/nocb: Set SEGCBLIST_SOFTIRQ_ONLY at the very last stage of de-offloading

2020-11-13 Thread Frederic Weisbecker
Set SEGCBLIST_SOFTIRQ_ONLY once everything is settled. After that, the callbacks are handled locklessly and locally. Inspired-by: Paul E. McKenney Signed-off-by: Frederic Weisbecker Cc: Paul E. McKenney Cc: Josh Triplett Cc: Steven Rostedt Cc: Mathieu Desnoyers Cc: Lai Jiangshan Cc: Joel

[PATCH 06/19] rcu/nocb: De-offloading GP kthread

2020-11-13 Thread Frederic Weisbecker
notify the de-offloading worker so that it can resume the de-offloading while being sure that callbacks won't be handled remotely anymore. Inspired-by: Paul E. McKenney Signed-off-by: Frederic Weisbecker Cc: Paul E. McKenney Cc: Josh Triplett Cc: Steven Rostedt Cc: Mathieu Desnoyers Cc: Lai

[PATCH 13/19] rcu/nocb: Locally accelerate callbacks as long as offloading isn't complete

2020-11-13 Thread Frederic Weisbecker
The local callbacks processing checks if some callbacks need acceleration. Keep that behaviour under nocb lock protection when rcu_core() executes concurrently with GP/CB kthreads. Inspired-by: Paul E. McKenney Signed-off-by: Frederic Weisbecker Cc: Paul E. McKenney Cc: Josh Triplett Cc

[PATCH 04/19] rcu/nocb: De-offloading CB kthread

2020-11-13 Thread Frederic Weisbecker
must notify the de-offloading worker so that it can resume the de-offloading while being sure that callbacks won't be handled remotely anymore. Inspired-by: Paul E. McKenney Signed-off-by: Frederic Weisbecker Cc: Paul E. McKenney Cc: Josh Triplett Cc: Steven Rostedt Cc: Mathieu Desnoyers Cc

[PATCH 00/19] rcu/nocb: De-offload and re-offload support v4

2020-11-13 Thread Frederic Weisbecker
ill passes TREE01 (but I had to fight!) git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git rcu/nocb-toggle-v4 HEAD: 579e15efa48fb6fc4ecf14961804051f385807fe Thanks, Frederic --- Frederic Weisbecker (19): rcu/nocb: Turn enabled/offload states into a c

Re: [PATCH 4/5] context_tracking: Only define schedule_user() on !HAVE_CONTEXT_TRACKING_OFFSTACK archs

2020-11-11 Thread Frederic Weisbecker
On Wed, Nov 11, 2020 at 03:34:58PM +0100, Peter Zijlstra wrote: > On Tue, Oct 27, 2020 at 04:08:26PM +0100, Frederic Weisbecker wrote: > > schedule_user() was traditionally used by the entry code's tail to > > preempt userspace after the call to user_enter(). Indeed the call to

Re: [PATCH 2/5] context_tracking: Don't implement exception_enter/exit() on CONFIG_HAVE_CONTEXT_TRACKING_OFFSTACK

2020-11-11 Thread Frederic Weisbecker
On Wed, Nov 11, 2020 at 03:32:18PM +0100, Peter Zijlstra wrote: > On Tue, Oct 27, 2020 at 04:08:24PM +0100, Frederic Weisbecker wrote: > > An architecture that provides this Kconfig feature doesn't need to > > store the context tracking state on the task stack because its entry

Re: [RFC PATCH 6/7] preempt/dynamic: Provide irqentry_exit_cond_resched() static call

2020-11-10 Thread Frederic Weisbecker
On Tue, Nov 10, 2020 at 11:32:21AM +0100, Peter Zijlstra wrote: > On Tue, Nov 10, 2020 at 01:56:08AM +0100, Frederic Weisbecker wrote: > > [convert from static key to static call, only define static call when > > PREEMPT_DYNAMIC] > > > noinstr void irqentry_e

Re: [RFC PATCH 1/7] static_call/x86: Add __static_call_returnl0()

2020-11-10 Thread Frederic Weisbecker
On Tue, Nov 10, 2020 at 11:13:07AM +0100, Peter Zijlstra wrote: > On Tue, Nov 10, 2020 at 10:55:15AM +0100, Peter Zijlstra wrote: > > On Tue, Nov 10, 2020 at 01:56:03AM +0100, Frederic Weisbecker wrote: > > > > > [fweisbec: s/disp16/data16, integrate into text_get_insn(),

Re: [RFC PATCH 1/7] static_call/x86: Add __static_call_returnl0()

2020-11-10 Thread Frederic Weisbecker
On Tue, Nov 10, 2020 at 10:55:15AM +0100, Peter Zijlstra wrote: > On Tue, Nov 10, 2020 at 01:56:03AM +0100, Frederic Weisbecker wrote: > > diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c > > index 2400ad62f330..37592f576a10 100644 > > --- a/arch/x86

[RFC PATCH 2/7] static_call: Pull some static_call declarations to the type headers

2020-11-09 Thread Frederic Weisbecker
Molnar Cc: Michal Hocko Cc: Paul E. McKenney Signed-off-by: Frederic Weisbecker --- include/linux/static_call.h | 30 include/linux/static_call_types.h | 33 +++ 2 files changed, 33 insertions(+), 30 deletions(-) diff --git a/include

[RFC PATCH 1/7] static_call/x86: Add __static_call_returnl0()

2020-11-09 Thread Frederic Weisbecker
Molnar Cc: Michal Hocko Cc: Paul E. McKenney [fweisbec: s/disp16/data16, integrate into text_get_insn(), elaborate comment on the resulting insn, emulate on int3 trap, provide validation, uninline __static_call_return0() for HAVE_STATIC_CALL] Signed-off-by: Frederic Weisbecker --- arch/x86

[RFC PATCH 3/7] preempt: Introduce CONFIG_PREEMPT_DYNAMIC

2020-11-09 Thread Frederic Weisbecker
() / __preempt_schedule_notrace_function()). Suggested-by: Peter Zijlstra Signed-off-by: Michal Hocko Cc: Peter Zijlstra (Intel) Cc: Thomas Gleixner Cc: Mel Gorman Cc: Ingo Molnar Cc: Paul E. McKenney [Added documentation, reorganize dependencies on top of static call, etc...] Signed-off-by: Frederic Weisbecker

[RFC PATCH 5/7] preempt/dynamic: Provide preempt_schedule[_notrace]() static calls

2020-11-09 Thread Frederic Weisbecker
h provided wrapper, if any. Signed-off-by: Peter Zijlstra (Intel) Cc: Thomas Gleixner Cc: Mel Gorman Cc: Ingo Molnar Cc: Michal Hocko Cc: Paul E. McKenney [only define static calls when PREEMPT_DYNAMIC, make it less dependent on x86 with __preempt_schedule_func()] Signed-off-by: Frederic Weisbecker

[RFC PATCH 0/7] preempt: Tune preemption flavour on boot v3

2020-11-09 Thread Frederic Weisbecker
This is a reworked version of what came out of the debate between Michal Hocko and Peter Zijlstra in order to tune the preemption mode from kernel parameters, see v2 in: https://lore.kernel.org/lkml/20201009122926.29962-1-mho...@kernel.org/ I mostly fetched the raw diff from Peter's proof of

[RFC PATCH 6/7] preempt/dynamic: Provide irqentry_exit_cond_resched() static call

2020-11-09 Thread Frederic Weisbecker
n't passed. Signed-off-by: Peter Zijlstra (Intel) Cc: Thomas Gleixner Cc: Mel Gorman Cc: Ingo Molnar Cc: Michal Hocko Cc: Paul E. McKenney [convert from static key to static call, only define static call when PREEMPT_DYNAMIC] Signed-off-by: Frederic Weisbecker --- include/linux/entry-com

[RFC PATCH 7/7] preempt/dynamic: Support dynamic preempt with preempt= boot option

2020-11-09 Thread Frederic Weisbecker
Signed-off-by: Frederic Weisbecker --- kernel/sched/core.c | 67 - 1 file changed, 66 insertions(+), 1 deletion(-) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 6715caa17ea7..84ac05d2df3a 100644 --- a/kernel/sched/core.c +++ b/kernel/sc

[RFC PATCH 4/7] preempt/dynamic: Provide cond_resched() and might_resched() static calls

2020-11-09 Thread Frederic Weisbecker
heir calls are ignored when preempt= isn't passed. Signed-off-by: Peter Zijlstra (Intel) Cc: Thomas Gleixner Cc: Mel Gorman Cc: Ingo Molnar Cc: Michal Hocko Cc: Paul E. McKenney [branch might_resched() directly to __cond_resched(), only define static calls when PREEMPT_DYNAMIC] Signed-off-by

Re: [PATCH 05/16] rcu: De-offloading CB kthread

2020-11-04 Thread Frederic Weisbecker
On Wed, Nov 04, 2020 at 10:42:09PM +0800, Boqun Feng wrote: > On Wed, Nov 04, 2020 at 03:31:35PM +0100, Frederic Weisbecker wrote: > [...] > > > > > > > + rcu_segcblist_offload(cblist, false); > > > > + raw_spin_unlock_rcu_node(rnp); > >

Re: [PATCH v9 4/7] rcu/trace: Add tracing for how segcb list changes

2020-11-04 Thread Frederic Weisbecker
On Wed, Nov 04, 2020 at 06:08:07AM -0800, Paul E. McKenney wrote: > On Tue, Nov 03, 2020 at 04:17:31PM +0100, Frederic Weisbecker wrote: > > On Tue, Nov 03, 2020 at 09:26:00AM -0500, Joel Fernandes (Google) wrote: > > > +/* > > > + * Return how many CBs each segment alo

Re: [PATCH 05/16] rcu: De-offloading CB kthread

2020-11-04 Thread Frederic Weisbecker
On Mon, Nov 02, 2020 at 09:38:24PM +0800, Boqun Feng wrote: > Hi Frederic, > > Could you copy the r...@vger.kernel.org if you have another version, it > will help RCU hobbyists like me to catch up news in RCU, thanks! ;-) Sure! Will do! > > +static int __rcu_nocb_rdp_deoffload(struct rcu_data

Re: [PATCH v9 3/7] srcu: Fix invoke_rcu_callbacks() segcb length adjustment

2020-11-03 Thread Frederic Weisbecker
On Tue, Nov 03, 2020 at 10:07:38AM -0500, Joel Fernandes wrote: > On Tue, Nov 03, 2020 at 03:47:14PM +0100, Frederic Weisbecker wrote: > > On Tue, Nov 03, 2020 at 09:25:59AM -0500, Joel Fernandes (Google) wrote: > > > With earlier patches, the negative counting of the unsegmented

Re: [PATCH v9 4/7] rcu/trace: Add tracing for how segcb list changes

2020-11-03 Thread Frederic Weisbecker
On Tue, Nov 03, 2020 at 09:26:00AM -0500, Joel Fernandes (Google) wrote: > +/* > + * Return how many CBs each segment along with their gp_seq values. > + * > + * This function is O(N) where N is the number of segments. Only used from > + * tracing code which is usually disabled in production. > +

Re: [PATCH v9 3/7] srcu: Fix invoke_rcu_callbacks() segcb length adjustment

2020-11-03 Thread Frederic Weisbecker
to adjust > the segmented list's length. > > Reviewed-by: Frederic Weisbecker > Suggested-by: Frederic Weisbecker > Signed-off-by: Joel Fernandes (Google) This breaks bisection, you need to either fix up the previous patch by adding this diff inside or better yet: expand what you did in

Re: [PATCH v3 4/6] irq_work: Unconditionally build on SMP

2020-10-28 Thread Frederic Weisbecker
On Wed, Oct 28, 2020 at 12:07:11PM +0100, Peter Zijlstra wrote: This may need a changelog :-) > > Signed-off-by: Peter Zijlstra (Intel) > --- > kernel/Makefile |1 + > kernel/irq_work.c |3 +++ > 2 files changed, 4 insertions(+) > > --- a/kernel/Makefile > +++ b/kernel/Makefile >

Re: [PATCH v3 2/6] smp: Cleanup smp_call_function*()

2020-10-28 Thread Frederic Weisbecker
On Wed, Oct 28, 2020 at 12:07:09PM +0100, Peter Zijlstra wrote: > Get rid of the __call_single_node union and cleanup the API a little > to avoid external code relying on the structure layout as much. > > Signed-off-by: Peter Zijlstra (Intel) Reviewed-by: Frederic Weisbecker

Re: [PATCH v3 1/6] irq_work: Cleanup

2020-10-28 Thread Frederic Weisbecker
_flags), }, > + .func = (_func),\ > +} Reviewed-by: Frederic Weisbecker Thanks.

Re: [PATCH v3 5/6] irq_work: Provide irq_work_queue_remote()

2020-10-28 Thread Frederic Weisbecker
On Wed, Oct 28, 2020 at 03:53:24PM +0100, Peter Zijlstra wrote: > On Wed, Oct 28, 2020 at 02:40:46PM +0100, Frederic Weisbecker wrote: > > On Wed, Oct 28, 2020 at 12:07:12PM +0100, Peter Zijlstra wrote: > > > While the traditional irq_work relies on the ability to self-IPI, it

Re: [PATCH v3 5/6] irq_work: Provide irq_work_queue_remote()

2020-10-28 Thread Frederic Weisbecker
On Wed, Oct 28, 2020 at 12:07:12PM +0100, Peter Zijlstra wrote: > While the traditional irq_work relies on the ability to self-IPI, it > makes sense to provide an unconditional irq_work_queue_remote() > interface. We may need a reason as well here. > --- a/kernel/rcu/tree.c > +++

Re: [PATCH v3 3/6] irq_work: Optimize irq_work_single()

2020-10-28 Thread Frederic Weisbecker
On Wed, Oct 28, 2020 at 12:07:10PM +0100, Peter Zijlstra wrote: > Trade one atomic op for a full memory barrier. > > Signed-off-by: Peter Zijlstra (Intel) Reviewed-by: Frederic Weisbecker

[PATCH 5/5] x86: Support HAVE_CONTEXT_TRACKING_OFFSTACK

2020-10-27 Thread Frederic Weisbecker
() anymore and has therefore earned CONFIG_HAVE_CONTEXT_TRACKING_OFFSTACK. Signed-off-by: Frederic Weisbecker Cc: Marcelo Tosatti Cc: Paul E. McKenney Cc: Peter Zijlstra Cc: Phil Auld Cc: Thomas Gleixner --- arch/x86/Kconfig | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/x86/Kconfig b

[PATCH 0/5] context_tracking: Flatter archs not using exception_enter/exit() v2

2020-10-27 Thread Frederic Weisbecker
/frederic/linux-dynticks.git core/isolation-v2 HEAD: 79f60f3dd0e0aea8b17c825371d8697444ae5faf Thanks, Frederic --- Frederic Weisbecker (5): context_tracking: Introduce HAVE_CONTEXT_TRACKING_OFFSTACK context_tracking: Don't implement exception_enter/exit

[PATCH 4/5] context_tracking: Only define schedule_user() on !HAVE_CONTEXT_TRACKING_OFFSTACK archs

2020-10-27 Thread Frederic Weisbecker
tracking state had to be saved on the task stack and set back to CONTEXT_KERNEL temporarily in order to safely switch to another task. Only a few archs use it now and those implementing HAVE_CONTEXT_TRACKING_OFFSTACK definitely can't rely on it. Signed-off-by: Frederic Weisbecker Cc: Marcelo

[PATCH 1/5] context_tracking: Introduce HAVE_CONTEXT_TRACKING_OFFSTACK

2020-10-27 Thread Frederic Weisbecker
removed and we can now get rid of these workarounds in this architecture. Create a Kconfig feature to express this achievement. Signed-off-by: Frederic Weisbecker Cc: Marcelo Tosatti Cc: Paul E. McKenney Cc: Peter Zijlstra Cc: Phil Auld Cc: Thomas Gleixner --- arch/Kconfig | 17

[PATCH 3/5] sched: Detect call to schedule from critical entry code

2020-10-27 Thread Frederic Weisbecker
. Signed-off-by: Frederic Weisbecker Cc: Marcelo Tosatti Cc: Paul E. McKenney Cc: Peter Zijlstra Cc: Phil Auld Cc: Thomas Gleixner --- kernel/sched/core.c | 1 + 1 file changed, 1 insertion(+) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index d2003a7d5ab5..c23d7cb5aee3 100644

[PATCH 2/5] context_tracking: Don't implement exception_enter/exit() on CONFIG_HAVE_CONTEXT_TRACKING_OFFSTACK

2020-10-27 Thread Frederic Weisbecker
explicitly annotated. Hence the exception_enter()/exception_exit() couple doesn't need to be implemented in this case. Signed-off-by: Frederic Weisbecker Cc: Marcelo Tosatti Cc: Paul E. McKenney Cc: Peter Zijlstra Cc: Phil Auld Cc: Thomas Gleixner --- include/linux/context_tracking.h | 6 -- 1

Re: [RFC PATCH v2 0/5] allow overriding default preempt mode from command line

2020-10-27 Thread Frederic Weisbecker
On Fri, Oct 09, 2020 at 07:45:54PM +0200, Peter Zijlstra wrote: > +DEFINE_STATIC_KEY_TRUE(irq_preemption_key); > + > +/* > + * SC:cond_resched > + * SC:might_resched > + * SC:preempt_schedule > + * SC:preempt_schedule_notrace > + * SB:irq_preemption_key > + * > + * > + * ZERO > + * cond_resched

Re: [patch 1/2] nohz: only wakeup a single target cpu when kicking a task

2020-10-26 Thread Frederic Weisbecker
On Thu, Oct 15, 2020 at 12:12:35PM +0200, Peter Zijlstra wrote: > On Thu, Oct 15, 2020 at 01:40:53AM +0200, Frederic Weisbecker wrote: > > > re tick_nohz_task_switch() being placed wrong, it should probably be > > > placed before finish_lock_switch(). Something like so. >

Re: [PATCH 3/5] sched: Detect call to schedule from critical entry code

2020-10-26 Thread Frederic Weisbecker
On Wed, Oct 07, 2020 at 10:34:36AM +0100, Mel Gorman wrote: > On Mon, Oct 05, 2020 at 02:26:48PM +0200, Frederic Weisbecker wrote: > > On Mon, Oct 05, 2020 at 01:23:53PM +0200, Peter Zijlstra wrote: > > > On Mon, Oct 05, 2020 at 12:49:17PM +0200, Frederic Weisbecker wrote:

Re: [PATCH v8 5/6] rcu/tree: segcblist: Remove redundant smp_mb()s

2020-10-26 Thread Frederic Weisbecker
On Wed, Oct 21, 2020 at 03:08:12PM -0400, Joel Fernandes (Google) wrote: > This memory barrier is not needed as rcu_segcblist_add_len() already > includes a memory barrier *before* the length of the list is updated. *before* and *after*. As you have both cases below. Thanks > >

Re: [PATCH v8 3/6] rcu/trace: Add tracing for how segcb list changes

2020-10-26 Thread Frederic Weisbecker
On Wed, Oct 21, 2020 at 03:08:10PM -0400, Joel Fernandes (Google) wrote: > Track how the segcb list changes before/after acceleration, during > queuing and during dequeuing. > > This has proved useful to discover an optimization to avoid unwanted GP > requests when there are no callbacks

Re: [PATCH v8 2/6] rcu/segcblist: Add counters to segcblist datastructure

2020-10-26 Thread Frederic Weisbecker
On Mon, Oct 26, 2020 at 01:45:57AM -0400, Joel Fernandes wrote: > On Mon, Oct 26, 2020 at 01:50:58AM +0100, Frederic Weisbecker wrote: > > On Wed, Oct 21, 2020 at 03:08:09PM -0400, Joel Fernandes (Google) wrote: > > > bool rcu_segcblist_accelerate(struct rcu_segcblist *rs

Re: [PATCH v8 2/6] rcu/segcblist: Add counters to segcblist datastructure

2020-10-26 Thread Frederic Weisbecker
On Mon, Oct 26, 2020 at 01:40:43AM -0400, Joel Fernandes wrote: > On Mon, Oct 26, 2020 at 01:32:12AM +0100, Frederic Weisbecker wrote: > > You seem to have forgotten the suggestion? > > > > rclp->len += rcu_segcblist_get_seglen(rsclp, i) > > I decided to keep it

Re: [PATCH v8 2/6] rcu/segcblist: Add counters to segcblist datastructure

2020-10-25 Thread Frederic Weisbecker
On Wed, Oct 21, 2020 at 03:08:09PM -0400, Joel Fernandes (Google) wrote: > bool rcu_segcblist_accelerate(struct rcu_segcblist *rsclp, unsigned long seq) > { > - int i; > + int i, j; > > WARN_ON_ONCE(!rcu_segcblist_is_enabled(rsclp)); > if (rcu_segcblist_restempty(rsclp,

Re: [PATCH v8 2/6] rcu/segcblist: Add counters to segcblist datastructure

2020-10-25 Thread Frederic Weisbecker
On Wed, Oct 21, 2020 at 03:08:09PM -0400, Joel Fernandes (Google) wrote: > @@ -307,6 +317,7 @@ void rcu_segcblist_extract_done_cbs(struct rcu_segcblist > *rsclp, > > if (!rcu_segcblist_ready_cbs(rsclp)) > return; /* Nothing to do. */ > + rclp->len =

[PATCH 01/16] rcu: Implement rcu_segcblist_is_offloaded() config dependent

2020-10-23 Thread Frederic Weisbecker
This simplifies the usage of this API and avoids checking the kernel config from the callers. Suggested-by: Paul E. McKenney Signed-off-by: Frederic Weisbecker Cc: Paul E. McKenney Cc: Josh Triplett Cc: Steven Rostedt Cc: Mathieu Desnoyers Cc: Lai Jiangshan Cc: Joel Fernandes Cc: Neeraj

[PATCH 14/16] rcu: Locally accelerate callbacks as long as offloading isn't complete

2020-10-23 Thread Frederic Weisbecker
The local callbacks processing checks if some callbacks need acceleration. Keep that behaviour under nocb lock protection when rcu_core() executes concurrently with GP/CB kthreads. Inspired-by: Paul E. McKenney Signed-off-by: Frederic Weisbecker Cc: Paul E. McKenney Cc: Josh Triplett Cc

[PATCH 16/16] tools/rcutorture: Support nocb toggle in TREE01

2020-10-23 Thread Frederic Weisbecker
Add periodic toggling of 7 CPUs over 8 every second in order to test NOCB toggle code. Choose TREE01 for that as it's already testing nocb. Inspired-by: Paul E. McKenney Signed-off-by: Frederic Weisbecker Cc: Paul E. McKenney Cc: Josh Triplett Cc: Steven Rostedt Cc: Mathieu Desnoyers Cc

[PATCH 04/16] rcu/nocb: Always init segcblist on CPU up

2020-10-23 Thread Frederic Weisbecker
-by: Paul E. McKenney Signed-off-by: Frederic Weisbecker Cc: Paul E. McKenney Cc: Josh Triplett Cc: Steven Rostedt Cc: Mathieu Desnoyers Cc: Lai Jiangshan Cc: Joel Fernandes Cc: Neeraj Upadhyay --- kernel/rcu/tree.c | 12 +--- 1 file changed, 9 insertions(+), 3 deletions(-) diff

[PATCH 15/16] rcutorture: Test runtime toggling of CPUs' callback offloading

2020-10-23 Thread Frederic Weisbecker
From: "Paul E. McKenney" Frederic Weisbecker is adding the ability to change the rcu_nocbs state of CPUs at runtime, that is, to offload and deoffload their RCU callback processing without the need to reboot. As the old saying goes, "if it ain't tested, it don't work", so t

[PATCH 11/16] rcu: Set SEGCBLIST_SOFTIRQ_ONLY at the very last stage of de-offloading

2020-10-23 Thread Frederic Weisbecker
Set SEGCBLIST_SOFTIRQ_ONLY once everything is settled. After that, the callbacks are handled locklessly and locally. Inspired-by: Paul E. McKenney Signed-off-by: Frederic Weisbecker Cc: Paul E. McKenney Cc: Josh Triplett Cc: Steven Rostedt Cc: Mathieu Desnoyers Cc: Lai Jiangshan Cc: Joel

[PATCH 12/16] rcu/nocb: Only cond_resched() from actual offloaded batch processing

2020-10-23 Thread Frederic Weisbecker
rcu_do_batch() will be callable concurrently by softirqs and offloaded processing. So make sure we actually call cond_resched() only from the offloaded context. Inspired-by: Paul E. McKenney Signed-off-by: Frederic Weisbecker Cc: Paul E. McKenney Cc: Josh Triplett Cc: Steven Rostedt Cc

[PATCH 06/16] rcu/nocb: Don't deoffload an offline CPU with pending work

2020-10-23 Thread Frederic Weisbecker
will be to wait for all pending callbacks to be processed before completing a CPU down operation. Suggested-by: Paul E. McKenney Signed-off-by: Frederic Weisbecker Cc: Paul E. McKenney Cc: Josh Triplett Cc: Steven Rostedt Cc: Mathieu Desnoyers Cc: Lai Jiangshan Cc: Joel Fernandes Cc

[PATCH 13/16] rcu: Process batch locally as long as offloading isn't complete

2020-10-23 Thread Frederic Weisbecker
during these intermediate states. Some pieces there may still be necessary. Inspired-by: Paul E. McKenney Signed-off-by: Frederic Weisbecker Cc: Paul E. McKenney Cc: Josh Triplett Cc: Steven Rostedt Cc: Mathieu Desnoyers Cc: Lai Jiangshan Cc: Joel Fernandes Cc: Neeraj Upadhyay --- kernel/rcu

[PATCH 02/16] rcu: Turn enabled/offload states into a common flag

2020-10-23 Thread Frederic Weisbecker
Gather the segcblist properties in a common map to avoid spreading booleans in the structure. And this prepares for the offloaded state to be mutable at runtime. Inspired-by: Paul E. McKenney Signed-off-by: Frederic Weisbecker Cc: Paul E. McKenney Cc: Josh Triplett Cc: Steven Rostedt Cc

[PATCH 00/16] rcu/nocb: De-offload and re-offload support v3

2020-10-23 Thread Frederic Weisbecker
e5cc6634810985b405baca Thanks, Frederic --- Frederic Weisbecker (15): rcu: Implement rcu_segcblist_is_offloaded() config dependent rcu: Turn enabled/offload states into a common flag rcu: Provide basic callback offloading state machine bits rcu/nocb: Always init seg

[PATCH 09/16] rcu: Shutdown nocb timer on de-offloading

2020-10-23 Thread Frederic Weisbecker
Make sure the nocb timer can't fire anymore before we reach the final de-offload state. Spuriously waking up the GP kthread is no big deal but we must prevent from executing the timer callback without nocb locking. Inspired-by: Paul E. McKenney Signed-off-by: Frederic Weisbecker Cc: Paul E

[PATCH 08/16] rcu: Re-offload support

2020-10-23 Thread Frederic Weisbecker
stop processing the callbacks locally. Ordering must be carefully enforced so that the callbacks that used to be processed locally without locking must have their latest updates visible by the time they get processed by the kthreads. Inspired-by: Paul E. McKenney Signed-off-by: Frederic Weisbecker

[PATCH 03/16] rcu: Provide basic callback offloading state machine bits

2020-10-23 Thread Frederic Weisbecker
for the state machine that will carry up all the steps to enforce correctness while serving callbacks processing all along. Inspired-by: Paul E. McKenney Signed-off-by: Frederic Weisbecker Cc: Paul E. McKenney Cc: Josh Triplett Cc: Steven Rostedt Cc: Mathieu Desnoyers Cc: Lai Jiangshan Cc: Joel

[PATCH 10/16] rcu: Flush bypass before setting SEGCBLIST_SOFTIRQ_ONLY

2020-10-23 Thread Frederic Weisbecker
Make sure to handle the pending bypass queue before we switch to the final de-offload state. We'll have to be careful and later set SEGCBLIST_SOFTIRQ_ONLY before re-enabling again IRQs, or new bypass callbacks could be queued in the meantime. Inspired-by: Paul E. McKenney Signed-off-by: Frederic

[PATCH 07/16] rcu: De-offloading GP kthread

2020-10-23 Thread Frederic Weisbecker
notify the de-offloading worker so that it can resume the de-offloading while being sure that callbacks won't be handled remotely anymore. Inspired-by: Paul E. McKenney Signed-off-by: Frederic Weisbecker Cc: Paul E. McKenney Cc: Josh Triplett Cc: Steven Rostedt Cc: Mathieu Desnoyers Cc: Lai

[PATCH 05/16] rcu: De-offloading CB kthread

2020-10-23 Thread Frederic Weisbecker
must notify the de-offloading worker so that it can resume the de-offloading while being sure that callbacks won't be handled remotely anymore. Inspired-by: Paul E. McKenney Signed-off-by: Frederic Weisbecker Cc: Paul E. McKenney Cc: Josh Triplett Cc: Steven Rostedt Cc: Mathieu Desnoyers Cc

Re: [PATCH v4 2/4] sched/isolation: Extend nohz_full to isolate managed IRQs

2020-10-23 Thread Frederic Weisbecker
es isolation for maintaining lower latency for the listed CPUs. > > > > Suggested-by: Frederic Weisbecker Ah and yes there is this tag :-p So that's my bad, I really thought this thing was about managed IRQ. The problem is that I can't find a single documentation about them so I'm

Re: [patch 1/2] nohz: only wakeup a single target cpu when kicking a task

2020-10-22 Thread Frederic Weisbecker
On Tue, Oct 20, 2020 at 03:52:45PM -0300, Marcelo Tosatti wrote: > On Thu, Oct 15, 2020 at 01:40:53AM +0200, Frederic Weisbecker wrote: > > Alternatively, we could rely on p->on_rq which is set to TASK_ON_RQ_QUEUED > > at wake up time, prior to the schedule() full

Re: [PATCH v7 2/6] rcu/segcblist: Add counters to segcblist datastructure

2020-10-21 Thread Frederic Weisbecker
On Wed, Oct 21, 2020 at 11:33:14AM -0400, j...@joelfernandes.org wrote: > On Thu, Oct 15, 2020 at 02:21:58PM +0200, Frederic Weisbecker wrote: > > On Wed, Oct 14, 2020 at 08:22:57PM -0400, Joel Fernandes (Google) wrote: > > > Add counting of segment lengths of segme

Re: [PATCH v7 6/6] rcu/segcblist: Add additional comments to explain smp_mb()

2020-10-21 Thread Frederic Weisbecker
On Wed, Oct 21, 2020 at 11:57:04AM -0700, Joel Fernandes wrote: > On Mon, Oct 19, 2020 at 5:37 AM Frederic Weisbecker > wrote: > > Now, reading the documentation of rcu_barrier() (thanks to you!): > > > > Pseudo-code using rcu_barrier() is as follows: > >

Re: [PATCH v4 4/4] PCI: Limit pci_alloc_irq_vectors() to housekeeping CPUs

2020-10-19 Thread Frederic Weisbecker
On Mon, Oct 19, 2020 at 01:11:37PM +0200, Peter Zijlstra wrote: > > > And what are the (desired) semantics vs hotplug? Using a cpumask without > > > excluding hotplug is racy. > > > > The housekeeping_mask should still remain constant, isn't? > > In any case, I can double check this. > > The

Re: [PATCH v7 6/6] rcu/segcblist: Add additional comments to explain smp_mb()

2020-10-19 Thread Frederic Weisbecker
On Sat, Oct 17, 2020 at 08:35:56PM -0400, j...@joelfernandes.org wrote: > On Sat, Oct 17, 2020 at 03:29:54PM +0200, Frederic Weisbecker wrote: > > > C rcubarrier+ctrldep > > > > > > (* > > > * Result: Never > > > * > > > *

Re: [PATCH v7 6/6] rcu/segcblist: Add additional comments to explain smp_mb()

2020-10-17 Thread Frederic Weisbecker
On Fri, Oct 16, 2020 at 11:19:41PM -0400, j...@joelfernandes.org wrote: > On Fri, Oct 16, 2020 at 09:27:53PM -0400, j...@joelfernandes.org wrote: > [..] > > > > + * > > > > + * Memory barrier is needed after adding to length for the case > > > > + * where length transitions from 0 -> 1. This is

Re: [PATCH v7 6/6] rcu/segcblist: Add additional comments to explain smp_mb()

2020-10-15 Thread Frederic Weisbecker
On Wed, Oct 14, 2020 at 08:23:01PM -0400, Joel Fernandes (Google) wrote: > Memory barriers are needed when updating the full length of the > segcblist, however it is not fully clearly why one is needed before and > after. This patch therefore adds additional comments to the function > header to

Re: [PATCH v7 2/6] rcu/segcblist: Add counters to segcblist datastructure

2020-10-15 Thread Frederic Weisbecker
On Wed, Oct 14, 2020 at 08:22:57PM -0400, Joel Fernandes (Google) wrote: > Add counting of segment lengths of segmented callback list. > > This will be useful for a number of things such as knowing how big the > ready-to-execute segment have gotten. The immediate benefit is ability > to trace how

Re: [patch 1/2] nohz: only wakeup a single target cpu when kicking a task

2020-10-14 Thread Frederic Weisbecker
On Wed, Oct 14, 2020 at 10:33:21AM +0200, Peter Zijlstra wrote: > On Tue, Oct 13, 2020 at 02:13:28PM -0300, Marcelo Tosatti wrote: > > > > Yes but if the task isn't running, run_posix_cpu_timers() doesn't have > > > anything to elapse. So indeed we can spare the IPI if the task is not > > >

Re: [PATCH V7 4/4] softirq: Allow early break the softirq processing loop

2020-10-13 Thread Frederic Weisbecker
On Fri, Oct 09, 2020 at 04:01:39PM +0100, Qais Yousef wrote: > On 09/29/20 13:44, Frederic Weisbecker wrote: > > > that will delay the net_rx/tx softirq to process, Peter's branch > > > maybe can slove > > > the problem > > > git://git.kernel.org/pu

Re: [PATCH v6 2/4] rcu/segcblist: Add counters to segcblist datastructure

2020-10-12 Thread Frederic Weisbecker
On Wed, Sep 23, 2020 at 11:22:09AM -0400, Joel Fernandes (Google) wrote: > +/* Return number of callbacks in a segment of the segmented callback list. */ > +static void rcu_segcblist_add_seglen(struct rcu_segcblist *rsclp, int seg, > long v) > +{ > +#ifdef CONFIG_RCU_NOCB_CPU > +

Re: [PATCH v6 1/4] rcu/tree: Make rcu_do_batch count how many callbacks were executed

2020-10-09 Thread Frederic Weisbecker
d in s/have/how > rcu_do_batch() itself, and uses that to update the per-CPU segcb list's ->len > field, without relying on the negativity of rcl->len. > > Signed-off-by: Joel Fernandes (Google) Reviewed-by: Frederic Weisbecker Thanks.

Re: [patch 1/2] nohz: only wakeup a single target cpu when kicking a task

2020-10-08 Thread Frederic Weisbecker
ordering of writes > > > to task->cpu and task->tick_dep_mask. > > > > > > From: Frederic Weisbecker > > > Suggested-by: Peter Zijlstra > > > Signed-off-by: Frederic Weisbecker > > > Signed-off-by: Marcelo Tosatti > > > > >

Re: [patch 1/2] nohz: only wakeup a single target cpu when kicking a task

2020-10-08 Thread Frederic Weisbecker
On Thu, Oct 08, 2020 at 05:28:44PM +0200, Peter Zijlstra wrote: > On Thu, Oct 08, 2020 at 10:59:40AM -0400, Peter Xu wrote: > > On Wed, Oct 07, 2020 at 03:01:52PM -0300, Marcelo Tosatti wrote: > > > +static void tick_nohz_kick_task(struct task_struct *tsk) > > > +{ > > > + int cpu = task_cpu(tsk);

Re: [PATCH v2] rcu/tree: nocb: Avoid raising softirq when there are ready to execute CBs

2020-10-07 Thread Frederic Weisbecker
not to invoke RCU core processing to the time when the ready callbacks > were invoked by the rcuoc kthread. This provides further evidence that > there is no need to invoke rcu_core() for offloaded callbacks that are > ready to invoke. > > Cc: Neeraj Upadhyay > Signed-off-by: Joel Fernandes (Google) > Signed-off-by: Paul E. McKenney Reviewed-by: Frederic Weisbecker Thanks!

Re: [EXT] Re: [PATCH v4 10/13] task_isolation: don't interrupt CPUs with tick_nohz_full_kick_cpu()

2020-10-06 Thread Frederic Weisbecker
On Sun, Oct 04, 2020 at 03:22:09PM +, Alex Belits wrote: > > On Thu, 2020-10-01 at 16:44 +0200, Frederic Weisbecker wrote: > > > @@ -268,7 +269,8 @@ static void tick_nohz_full_kick(void) > > > */ > > > void tick_nohz_full_kick_cpu(int cpu) > > &

Re: [EXT] Re: [PATCH v4 03/13] task_isolation: userspace hard isolation from kernel

2020-10-06 Thread Frederic Weisbecker
On Mon, Oct 05, 2020 at 02:52:49PM -0400, Nitesh Narayan Lal wrote: > > On 10/4/20 7:14 PM, Frederic Weisbecker wrote: > > On Sun, Oct 04, 2020 at 02:44:39PM +, Alex Belits wrote: > >> On Thu, 2020-10-01 at 15:56 +0200, Frederic Weisbecker wrote:

Re: [PATCH 3/5] sched: Detect call to schedule from critical entry code

2020-10-05 Thread Frederic Weisbecker
On Mon, Oct 05, 2020 at 01:23:53PM +0200, Peter Zijlstra wrote: > On Mon, Oct 05, 2020 at 12:49:17PM +0200, Frederic Weisbecker wrote: > > Detect calls to schedule() between user_enter() and user_exit(). Those > > are symptoms of early entry code that either forgot to

[PATCH 2/5] context_tracking: Don't implement exception_enter/exit() on CONFIG_HAVE_CONTEXT_TRACKING_OFFSTACK

2020-10-05 Thread Frederic Weisbecker
explicitly annotated. Hence the exception_enter()/exception_exit() couple doesn't need to be implemented in this case. Signed-off-by: Frederic Weisbecker Cc: Marcelo Tosatti Cc: Paul E. McKenney Cc: Peter Zijlstra Cc: Phil Auld Cc: Thomas Gleixner --- include/linux/context_tracking.h | 6 -- 1

[PATCH 3/5] sched: Detect call to schedule from critical entry code

2020-10-05 Thread Frederic Weisbecker
. Signed-off-by: Frederic Weisbecker Cc: Marcelo Tosatti Cc: Paul E. McKenney Cc: Peter Zijlstra Cc: Phil Auld Cc: Thomas Gleixner --- kernel/sched/core.c | 1 + 1 file changed, 1 insertion(+) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 2d95dc3f4644..d31a79e073e3 100644

[PATCH 4/5] context_tracking: Only define schedule_user() on !HAVE_CONTEXT_TRACKING_OFFSTACK archs

2020-10-05 Thread Frederic Weisbecker
tracking state had to be saved on the task stack and set back to CONTEXT_KERNEL temporarily in order to safely switch to another task. Only a few archs use it now and those implementing HAVE_CONTEXT_TRACKING_OFFSTACK definitely can't rely on it. Signed-off-by: Frederic Weisbecker Cc: Marcelo

[PATCH 1/5] context_tracking: Introduce HAVE_CONTEXT_TRACKING_OFFSTACK

2020-10-05 Thread Frederic Weisbecker
removed and we can now get rid of these workarounds in this architecture. Create a Kconfig feature to express this achievement. Signed-off-by: Frederic Weisbecker Cc: Marcelo Tosatti Cc: Paul E. McKenney Cc: Peter Zijlstra Cc: Phil Auld Cc: Thomas Gleixner --- arch/Kconfig | 17

[PATCH 5/5] x86: Support HAVE_CONTEXT_TRACKING_OFFSTACK

2020-10-05 Thread Frederic Weisbecker
() anymore and has therefore earned CONFIG_HAVE_CONTEXT_TRACKING_OFFSTACK. Signed-off-by: Frederic Weisbecker Cc: Marcelo Tosatti Cc: Paul E. McKenney Cc: Peter Zijlstra Cc: Phil Auld Cc: Thomas Gleixner --- arch/x86/Kconfig | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/x86/Kconfig b

[PATCH 0/5] context_tracking: Flatter archs not using exception_enter/exit()

2020-10-05 Thread Frederic Weisbecker
meeting some requirements that at least x86 just achieved recently (I haven't checked other archs yet). git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git core/isolation HEAD: d52271b6d5d02ead1916d65b013d11a7d90501b9 Thanks, Frederic --- Frederic Weisbecker (5

Re: [EXT] Re: [PATCH v4 03/13] task_isolation: userspace hard isolation from kernel

2020-10-04 Thread Frederic Weisbecker
On Sun, Oct 04, 2020 at 02:44:39PM +, Alex Belits wrote: > On Thu, 2020-10-01 at 15:56 +0200, Frederic Weisbecker wrote: > > External Email > > > > --- > > --- > > On Wed, Jul 22, 2020 at 02

Re: [PATCH v4 0/4] isolation: limit msix vectors to housekeeping CPUs

2020-10-01 Thread Frederic Weisbecker
| 2 +- > 4 files changed, 30 insertions(+), 2 deletions(-) Acked-by: Frederic Weisbecker Peter, if you're ok with the set, I guess this should go through the scheduler tree? Thanks.

Re: [PATCH v4 11/13] task_isolation: net: don't flush backlog on CPUs running isolated tasks

2020-10-01 Thread Frederic Weisbecker
On Wed, Jul 22, 2020 at 02:58:24PM +, Alex Belits wrote: > From: Yuri Norov > > If a CPU runs an isolated task, there's no backlog on it, and > so we don't need to flush it. What guarantees that we have no backlog on it? > Currently flush_all_backlogs() > enqueues corresponding work on all

Re: [PATCH v4 10/13] task_isolation: don't interrupt CPUs with tick_nohz_full_kick_cpu()

2020-10-01 Thread Frederic Weisbecker
On Wed, Jul 22, 2020 at 02:57:33PM +, Alex Belits wrote: > From: Yuri Norov > > For nohz_full CPUs the desirable behavior is to receive interrupts > generated by tick_nohz_full_kick_cpu(). But for hard isolation it's > obviously not desirable because it breaks isolation. > > This patch adds

Re: [PATCH v4 03/13] task_isolation: userspace hard isolation from kernel

2020-10-01 Thread Frederic Weisbecker
On Wed, Jul 22, 2020 at 02:49:49PM +, Alex Belits wrote: > +/** > + * task_isolation_kernel_enter() - clear low-level task isolation flag > + * > + * This should be called immediately after entering kernel. > + */ > +static inline void task_isolation_kernel_enter(void) > +{ > + unsigned

Re: [PATCH v4 03/13] task_isolation: userspace hard isolation from kernel

2020-10-01 Thread Frederic Weisbecker
On Wed, Jul 22, 2020 at 02:49:49PM +, Alex Belits wrote: > +/* > + * Description of the last two tasks that ran isolated on a given CPU. > + * This is intended only for messages about isolation breaking. We > + * don't want any references to actual task while accessing this from > + * CPU that

Re: [PATCH V7 4/4] softirq: Allow early break the softirq processing loop

2020-09-29 Thread Frederic Weisbecker
On Mon, Sep 28, 2020 at 06:51:48PM +0800, jun qian wrote: > On Fri, Sep 25, 2020 at 8:42 AM, Frederic Weisbecker wrote: > > > > On Thu, Sep 24, 2020 at 05:37:42PM +0200, Thomas Gleixner wrote: > > > Subject: softirq; Prevent starvation of higher softirq vectors > > [...] > >

Re: [PATCH V7 4/4] softirq: Allow early break the softirq processing loop

2020-09-26 Thread Frederic Weisbecker
On Sat, Sep 26, 2020 at 12:42:25AM +0200, Thomas Gleixner wrote: > On Fri, Sep 25 2020 at 02:42, Frederic Weisbecker wrote: > > > On Thu, Sep 24, 2020 at 05:37:42PM +0200, Thomas Gleixner wrote: > >> Subject: softirq; Prevent starvation of h

Re: [PATCH V7 4/4] softirq: Allow early break the softirq processing loop

2020-09-24 Thread Frederic Weisbecker
On Thu, Sep 24, 2020 at 05:37:42PM +0200, Thomas Gleixner wrote: > Subject: softirq; Prevent starvation of higher softirq vectors [...] > + /* > + * Word swap pending to move the not yet handled bits of the previous > + * run first and then clear the duplicates in the newly raised

Re: [PATCH V7 4/4] softirq: Allow early break the softirq processing loop

2020-09-24 Thread Frederic Weisbecker
On Fri, Sep 25, 2020 at 01:08:11AM +0200, Frederic Weisbecker wrote: > On Thu, Sep 24, 2020 at 05:37:42PM +0200, Thomas Gleixner wrote: > > Subject: softirq; Prevent starvation of higher softirq vectors > > From: Thomas Gleixner > > Date: Thu, 24 Sep 2020 10:40:24 +0200

Re: [PATCH V7 4/4] softirq: Allow early break the softirq processing loop

2020-09-24 Thread Frederic Weisbecker
On Thu, Sep 24, 2020 at 05:37:42PM +0200, Thomas Gleixner wrote: > Subject: softirq; Prevent starvation of higher softirq vectors > From: Thomas Gleixner > Date: Thu, 24 Sep 2020 10:40:24 +0200 > > From: Thomas Gleixner > > The early termination of the softirq processing loop can lead to

Re: [PATCH v2 1/4] sched/isolation: API to get housekeeping online CPUs

2020-09-24 Thread Frederic Weisbecker
On Wed, Sep 23, 2020 at 02:11:23PM -0400, Nitesh Narayan Lal wrote: > Introduce a new API hk_num_online_cpus(), that can be used to > retrieve the number of online housekeeping CPUs that are meant to handle > managed IRQ jobs. > > This API is introduced for the drivers that were previously
