for the state machine that will carry out all the steps needed to
enforce correctness while callbacks continue to be processed throughout.
Inspired-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Josh Triplett
Cc: Steven Rostedt
Cc: Mathieu Desnoyers
Cc: Lai Jiangshan
Cc: Joel
Set SEGCBLIST_SOFTIRQ_ONLY once everything is settled. After that, the
callbacks are handled locklessly and locally.
Inspired-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Josh Triplett
Cc: Steven Rostedt
Cc: Mathieu Desnoyers
Cc: Lai Jiangshan
Cc: Joel
notify the de-offloading worker so that it can resume
the de-offloading while being sure that callbacks won't be handled
remotely anymore.
Inspired-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Josh Triplett
Cc: Steven Rostedt
Cc: Mathieu Desnoyers
Cc: Lai
The local callback processing checks whether some callbacks need
acceleration. Keep that behaviour under nocb lock protection when
rcu_core() executes concurrently with the GP/CB kthreads.
Inspired-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Josh Triplett
Cc
must notify the de-offloading worker so that it can resume
the de-offloading while being sure that callbacks won't be handled
remotely anymore.
Inspired-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Josh Triplett
Cc: Steven Rostedt
Cc: Mathieu Desnoyers
Cc
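A minimal sketch of the hand-off described in the two snippets above, using
approximate names taken from this series (nocb_state_wq, SEGCBLIST_KTHREAD_CB
and the segcblist flag helpers are assumptions, not verbatim patch code): the
kthread acknowledges that it stopped handling callbacks remotely, then wakes
the de-offloading worker that is waiting on the state.

	/* Kthread side: acknowledge that callbacks won't be handled
	 * remotely anymore, then let the de-offloading worker resume. */
	rcu_segcblist_clear_flags(&rdp->cblist, SEGCBLIST_KTHREAD_CB);
	swake_up_one(&rdp->nocb_state_wq);

	/* De-offloading worker side: wait for that acknowledgment before
	 * moving to the next step of the state machine. */
	swait_event_exclusive(rdp->nocb_state_wq,
			      !rcu_segcblist_test_flags(&rdp->cblist,
							SEGCBLIST_KTHREAD_CB));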
ill passes TREE01 (but I had to fight!)
git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
rcu/nocb-toggle-v4
HEAD: 579e15efa48fb6fc4ecf14961804051f385807fe
Thanks,
Frederic
---
Frederic Weisbecker (19):
rcu/nocb: Turn enabled/offload states into a c
On Wed, Nov 11, 2020 at 03:34:58PM +0100, Peter Zijlstra wrote:
> On Tue, Oct 27, 2020 at 04:08:26PM +0100, Frederic Weisbecker wrote:
> > schedule_user() was traditionally used by the entry code's tail to
> > preempt userspace after the call to user_enter(). Indeed the call to
On Wed, Nov 11, 2020 at 03:32:18PM +0100, Peter Zijlstra wrote:
> On Tue, Oct 27, 2020 at 04:08:24PM +0100, Frederic Weisbecker wrote:
> > An architecture that provides this Kconfig feature doesn't need to
> > store the context tracking state on the task stack because its entry
On Tue, Nov 10, 2020 at 11:32:21AM +0100, Peter Zijlstra wrote:
> On Tue, Nov 10, 2020 at 01:56:08AM +0100, Frederic Weisbecker wrote:
> > [convert from static key to static call, only define static call when
> > PREEMPT_DYNAMIC]
>
> > noinstr void irqentry_e
On Tue, Nov 10, 2020 at 11:13:07AM +0100, Peter Zijlstra wrote:
> On Tue, Nov 10, 2020 at 10:55:15AM +0100, Peter Zijlstra wrote:
> > On Tue, Nov 10, 2020 at 01:56:03AM +0100, Frederic Weisbecker wrote:
> >
> > > [fweisbec: s/disp16/data16, integrate into text_get_insn(),
On Tue, Nov 10, 2020 at 10:55:15AM +0100, Peter Zijlstra wrote:
> On Tue, Nov 10, 2020 at 01:56:03AM +0100, Frederic Weisbecker wrote:
> > diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
> > index 2400ad62f330..37592f576a10 100644
> > --- a/arch/x86
Molnar
Cc: Michal Hocko
Cc: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
---
include/linux/static_call.h | 30
include/linux/static_call_types.h | 33 +++
2 files changed, 33 insertions(+), 30 deletions(-)
diff --git a/include
Molnar
Cc: Michal Hocko
Cc: Paul E. McKenney
[fweisbec: s/disp16/data16, integrate into text_get_insn(), elaborate
comment on the resulting insn, emulate on int3 trap, provide validation,
uninline __static_call_return0() for HAVE_STATIC_CALL]
Signed-off-by: Frederic Weisbecker
---
arch/x86
() /
__preempt_schedule_notrace_function()).
Suggested-by: Peter Zijlstra
Signed-off-by: Michal Hocko
Cc: Peter Zijlstra (Intel)
Cc: Thomas Gleixner
Cc: Mel Gorman
Cc: Ingo Molnar
Cc: Paul E. McKenney
[Added documentation, reorganize dependencies on top of static call,
etc...]
Signed-off-by: Frederic Weisbecker
h provided wrapper, if any.
Signed-off-by: Peter Zijlstra (Intel)
Cc: Thomas Gleixner
Cc: Mel Gorman
Cc: Ingo Molnar
Cc: Michal Hocko
Cc: Paul E. McKenney
[only define static calls when PREEMPT_DYNAMIC, make it less dependent
on x86 with __preempt_schedule_func()]
Signed-off-by: Frederic Weisbecker
This is a reworked version of what came out of the debate between Michal
Hocko and Peter Zijlstra in order to tune the preemption mode from
kernel parameters, see v2 in:
https://lore.kernel.org/lkml/20201009122926.29962-1-mho...@kernel.org/
I mostly fetched the raw diff from Peter's proof of
n't passed.
Signed-off-by: Peter Zijlstra (Intel)
Cc: Thomas Gleixner
Cc: Mel Gorman
Cc: Ingo Molnar
Cc: Michal Hocko
Cc: Paul E. McKenney
[convert from static key to static call, only define static call when
PREEMPT_DYNAMIC]
Signed-off-by: Frederic Weisbecker
---
include/linux/entry-com
Signed-off-by: Frederic Weisbecker
---
kernel/sched/core.c | 67 -
1 file changed, 66 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 6715caa17ea7..84ac05d2df3a 100644
--- a/kernel/sched/core.c
+++ b/kernel/sc
heir calls are
ignored when preempt= isn't passed.
Signed-off-by: Peter Zijlstra (Intel)
Cc: Thomas Gleixner
Cc: Mel Gorman
Cc: Ingo Molnar
Cc: Michal Hocko
Cc: Paul E. McKenney
[branch might_resched() directly to __cond_resched(), only define static
calls when PREEMPT_DYNAMIC]
Signed-off-by
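For readers unfamiliar with static calls, a generic, self-contained sketch of
the mechanism these PREEMPT_DYNAMIC patches rely on (my_op, my_default,
my_noop and the boot parameter are made up for illustration, not part of the
series): callers always go through static_call(), and the target is
re-pointed once at boot according to a kernel parameter, with no run-time
branch left in the fast path.

	#include <linux/init.h>
	#include <linux/string.h>
	#include <linux/static_call.h>

	static int my_default(int x)
	{
		return x + 1;
	}

	static int my_noop(int x)
	{
		return 0;
	}

	DEFINE_STATIC_CALL(my_op, my_default);

	/* All callers go through the static call site. */
	int my_op_call(int x)
	{
		return static_call(my_op)(x);
	}

	/* Re-point the call target once, from a boot parameter handler. */
	static int __init my_op_setup(char *str)
	{
		if (str && !strcmp(str, "off"))
			static_call_update(my_op, my_noop);
		return 1;
	}
	__setup("my_op=", my_op_setup);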
On Wed, Nov 04, 2020 at 10:42:09PM +0800, Boqun Feng wrote:
> On Wed, Nov 04, 2020 at 03:31:35PM +0100, Frederic Weisbecker wrote:
> [...]
> > >
> > > > + rcu_segcblist_offload(cblist, false);
> > > > + raw_spin_unlock_rcu_node(rnp);
> >
On Wed, Nov 04, 2020 at 06:08:07AM -0800, Paul E. McKenney wrote:
> On Tue, Nov 03, 2020 at 04:17:31PM +0100, Frederic Weisbecker wrote:
> > On Tue, Nov 03, 2020 at 09:26:00AM -0500, Joel Fernandes (Google) wrote:
> > > +/*
> > > + * Return how many CBs each segment alo
On Mon, Nov 02, 2020 at 09:38:24PM +0800, Boqun Feng wrote:
> Hi Frederic,
>
> Could you copy the r...@vger.kernel.org if you have another version, it
> will help RCU hobbyists like me to catch up news in RCU, thanks! ;-)
Sure! Will do!
> > +static int __rcu_nocb_rdp_deoffload(struct rcu_data
On Tue, Nov 03, 2020 at 10:07:38AM -0500, Joel Fernandes wrote:
> On Tue, Nov 03, 2020 at 03:47:14PM +0100, Frederic Weisbecker wrote:
> > On Tue, Nov 03, 2020 at 09:25:59AM -0500, Joel Fernandes (Google) wrote:
> > > With earlier patches, the negative counting of the unsegmented
On Tue, Nov 03, 2020 at 09:26:00AM -0500, Joel Fernandes (Google) wrote:
> +/*
> + * Return how many CBs each segment along with their gp_seq values.
> + *
> + * This function is O(N) where N is the number of segments. Only used from
> + * tracing code which is usually disabled in production.
> +
to adjust
> the segmented list's length.
>
> Reviewed-by: Frederic Weisbecker
> Suggested-by: Frederic Weisbecker
> Signed-off-by: Joel Fernandes (Google)
This breaks bisection; you need to either fix up the previous patch
by adding this diff inside or, better yet, expand what you did
in
On Wed, Oct 28, 2020 at 12:07:11PM +0100, Peter Zijlstra wrote:
This may need a changelog :-)
>
> Signed-off-by: Peter Zijlstra (Intel)
> ---
> kernel/Makefile |1 +
> kernel/irq_work.c |3 +++
> 2 files changed, 4 insertions(+)
>
> --- a/kernel/Makefile
> +++ b/kernel/Makefile
>
On Wed, Oct 28, 2020 at 12:07:09PM +0100, Peter Zijlstra wrote:
> Get rid of the __call_single_node union and cleanup the API a little
> to avoid external code relying on the structure layout as much.
>
> Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Frederic Weisbecker
_flags), },
> + .func = (_func),\
> +}
Reviewed-by: Frederic Weisbecker
Thanks.
On Wed, Oct 28, 2020 at 03:53:24PM +0100, Peter Zijlstra wrote:
> On Wed, Oct 28, 2020 at 02:40:46PM +0100, Frederic Weisbecker wrote:
> > On Wed, Oct 28, 2020 at 12:07:12PM +0100, Peter Zijlstra wrote:
> > > While the traditional irq_work relies on the ability to self-IPI, it
On Wed, Oct 28, 2020 at 12:07:12PM +0100, Peter Zijlstra wrote:
> While the traditional irq_work relies on the ability to self-IPI, it
> makes sense to provide an unconditional irq_work_queue_remote()
> interface.
We may need a reason as well here.
> --- a/kernel/rcu/tree.c
> +++
On Wed, Oct 28, 2020 at 12:07:10PM +0100, Peter Zijlstra wrote:
> Trade one atomic op for a full memory barrier.
>
> Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Frederic Weisbecker
()
anymore and has therefore earned CONFIG_HAVE_CONTEXT_TRACKING_OFFSTACK.
Signed-off-by: Frederic Weisbecker
Cc: Marcelo Tosatti
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Phil Auld
Cc: Thomas Gleixner
---
arch/x86/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/Kconfig b
/frederic/linux-dynticks.git
core/isolation-v2
HEAD: 79f60f3dd0e0aea8b17c825371d8697444ae5faf
Thanks,
Frederic
---
Frederic Weisbecker (5):
context_tracking: Introduce HAVE_CONTEXT_TRACKING_OFFSTACK
context_tracking: Don't implement exception_enter/exit
tracking state had to be saved on the task stack
and set back to CONTEXT_KERNEL temporarily in order to safely switch to
another task.
Only a few archs use it now and those implementing
HAVE_CONTEXT_TRACKING_OFFSTACK definitely can't rely on it.
Signed-off-by: Frederic Weisbecker
Cc: Marcelo
removed and we can now get rid of these workarounds
in this architecture.
Create a Kconfig feature to express this achievement.
Signed-off-by: Frederic Weisbecker
Cc: Marcelo Tosatti
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Phil Auld
Cc: Thomas Gleixner
---
arch/Kconfig | 17
.
Signed-off-by: Frederic Weisbecker
Cc: Marcelo Tosatti
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Phil Auld
Cc: Thomas Gleixner
---
kernel/sched/core.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d2003a7d5ab5..c23d7cb5aee3 100644
explicitly annotated.
Hence the exception_enter()/exception_exit() couple doesn't need to be
implemented in this case.
Signed-off-by: Frederic Weisbecker
Cc: Marcelo Tosatti
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Phil Auld
Cc: Thomas Gleixner
---
include/linux/context_tracking.h | 6 --
1
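A sketch of what "doesn't need to be implemented" amounts to under
HAVE_CONTEXT_TRACKING_OFFSTACK (assumed shape, not the verbatim patch): the
pair simply compiles down to no-ops, since the tracking state no longer
needs to be saved on the task stack.

	/* With HAVE_CONTEXT_TRACKING_OFFSTACK, nothing to save or restore. */
	static inline enum ctx_state exception_enter(void)
	{
		return 0;
	}

	static inline void exception_exit(enum ctx_state prev_ctx)
	{
	}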
On Fri, Oct 09, 2020 at 07:45:54PM +0200, Peter Zijlstra wrote:
> +DEFINE_STATIC_KEY_TRUE(irq_preemption_key);
> +
> +/*
> + * SC:cond_resched
> + * SC:might_resched
> + * SC:preempt_schedule
> + * SC:preempt_schedule_notrace
> + * SB:irq_preemption_key
> + *
> + *
> + * ZERO
> + * cond_resched
On Thu, Oct 15, 2020 at 12:12:35PM +0200, Peter Zijlstra wrote:
> On Thu, Oct 15, 2020 at 01:40:53AM +0200, Frederic Weisbecker wrote:
> > > re tick_nohz_task_switch() being placed wrong, it should probably be
> > > placed before finish_lock_switch(). Something like so.
>
On Wed, Oct 07, 2020 at 10:34:36AM +0100, Mel Gorman wrote:
> On Mon, Oct 05, 2020 at 02:26:48PM +0200, Frederic Weisbecker wrote:
> > On Mon, Oct 05, 2020 at 01:23:53PM +0200, Peter Zijlstra wrote:
> > > On Mon, Oct 05, 2020 at 12:49:17PM +0200, Frederic Weisbecker wrote:
&g
On Wed, Oct 21, 2020 at 03:08:12PM -0400, Joel Fernandes (Google) wrote:
> This memory barrier is not needed as rcu_segcblist_add_len() already
> includes a memory barrier *before* the length of the list is updated.
*before* and *after*.
As you have both cases below.
Thanks
>
>
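For reference, the barrier placement being discussed sits in
rcu_segcblist_add_len() and looks roughly like this (a sketch, assuming the
nocb case where ->len is an atomic_long_t; not the verbatim kernel code):

	static void rcu_segcblist_add_len(struct rcu_segcblist *rsclp, long v)
	{
	#ifdef CONFIG_RCU_NOCB_CPU
		smp_mb__before_atomic(); /* Prior list updates visible before ->len. */
		atomic_long_add(v, &rsclp->len);
		smp_mb__after_atomic();  /* New ->len visible before later list reads. */
	#else
		smp_mb(); /* Same ordering intent without the atomic ->len. */
		WRITE_ONCE(rsclp->len, rsclp->len + v);
		smp_mb();
	#endif
	}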
On Wed, Oct 21, 2020 at 03:08:10PM -0400, Joel Fernandes (Google) wrote:
> Track how the segcb list changes before/after acceleration, during
> queuing and during dequeuing.
>
> This has proved useful to discover an optimization to avoid unwanted GP
> requests when there are no callbacks
On Mon, Oct 26, 2020 at 01:45:57AM -0400, Joel Fernandes wrote:
> On Mon, Oct 26, 2020 at 01:50:58AM +0100, Frederic Weisbecker wrote:
> > On Wed, Oct 21, 2020 at 03:08:09PM -0400, Joel Fernandes (Google) wrote:
> > > bool rcu_segcblist_accelerate(struct rcu_segcblist *rs
On Mon, Oct 26, 2020 at 01:40:43AM -0400, Joel Fernandes wrote:
> On Mon, Oct 26, 2020 at 01:32:12AM +0100, Frederic Weisbecker wrote:
> > You seem to have forgotten the suggestion?
> >
> > rclp->len += rcu_segcblist_get_seglen(rsclp, i)
>
> I decided to keep it
On Wed, Oct 21, 2020 at 03:08:09PM -0400, Joel Fernandes (Google) wrote:
> bool rcu_segcblist_accelerate(struct rcu_segcblist *rsclp, unsigned long seq)
> {
> - int i;
> + int i, j;
>
> WARN_ON_ONCE(!rcu_segcblist_is_enabled(rsclp));
> if (rcu_segcblist_restempty(rsclp,
On Wed, Oct 21, 2020 at 03:08:09PM -0400, Joel Fernandes (Google) wrote:
> @@ -307,6 +317,7 @@ void rcu_segcblist_extract_done_cbs(struct rcu_segcblist
> *rsclp,
>
> if (!rcu_segcblist_ready_cbs(rsclp))
> return; /* Nothing to do. */
> + rclp->len =
This simplifies the usage of this API and avoids checking the kernel
config from the callers.
Suggested-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Josh Triplett
Cc: Steven Rostedt
Cc: Mathieu Desnoyers
Cc: Lai Jiangshan
Cc: Joel Fernandes
Cc: Neeraj
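A sketch of the assumed shape of the change (not the verbatim patch): the
helper folds the config check in, so callers can test the offloaded state
unconditionally.

	static inline bool rcu_segcblist_is_offloaded(struct rcu_segcblist *rsclp)
	{
		return IS_ENABLED(CONFIG_RCU_NOCB_CPU) && rsclp->offloaded;
	}

	/* Call sites no longer need their own #ifdef CONFIG_RCU_NOCB_CPU. */
	static void example_caller(struct rcu_data *rdp)
	{
		if (rcu_segcblist_is_offloaded(&rdp->cblist))
			pr_debug("callbacks are offloaded\n");
	}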
The local callback processing checks whether some callbacks need
acceleration. Keep that behaviour under nocb lock protection when
rcu_core() executes concurrently with the GP/CB kthreads.
Inspired-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Josh Triplett
Cc
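A minimal sketch of that locking rule as it would appear in rcu_core() (rdp,
rnp and flags come from the surrounding function; helper names are the ones
used by the nocb code and may differ in detail):

	/* Check for acceleration under nocb lock protection, since GP/CB
	 * kthreads may walk the same segcblist concurrently. */
	rcu_nocb_lock_irqsave(rdp, flags);
	if (!rcu_segcblist_restempty(&rdp->cblist, RCU_NEXT_READY_TAIL))
		rcu_accelerate_cbs_unlocked(rnp, rdp);
	rcu_nocb_unlock_irqrestore(rdp, flags);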
Add periodic toggling of 7 CPUs out of 8 every second in order to test
the NOCB toggle code. Choose TREE01 for that as it's already testing nocb.
Inspired-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Josh Triplett
Cc: Steven Rostedt
Cc: Mathieu Desnoyers
Cc
-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Josh Triplett
Cc: Steven Rostedt
Cc: Mathieu Desnoyers
Cc: Lai Jiangshan
Cc: Joel Fernandes
Cc: Neeraj Upadhyay
---
kernel/rcu/tree.c | 12 +---
1 file changed, 9 insertions(+), 3 deletions(-)
diff
From: "Paul E. McKenney"
Frederic Weisbecker is adding the ability to change the rcu_nocbs state
of CPUs at runtime, that is, to offload and deoffload their RCU callback
processing without the need to reboot. As the old saying goes, "if it
ain't tested, it don't work", so t
Set SEGCBLIST_SOFTIRQ_ONLY once everything is settled. After that, the
callbacks are handled locklessly and locally.
Inspired-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Josh Triplett
Cc: Steven Rostedt
Cc: Mathieu Desnoyers
Cc: Lai Jiangshan
Cc: Joel
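A sketch of what that final flag buys (assumed helper names): once
SEGCBLIST_SOFTIRQ_ONLY is set, the nocb locking helpers can skip the lock
entirely and callback handling becomes lockless and purely local again.

	/* Sketch: decide whether the nocb lock is still needed for this list. */
	static bool rcu_segcblist_need_nocb_lock(struct rcu_segcblist *rsclp)
	{
		/* SOFTIRQ_ONLY means callbacks are handled locklessly and
		 * locally: disabling IRQs is enough, no remote access left. */
		return !rcu_segcblist_test_flags(rsclp, SEGCBLIST_SOFTIRQ_ONLY);
	}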
rcu_do_batch() will be callable concurrently from softirqs and from
offloaded processing. So make sure we actually call cond_resched() only
from the offloaded context.
Inspired-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Josh Triplett
Cc: Steven Rostedt
Cc
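A sketch of that rule inside the rcu_do_batch() callback loop ("offloaded"
is the local boolean the function computes; exact placement may differ):

	if (offloaded) {
		/* Only the offloaded (kthread) context may reschedule here;
		 * the softirq path must not sleep. */
		lockdep_assert_irqs_enabled();
		cond_resched_tasks_rcu_qs();
	}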
will be to wait for all pending callbacks
to be processed before completing a CPU down operation.
Suggested-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Josh Triplett
Cc: Steven Rostedt
Cc: Mathieu Desnoyers
Cc: Lai Jiangshan
Cc: Joel Fernandes
Cc
during these intermediate
states. Some pieces there may still be necessary.
Inspired-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Josh Triplett
Cc: Steven Rostedt
Cc: Mathieu Desnoyers
Cc: Lai Jiangshan
Cc: Joel Fernandes
Cc: Neeraj Upadhyay
---
kernel/rcu
Gather the segcblist properties into a common flags map to avoid
spreading booleans across the structure. This also prepares for the
offloaded state to become mutable at runtime.
Inspired-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Josh Triplett
Cc: Steven Rostedt
Cc
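The generic pattern, sketched with illustrative names (not the actual field
and flag names): one flags word plus tiny helpers replaces scattered
booleans and makes states like "offloaded" easy to flip at runtime.

	#define SEGCBLIST_ENABLED	BIT(0)
	#define SEGCBLIST_OFFLOADED	BIT(1)

	struct segcblist_sketch {
		/* ... other fields ... */
		u8 flags;	/* replaces the 'enabled' and 'offloaded' booleans */
	};

	static inline void segcblist_set_flags(struct segcblist_sketch *p, u8 f)
	{
		p->flags |= f;
	}

	static inline void segcblist_clear_flags(struct segcblist_sketch *p, u8 f)
	{
		p->flags &= ~f;
	}

	static inline bool segcblist_test_flags(struct segcblist_sketch *p, u8 f)
	{
		return (p->flags & f) != 0;
	}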
e5cc6634810985b405baca
Thanks,
Frederic
---
Frederic Weisbecker (15):
rcu: Implement rcu_segcblist_is_offloaded() config dependent
rcu: Turn enabled/offload states into a common flag
rcu: Provide basic callback offloading state machine bits
rcu/nocb: Always init seg
Make sure the nocb timer can't fire anymore before we reach the final
de-offload state. Spuriously waking up the GP kthread is no big deal but
we must prevent the timer callback from executing without nocb locking.
Inspired-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
Cc: Paul E
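A minimal sketch, assuming the rdp->nocb_timer field used by the nocb code:
the de-offloading path cancels the deferred-wakeup timer synchronously
before the nocb locking discipline goes away, so the timer callback can
never run unlocked.

	/* De-offloading path (sketch): quiesce the deferred-wakeup timer
	 * before the final, lockless state is reached. */
	del_timer_sync(&rdp->nocb_timer);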
stop processing the callbacks locally.
Ordering must be carefully enforced so that the callbacks that used to
be processed locally without locking have their latest updates visible
by the time they get processed by the kthreads.
Inspired-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
for the state machine that will carry out all the steps needed to
enforce correctness while callbacks continue to be processed throughout.
Inspired-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Josh Triplett
Cc: Steven Rostedt
Cc: Mathieu Desnoyers
Cc: Lai Jiangshan
Cc: Joel
Make sure to handle the pending bypass queue before we switch to the
final de-offload state. We'll have to be careful and later set
SEGCBLIST_SOFTIRQ_ONLY before re-enabling IRQs, or new bypass
callbacks could be queued in the meantime.
Inspired-by: Paul E. McKenney
Signed-off-by: Frederic
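A sketch of that ordering (helper names taken from the nocb code, exact call
sequence approximate): flush the bypass and set SEGCBLIST_SOFTIRQ_ONLY within
the same IRQs-disabled region, so no new bypass callback can be queued in
between.

	raw_spin_lock_irqsave(&rdp->nocb_lock, flags);
	/* Drain whatever is pending in the bypass list... */
	WARN_ON_ONCE(!rcu_nocb_flush_bypass(rdp, NULL, jiffies));
	/* ...and only then declare the list softirq-only, before IRQs are
	 * re-enabled, so nothing can sneak into the bypass in the meantime. */
	rcu_segcblist_set_flags(&rdp->cblist, SEGCBLIST_SOFTIRQ_ONLY);
	raw_spin_unlock_irqrestore(&rdp->nocb_lock, flags);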
notify the de-offloading worker so that it can resume
the de-offloading while being sure that callbacks won't be handled
remotely anymore.
Inspired-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Josh Triplett
Cc: Steven Rostedt
Cc: Mathieu Desnoyers
Cc: Lai
must notify the de-offloading worker so that it can resume
the de-offloading while being sure that callbacks won't be handled
remotely anymore.
Inspired-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
Cc: Paul E. McKenney
Cc: Josh Triplett
Cc: Steven Rostedt
Cc: Mathieu Desnoyers
Cc
es isolation for maintaining lower latency for the listed CPUs.
> >
> > Suggested-by: Frederic Weisbecker
Ah and yes there is this tag :-p
So that's my bad, I really thought this thing was about managed IRQ.
The problem is that I can't find a single documentation about them so I'm
On Tue, Oct 20, 2020 at 03:52:45PM -0300, Marcelo Tosatti wrote:
> On Thu, Oct 15, 2020 at 01:40:53AM +0200, Frederic Weisbecker wrote:
> > Alternatively, we could rely on p->on_rq which is set to TASK_ON_RQ_QUEUED
> > at wake up time, prior to the schedule() full
On Wed, Oct 21, 2020 at 11:33:14AM -0400, j...@joelfernandes.org wrote:
> On Thu, Oct 15, 2020 at 02:21:58PM +0200, Frederic Weisbecker wrote:
> > On Wed, Oct 14, 2020 at 08:22:57PM -0400, Joel Fernandes (Google) wrote:
> > > Add counting of segment lengths of segme
On Wed, Oct 21, 2020 at 11:57:04AM -0700, Joel Fernandes wrote:
> On Mon, Oct 19, 2020 at 5:37 AM Frederic Weisbecker
> wrote:
> > Now, reading the documentation of rcu_barrier() (thanks to you!):
> >
> > Pseudo-code using rcu_barrier() is as follows:
> >
On Mon, Oct 19, 2020 at 01:11:37PM +0200, Peter Zijlstra wrote:
> > > And what are the (desired) semantics vs hotplug? Using a cpumask without
> > > excluding hotplug is racy.
> >
> > The housekeeping_mask should still remain constant, isn't?
> > In any case, I can double check this.
>
> The
On Sat, Oct 17, 2020 at 08:35:56PM -0400, j...@joelfernandes.org wrote:
> On Sat, Oct 17, 2020 at 03:29:54PM +0200, Frederic Weisbecker wrote:
> > > C rcubarrier+ctrldep
> > >
> > > (*
> > > * Result: Never
> > > *
> > > *
On Fri, Oct 16, 2020 at 11:19:41PM -0400, j...@joelfernandes.org wrote:
> On Fri, Oct 16, 2020 at 09:27:53PM -0400, j...@joelfernandes.org wrote:
> [..]
> > > > + *
> > > > + * Memory barrier is needed after adding to length for the case
> > > > + * where length transitions from 0 -> 1. This is
On Wed, Oct 14, 2020 at 08:23:01PM -0400, Joel Fernandes (Google) wrote:
> Memory barriers are needed when updating the full length of the
> segcblist, however it is not fully clearly why one is needed before and
> after. This patch therefore adds additional comments to the function
> header to
On Wed, Oct 14, 2020 at 08:22:57PM -0400, Joel Fernandes (Google) wrote:
> Add counting of segment lengths of segmented callback list.
>
> This will be useful for a number of things such as knowing how big the
> ready-to-execute segment have gotten. The immediate benefit is ability
> to trace how
On Wed, Oct 14, 2020 at 10:33:21AM +0200, Peter Zijlstra wrote:
> On Tue, Oct 13, 2020 at 02:13:28PM -0300, Marcelo Tosatti wrote:
>
> > > Yes but if the task isn't running, run_posix_cpu_timers() doesn't have
> > > anything to elapse. So indeed we can spare the IPI if the task is not
> > >
On Fri, Oct 09, 2020 at 04:01:39PM +0100, Qais Yousef wrote:
> On 09/29/20 13:44, Frederic Weisbecker wrote:
> > > that will delay the net_rx/tx softirq to process, Peter's branch
> > > maybe can slove
> > > the problem
> > > git://git.kernel.org/pu
On Wed, Sep 23, 2020 at 11:22:09AM -0400, Joel Fernandes (Google) wrote:
> +/* Return number of callbacks in a segment of the segmented callback list. */
> +static void rcu_segcblist_add_seglen(struct rcu_segcblist *rsclp, int seg,
> long v)
> +{
> +#ifdef CONFIG_RCU_NOCB_CPU
> +
d in
s/have/how
> rcu_do_batch() itself, and uses that to update the per-CPU segcb list's ->len
> field, without relying on the negativity of rcl->len.
>
> Signed-off-by: Joel Fernandes (Google)
Reviewed-by: Frederic Weisbecker
Thanks.
ordering of writes
> > > to task->cpu and task->tick_dep_mask.
> > >
> > > From: Frederic Weisbecker
> > > Suggested-by: Peter Zijlstra
> > > Signed-off-by: Frederic Weisbecker
> > > Signed-off-by: Marcelo Tosatti
> > >
> >
On Thu, Oct 08, 2020 at 05:28:44PM +0200, Peter Zijlstra wrote:
> On Thu, Oct 08, 2020 at 10:59:40AM -0400, Peter Xu wrote:
> > On Wed, Oct 07, 2020 at 03:01:52PM -0300, Marcelo Tosatti wrote:
> > > +static void tick_nohz_kick_task(struct task_struct *tsk)
> > > +{
> > > + int cpu = task_cpu(tsk);
not to invoke RCU core processing to the time when the ready callbacks
> were invoked by the rcuoc kthread. This provides further evidence that
> there is no need to invoke rcu_core() for offloaded callbacks that are
> ready to invoke.
>
> Cc: Neeraj Upadhyay
> Signed-off-by: Joel Fernandes (Google)
> Signed-off-by: Paul E. McKenney
Reviewed-by: Frederic Weisbecker
Thanks!
On Sun, Oct 04, 2020 at 03:22:09PM +, Alex Belits wrote:
>
> On Thu, 2020-10-01 at 16:44 +0200, Frederic Weisbecker wrote:
> > > @@ -268,7 +269,8 @@ static void tick_nohz_full_kick(void)
> > > */
> > > void tick_nohz_full_kick_cpu(int cpu)
> > &
On Mon, Oct 05, 2020 at 02:52:49PM -0400, Nitesh Narayan Lal wrote:
>
> On 10/4/20 7:14 PM, Frederic Weisbecker wrote:
> > On Sun, Oct 04, 2020 at 02:44:39PM +, Alex Belits wrote:
> >> On Thu, 2020-10-01 at 15:56 +0200, Frederic Weisbecker wrote:
On Mon, Oct 05, 2020 at 01:23:53PM +0200, Peter Zijlstra wrote:
> On Mon, Oct 05, 2020 at 12:49:17PM +0200, Frederic Weisbecker wrote:
> > Detect calls to schedule() between user_enter() and user_exit(). Those
> > are symptoms of early entry code that either forgot to
explicitly annotated.
Hence the exception_enter()/exception_exit() couple doesn't need to be
implemented in this case.
Signed-off-by: Frederic Weisbecker
Cc: Marcelo Tosatti
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Phil Auld
Cc: Thomas Gleixner
---
include/linux/context_tracking.h | 6 --
1
.
Signed-off-by: Frederic Weisbecker
Cc: Marcelo Tosatti
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Phil Auld
Cc: Thomas Gleixner
---
kernel/sched/core.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 2d95dc3f4644..d31a79e073e3 100644
tracking state had to be saved on the task stack
and set back to CONTEXT_KERNEL temporarily in order to safely switch to
another task.
Only a few archs use it now and those implementing
HAVE_CONTEXT_TRACKING_OFFSTACK definitely can't rely on it.
Signed-off-by: Frederic Weisbecker
Cc: Marcelo
removed and we can now get rid of these workarounds
in this architecture.
Create a Kconfig feature to express this achievement.
Signed-off-by: Frederic Weisbecker
Cc: Marcelo Tosatti
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Phil Auld
Cc: Thomas Gleixner
---
arch/Kconfig | 17
()
anymore and has therefore earned CONFIG_HAVE_CONTEXT_TRACKING_OFFSTACK.
Signed-off-by: Frederic Weisbecker
Cc: Marcelo Tosatti
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Phil Auld
Cc: Thomas Gleixner
---
arch/x86/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/Kconfig b
meeting some
requirements that at least x86 just achieved recently (I haven't checked
other archs yet).
git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
core/isolation
HEAD: d52271b6d5d02ead1916d65b013d11a7d90501b9
Thanks,
Frederic
---
Frederic Weisbecker (5
On Sun, Oct 04, 2020 at 02:44:39PM +, Alex Belits wrote:
> On Thu, 2020-10-01 at 15:56 +0200, Frederic Weisbecker wrote:
> > External Email
> >
> > ---
> > ---
> > On Wed, Jul 22, 2020 at 02
| 2 +-
> 4 files changed, 30 insertions(+), 2 deletions(-)
Acked-by: Frederic Weisbecker
Peter, if you're ok with the set, I guess this should go through
the scheduler tree?
Thanks.
On Wed, Jul 22, 2020 at 02:58:24PM +, Alex Belits wrote:
> From: Yuri Norov
>
> If CPU runs isolated task, there's no any backlog on it, and
> so we don't need to flush it.
What guarantees that we have no backlog on it?
> Currently flush_all_backlogs()
> enqueues corresponding work on all
On Wed, Jul 22, 2020 at 02:57:33PM +, Alex Belits wrote:
> From: Yuri Norov
>
> For nohz_full CPUs the desirable behavior is to receive interrupts
> generated by tick_nohz_full_kick_cpu(). But for hard isolation it's
> obviously not desirable because it breaks isolation.
>
> This patch adds
On Wed, Jul 22, 2020 at 02:49:49PM +, Alex Belits wrote:
> +/**
> + * task_isolation_kernel_enter() - clear low-level task isolation flag
> + *
> + * This should be called immediately after entering kernel.
> + */
> +static inline void task_isolation_kernel_enter(void)
> +{
> + unsigned
On Wed, Jul 22, 2020 at 02:49:49PM +, Alex Belits wrote:
> +/*
> + * Description of the last two tasks that ran isolated on a given CPU.
> + * This is intended only for messages about isolation breaking. We
> + * don't want any references to actual task while accessing this from
> + * CPU that
On Mon, Sep 28, 2020 at 06:51:48PM +0800, jun qian wrote:
> Frederic Weisbecker wrote on Fri, Sep 25, 2020 at 8:42 AM:
> >
> > On Thu, Sep 24, 2020 at 05:37:42PM +0200, Thomas Gleixner wrote:
> > > Subject: softirq; Prevent starvation of higher softirq vectors
> > [...]
> >
On Sat, Sep 26, 2020 at 12:42:25AM +0200, Thomas Gleixner wrote:
> On Fri, Sep 25 2020 at 02:42, Frederic Weisbecker wrote:
>
> > On Thu, Sep 24, 2020 at 05:37:42PM +0200, Thomas Gleixner wrote:
> >> Subject: softirq; Prevent starvation of h
On Thu, Sep 24, 2020 at 05:37:42PM +0200, Thomas Gleixner wrote:
> Subject: softirq; Prevent starvation of higher softirq vectors
[...]
> + /*
> + * Word swap pending to move the not yet handled bits of the previous
> + * run first and then clear the duplicates in the newly raised
On Fri, Sep 25, 2020 at 01:08:11AM +0200, Frederic Weisbecker wrote:
> On Thu, Sep 24, 2020 at 05:37:42PM +0200, Thomas Gleixner wrote:
> > Subject: softirq; Prevent starvation of higher softirq vectors
> > From: Thomas Gleixner
> > Date: Thu, 24 Sep 2020 10:40:24 +0200
On Thu, Sep 24, 2020 at 05:37:42PM +0200, Thomas Gleixner wrote:
> Subject: softirq; Prevent starvation of higher softirq vectors
> From: Thomas Gleixner
> Date: Thu, 24 Sep 2020 10:40:24 +0200
>
> From: Thomas Gleixner
>
> The early termination of the softirq processing loop can lead to
On Wed, Sep 23, 2020 at 02:11:23PM -0400, Nitesh Narayan Lal wrote:
> Introduce a new API hk_num_online_cpus(), that can be used to
> retrieve the number of online housekeeping CPUs that are meant to handle
> managed IRQ jobs.
>
> This API is introduced for the drivers that were previously
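For illustration, a sketch of what such a helper could look like (assumed
implementation, not the posted patch): count the online CPUs that are
housekeeping for managed IRQs.

	static unsigned int hk_num_online_cpus(void)
	{
		cpumask_var_t mask;
		unsigned int ret = 0;

		if (zalloc_cpumask_var(&mask, GFP_KERNEL)) {
			cpumask_and(mask,
				    housekeeping_cpumask(HK_FLAG_MANAGED_IRQ),
				    cpu_online_mask);
			ret = cpumask_weight(mask);
			free_cpumask_var(mask);
		}
		return ret;
	}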