On Wed, May 27, 2020 at 12:39:14PM -0700, Paul E. McKenney wrote:
> On Wed, May 27, 2020 at 07:12:36PM +0200, Peter Zijlstra wrote:
> > On Wed, May 27, 2020 at 06:35:43PM +0200, Peter Zijlstra wrote:
> > > Right, I went through them, didn't find anything obvious amiss. OK, let
> > > me do a nicer patch.
> >
> > something like so then?
> >
> > ---
> > Subject: rcu: Allow for smp_call_function() running callbacks from idle
> >
> > RCU currently relies on smp_call_function() callbacks running from
> > interrupt context. A pending optimization is going to break that: it
> > will allow idle CPUs to run the callbacks from the idle loop. This
> > avoids raising the IPI on the requesting CPU and avoids handling an
> > exception on the receiving CPU.
> >
> > Change rcu_is_cpu_rrupt_from_idle() to also accept task context,
> > provided it is the idle task.
> >
> > Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
>
> Looks good to me!
>
> Reviewed-by: Paul E. McKenney <paul...@kernel.org>
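
To make the new context rule concrete, here is a stand-alone user-space
model of the check the patch introduces. Everything in it (struct ctx,
model_rrupt_from_idle()) is invented for illustration; the real code reads
the per-CPU rcu_data counters and runs in the kernel:

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Toy stand-ins for the per-CPU rcu_data counters and current task. */
struct ctx {
	long nmi_nesting;	/* models rcu_data.dynticks_nmi_nesting */
	long dyn_nesting;	/* models rcu_data.dynticks_nesting */
	bool is_idle_task;	/* models is_idle_task(current) */
};

static bool model_rrupt_from_idle(const struct ctx *c)
{
	/* Deeper than a first-level interrupt: not "from idle". */
	if (c->nmi_nesting > 1)
		return false;

	/*
	 * nmi_nesting == 0 means no interrupt at all; per the patch,
	 * only the idle task may call this directly.
	 */
	assert(c->nmi_nesting != 0 || c->is_idle_task);

	/* Does the CPU look idle from RCU's standpoint? */
	return c->dyn_nesting == 0;
}

int main(void)
{
	/* Tick interrupt landing on an idle CPU: true, as before. */
	struct ctx tick = { .nmi_nesting = 1, .dyn_nesting = 0 };

	/* Direct call from the idle task (no IPI): the new case, also true. */
	struct ctx direct = { .nmi_nesting = 0, .dyn_nesting = 0,
			      .is_idle_task = true };

	/* Nested interrupt: never "from idle". */
	struct ctx nested = { .nmi_nesting = 2, .dyn_nesting = 0 };

	printf("tick:   %d\n", model_rrupt_from_idle(&tick));
	printf("direct: %d\n", model_rrupt_from_idle(&direct));
	printf("nested: %d\n", model_rrupt_from_idle(&nested));
	return 0;
}

Note that nesting == 1 (first-level interrupt) and nesting == 0 (direct
call) both pass the first test; the model's assert() stands in for the
patch's softer WARN_ON_ONCE(), which reports an unexpected non-idle caller
instead of crashing the box.
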
Reviewed-by: Joel Fernandes (Google) <j...@joelfernandes.org>

thanks,

 - Joel

>
> > ---
> >  kernel/rcu/tree.c   | 25 +++++++++++++++++++------
> >  kernel/sched/idle.c |  4 ++++
> >  2 files changed, 23 insertions(+), 6 deletions(-)
> >
> > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > index d8e9dbbefcfa..c716eadc7617 100644
> > --- a/kernel/rcu/tree.c
> > +++ b/kernel/rcu/tree.c
> > @@ -418,16 +418,23 @@ void rcu_momentary_dyntick_idle(void)
> >  EXPORT_SYMBOL_GPL(rcu_momentary_dyntick_idle);
> >  
> >  /**
> > - * rcu_is_cpu_rrupt_from_idle - see if interrupted from idle
> > + * rcu_is_cpu_rrupt_from_idle - see if 'interrupted' from idle
> >   *
> >   * If the current CPU is idle and running at a first-level (not nested)
> > - * interrupt from idle, return true. The caller must have at least
> > - * disabled preemption.
> > + * interrupt, or directly, from idle, return true.
> > + *
> > + * The caller must have at least disabled IRQs.
> >   */
> >  static int rcu_is_cpu_rrupt_from_idle(void)
> >  {
> > -	/* Called only from within the scheduling-clock interrupt */
> > -	lockdep_assert_in_irq();
> > +	long nesting;
> > +
> > +	/*
> > +	 * Usually called from the tick; but also used from smp_call_function()
> > +	 * for expedited grace periods. This latter can result in running from
> > +	 * the idle task, instead of an actual IPI.
> > +	 */
> > +	lockdep_assert_irqs_disabled();
> >
> >  	/* Check for counter underflows */
> >  	RCU_LOCKDEP_WARN(__this_cpu_read(rcu_data.dynticks_nesting) < 0,
> > @@ -436,9 +443,15 @@ static int rcu_is_cpu_rrupt_from_idle(void)
> >  			 "RCU dynticks_nmi_nesting counter underflow/zero!");
> >
> >  	/* Are we at first interrupt nesting level? */
> > -	if (__this_cpu_read(rcu_data.dynticks_nmi_nesting) != 1)
> > +	nesting = __this_cpu_read(rcu_data.dynticks_nmi_nesting);
> > +	if (nesting > 1)
> >  		return false;
> >
> > +	/*
> > +	 * If we're not in an interrupt, we must be in the idle task!
> > +	 */
> > +	WARN_ON_ONCE(!nesting && !is_idle_task(current));
> > +
> >  	/* Does CPU appear to be idle from an RCU standpoint? */
> >  	return __this_cpu_read(rcu_data.dynticks_nesting) == 0;
> >  }
> > diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
> > index e9cef84c2b70..05deb81bb3e3 100644
> > --- a/kernel/sched/idle.c
> > +++ b/kernel/sched/idle.c
> > @@ -289,6 +289,10 @@ static void do_idle(void)
> >  	 */
> >  	smp_mb__after_atomic();
> >
> > +	/*
> > +	 * RCU relies on this call to be done outside of an RCU read-side
> > +	 * critical section.
> > +	 */
> >  	flush_smp_call_function_from_idle();
> >
> >  	schedule_idle();
> >
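
For completeness, a similarly hypothetical sketch of what the pending
optimization does to the call path: rather than raising an IPI at an idle
CPU, the requester queues the callback and the idle task flushes it
directly on its way into schedule_idle(), which is exactly why the
callback now runs in idle-task rather than interrupt context. All names
below (queue_call_on_idle_cpu(), flush_calls_from_idle()) are invented;
the real per-CPU queue lives in kernel/smp.c:

#include <stdio.h>

typedef void (*smp_call_func_t)(void *info);

/* A one-slot stand-in for the per-CPU pending-call queue. */
static smp_call_func_t pending_func;
static void *pending_info;

static void queue_call_on_idle_cpu(smp_call_func_t func, void *info)
{
	/*
	 * The real code would IPI a busy CPU; for an idle CPU the
	 * optimization just queues the work and skips the IPI.
	 */
	pending_func = func;
	pending_info = info;
}

static void flush_calls_from_idle(void)
{
	/* Runs in the idle *task*, not in an interrupt handler. */
	if (pending_func) {
		pending_func(pending_info);
		pending_func = NULL;
	}
}

static void expedited_gp_func(void *info)
{
	printf("callback ran in: %s\n", (const char *)info);
}

int main(void)
{
	queue_call_on_idle_cpu(expedited_gp_func, "idle task context");

	/* Skeleton of do_idle(): flush before scheduling away. */
	flush_calls_from_idle();
	return 0;
}

This also motivates the do_idle() comment in the hunk above: the flush
must sit outside any RCU read-side critical section, since a flushed
expedited-grace-period callback may need to report a quiescent state.
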