On Mon, Jun 10, 2024 at 02:05:30AM -0300, Leonardo Bras wrote:
> On Tue, Jun 04, 2024 at 03:23:52PM -0700, Paul E. McKenney wrote:
> > If a CPU is running either a userspace application or a guest OS in
> > nohz_full mode, it is possible for a system call to occur just as an
> > RCU grace period is starting.  If that CPU also has the scheduling-clock
> > tick enabled for any reason (such as a second runnable task), and if the
> > system was booted with rcutree.use_softirq=0, then RCU can add insult to
> > injury by awakening that CPU's rcuc kthread, resulting in yet another
> > task and yet more OS jitter due to switching to that task, running it,
> > and switching back.
> > 
> > In addition, in the common case where that system call is not of
> > excessively long duration, awakening the rcuc task is pointless.
> > This pointlessness is due to the fact that the CPU will enter an extended
> > quiescent state upon returning to the userspace application or guest OS.
> > In this case, the rcuc kthread cannot do anything that the main RCU
> > grace-period kthread cannot do on its behalf, at least if it is given
> > a few additional milliseconds (for example, given the time duration
> > specified by rcutree.jiffies_till_first_fqs, give or take scheduling
> > delays).
> > 
> > This commit therefore adds a rcutree.nocb_patience_delay kernel boot
> > parameter that specifies the grace period age (in milliseconds)
> > before which RCU will refrain from awakening the rcuc kthread.
> > Preliminary experiementation suggests a value of 1000, that is,
> 
> Just a nit I found when cherry-picking here:
> s/experiementation/experimentation/

Good eyes!  I will fix this on my next rebase, thank you!

                                                        Thanx, Paul

> Thanks!
> Leo
> 
> > one second.  Increasing rcutree.nocb_patience_delay will increase
> > grace-period latency and in turn increase memory footprint, so systems
> > with constrained memory might choose a smaller value.  Systems with
> > less-aggressive OS-jitter requirements might choose the default value
> > of zero, which keeps the traditional immediate-wakeup behavior, thus
> > avoiding increases in grace-period latency.
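
For example (the CPU list here is chosen arbitrarily for illustration), a
system that runs a nohz_full workload on CPUs 2-7 and was booted with
rcutree.use_softirq=0 might add the new parameter alongside the existing
isolation options on the kernel command line:

	nohz_full=2-7 rcu_nocbs=2-7 rcutree.use_softirq=0 rcutree.nocb_patience_delay=1000

With that setting, a short system call on one of those CPUs while a grace
period is less than one second old returns to userspace or the guest without
RCU awakening that CPU's rcuc kthread.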
> > 
> > [ paulmck: Apply Leonardo Bras feedback.  ]
> > 
> > Link: https://lore.kernel.org/all/[email protected]/
> > 
> > Reported-by: Leonardo Bras <[email protected]>
> > Suggested-by: Leonardo Bras <[email protected]>
> > Suggested-by: Sean Christopherson <[email protected]>
> > Signed-off-by: Paul E. McKenney <[email protected]>
> > Reviewed-by: Leonardo Bras <[email protected]>
> > ---
> >  Documentation/admin-guide/kernel-parameters.txt |  8 ++++++++
> >  kernel/rcu/tree.c                               | 10 ++++++++--
> >  kernel/rcu/tree_plugin.h                        | 10 ++++++++++
> >  3 files changed, 26 insertions(+), 2 deletions(-)
> > 
> > diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> > index 500cfa7762257..2d4a512cf1fc6 100644
> > --- a/Documentation/admin-guide/kernel-parameters.txt
> > +++ b/Documentation/admin-guide/kernel-parameters.txt
> > @@ -5018,6 +5018,14 @@
> >                     the ->nocb_bypass queue.  The definition of "too
> >                     many" is supplied by this kernel boot parameter.
> >  
> > +   rcutree.nocb_patience_delay= [KNL]
> > +                   On callback-offloaded (rcu_nocbs) CPUs, avoid
> > +                   disturbing RCU unless the grace period has
> > +                   reached the specified age in milliseconds.
> > +                   Defaults to zero.  Large values will be capped
> > +                   at five seconds.  All values will be rounded down
> > +                   to the nearest value representable by jiffies.
> > +
> >     rcutree.qhimark= [KNL]
> >                     Set threshold of queued RCU callbacks beyond which
> >                     batch limiting is disabled.
> > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > index 35bf4a3736765..408b020c9501f 100644
> > --- a/kernel/rcu/tree.c
> > +++ b/kernel/rcu/tree.c
> > @@ -176,6 +176,9 @@ static int gp_init_delay;
> >  module_param(gp_init_delay, int, 0444);
> >  static int gp_cleanup_delay;
> >  module_param(gp_cleanup_delay, int, 0444);
> > +static int nocb_patience_delay;
> > +module_param(nocb_patience_delay, int, 0444);
> > +static int nocb_patience_delay_jiffies;
> >  
> >  // Add delay to rcu_read_unlock() for strict grace periods.
> >  static int rcu_unlock_delay;
> > @@ -4344,11 +4347,14 @@ static int rcu_pending(int user)
> >             return 1;
> >  
> >     /* Is this a nohz_full CPU in userspace or idle?  (Ignore RCU if so.) */
> > -   if ((user || rcu_is_cpu_rrupt_from_idle()) && rcu_nohz_full_cpu())
> > +   gp_in_progress = rcu_gp_in_progress();
> > +   if ((user || rcu_is_cpu_rrupt_from_idle() ||
> > +        (gp_in_progress &&
> > +         time_before(jiffies, READ_ONCE(rcu_state.gp_start) + nocb_patience_delay_jiffies))) &&
> > +       rcu_nohz_full_cpu())
> >             return 0;
> >  
> >     /* Is the RCU core waiting for a quiescent state from this CPU? */
> > -   gp_in_progress = rcu_gp_in_progress();
> >     if (rdp->core_needs_qs && !rdp->cpu_no_qs.b.norm && gp_in_progress)
> >             return 1;
> >  
> > diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> > index 340bbefe5f652..31c539f09c150 100644
> > --- a/kernel/rcu/tree_plugin.h
> > +++ b/kernel/rcu/tree_plugin.h
> > @@ -93,6 +93,16 @@ static void __init rcu_bootup_announce_oddness(void)
> >             pr_info("\tRCU debug GP init slowdown %d jiffies.\n", gp_init_delay);
> >     if (gp_cleanup_delay)
> >             pr_info("\tRCU debug GP cleanup slowdown %d jiffies.\n", gp_cleanup_delay);
> > +   if (nocb_patience_delay < 0) {
> > +           pr_info("\tRCU NOCB CPU patience negative (%d), resetting to zero.\n", nocb_patience_delay);
> > +           nocb_patience_delay = 0;
> > +   } else if (nocb_patience_delay > 5 * MSEC_PER_SEC) {
> > +           pr_info("\tRCU NOCB CPU patience too large (%d), resetting to %ld.\n", nocb_patience_delay, 5 * MSEC_PER_SEC);
> > +           nocb_patience_delay = 5 * MSEC_PER_SEC;
> > +   } else if (nocb_patience_delay) {
> > +           pr_info("\tRCU NOCB CPU patience set to %d milliseconds.\n", nocb_patience_delay);
> > +   }
> > +   nocb_patience_delay_jiffies = msecs_to_jiffies(nocb_patience_delay);
> >     if (!use_softirq)
> >             pr_info("\tRCU_SOFTIRQ processing moved to rcuc kthreads.\n");
> >     if (IS_ENABLED(CONFIG_RCU_EQS_DEBUG))
> > -- 
> > 2.40.1
> > 
> 
