On Fri, Aug 02, 2019 at 08:14:49AM -0700, Paul E. McKenney wrote:
> Use of the rcu_data structure's segmented ->cblist for no-CBs CPUs
> takes advantage of unrelated grace periods, thus reducing the memory
> footprint in the face of floods of call_rcu() invocations.  However,
> the ->cblist field is a more-complex rcu_segcblist structure which must
> be protected via locking.  Even though there are only three entities
> which can acquire this lock (the CPU invoking call_rcu(), the no-CBs
> grace-period kthread, and the no-CBs callbacks kthread), the contention
> on this lock is excessive under heavy stress.
> 
> This commit therefore greatly reduces contention by provisioning
> an rcu_cblist structure field named ->nocb_bypass within the
> rcu_data structure.  Each no-CBs CPU is permitted only a limited
> number of enqueues onto the ->cblist per jiffy, controlled by a new
> nocb_nobypass_lim_per_jiffy kernel boot parameter that defaults to
> about 16 enqueues per millisecond (16 * 1000 / HZ).  When that limit is
> exceeded, the CPU instead enqueues onto the new ->nocb_bypass.

Looks quite interesting. I am guessing the regular (non-no-CBs) enqueues
don't need the same technique because both enqueueing and callback
execution happen on the same CPU.
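
To check my understanding of the rate limit in the changelog, here is a
toy userspace sketch of what I believe the per-jiffy decision looks like.
The names mirror the patch (->nocb_nobypass_last, ->nocb_nobypass_count,
nocb_nobypass_lim_per_jiffy), but the logic is only my reading of the
changelog, not the actual kernel code:

	#include <stdbool.h>
	#include <stdio.h>

	#define HZ 250	/* assumed config; the limit scales with it */
	static const int nocb_nobypass_lim_per_jiffy = 16 * 1000 / HZ;

	struct rdp_sketch {
		unsigned long nocb_nobypass_last; /* jiffy of last direct enqueue */
		int nocb_nobypass_count;	  /* direct enqueues in that jiffy */
	};

	/* Return true if this enqueue should be diverted to ->nocb_bypass. */
	static bool should_bypass(struct rdp_sketch *rdp, unsigned long j)
	{
		if (j != rdp->nocb_nobypass_last) {
			rdp->nocb_nobypass_last = j;	/* new jiffy, reset count */
			rdp->nocb_nobypass_count = 0;
		}
		return ++rdp->nocb_nobypass_count > nocb_nobypass_lim_per_jiffy;
	}

	int main(void)
	{
		struct rdp_sketch rdp = { 0, 0 };
		int i, diverted = 0;

		for (i = 0; i < 100; i++)	/* 100 enqueues within one jiffy */
			diverted += should_bypass(&rdp, 1);
		printf("diverted %d of 100 enqueues to bypass\n", diverted);
		return 0;
	}

With HZ=250, the limit works out to 64 direct enqueues per 4ms jiffy,
which matches the "about 16 enqueues per millisecond" figure above, and
the run diverts 36 of the 100 enqueues to the bypass list.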

Still looking through the patch, but I understood the basic idea. Some nits below:

[snip]
> diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
> index 2c3e9068671c..e4df86db8137 100644
> --- a/kernel/rcu/tree.h
> +++ b/kernel/rcu/tree.h
> @@ -200,18 +200,26 @@ struct rcu_data {
>       atomic_t nocb_lock_contended;   /* Contention experienced. */
>       int nocb_defer_wakeup;          /* Defer wakeup of nocb_kthread. */
>       struct timer_list nocb_timer;   /* Enforce finite deferral. */
> +     unsigned long nocb_gp_adv_time; /* Last call_rcu() CB adv (jiffies). */
> +
> +     /* The following fields are used by call_rcu, hence own cacheline. */
> +     raw_spinlock_t nocb_bypass_lock ____cacheline_internodealigned_in_smp;
> +     struct rcu_cblist nocb_bypass;  /* Lock-contention-bypass CB list. */
> +     unsigned long nocb_bypass_first; /* Time (jiffies) of first enqueue. */
> +     unsigned long nocb_nobypass_last; /* Last ->cblist enqueue (jiffies). */
> +     int nocb_nobypass_count;        /* # ->cblist enqueues at ^^^ time. */

Can these fields and the ones below be ifdef'd out when
!CONFIG_RCU_NOCB_CPU, so as to keep the struct smaller for the benefit of
systems that don't use no-CBs CPUs?
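
i.e., something like the below (sketch only, fields copied from the hunk
above; if they already sit inside the existing #ifdef CONFIG_RCU_NOCB_CPU
block that the #endif further down suggests, please ignore):

	#ifdef CONFIG_RCU_NOCB_CPU
		raw_spinlock_t nocb_bypass_lock ____cacheline_internodealigned_in_smp;
		struct rcu_cblist nocb_bypass;	/* Lock-contention-bypass CB list. */
		unsigned long nocb_bypass_first; /* Time (jiffies) of first enqueue. */
		unsigned long nocb_nobypass_last; /* Last ->cblist enqueue (jiffies). */
		int nocb_nobypass_count;	/* # ->cblist enqueues at ^^^ time. */
	#endif /* #ifdef CONFIG_RCU_NOCB_CPU */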


>  
>       /* The following fields are used by GP kthread, hence own cacheline. */
>       raw_spinlock_t nocb_gp_lock ____cacheline_internodealigned_in_smp;
> -     bool nocb_gp_sleep;
> -                                     /* Is the nocb GP thread asleep? */
> +     struct timer_list nocb_bypass_timer; /* Force nocb_bypass flush. */
> +     bool nocb_gp_sleep;             /* Is the nocb GP thread asleep? */

And these too, I think.


>       struct swait_queue_head nocb_gp_wq; /* For nocb kthreads to sleep on. */
>       bool nocb_cb_sleep;             /* Is the nocb CB thread asleep? */
>       struct task_struct *nocb_cb_kthread;
>       struct rcu_data *nocb_next_cb_rdp;
>                                       /* Next rcu_data in wakeup chain. */
>  
> -     /* The following fields are used by CB kthread, hence new cachline. */
> +     /* The following fields are used by CB kthread, hence new cacheline. */
>       struct rcu_data *nocb_gp_rdp ____cacheline_internodealigned_in_smp;
>                                       /* GP rdp takes GP-end wakeups. */
>  #endif /* #ifdef CONFIG_RCU_NOCB_CPU */
[snip]
> +static void rcu_nocb_try_flush_bypass(struct rcu_data *rdp, unsigned long j)
> +{
> +     rcu_lockdep_assert_cblist_protected(rdp);
> +     if (!rcu_segcblist_is_offloaded(&rdp->cblist) ||
> +         !rcu_nocb_bypass_trylock(rdp))
> +             return;
> +     WARN_ON_ONCE(!rcu_nocb_do_flush_bypass(rdp, NULL, j));
> +}
> +
> +/*
> + * See whether it is appropriate to use the ->nocb_bypass list in order
> + * to control contention on ->nocb_lock.  A limited number of direct
> + * enqueues are permitted into ->cblist per jiffy.  If ->nocb_bypass
> + * is non-empty, further callbacks must be placed into ->nocb_bypass,
> + * otherwise rcu_barrier() breaks.  Use rcu_nocb_flush_bypass() to switch
> + * back to direct use of ->cblist.  However, ->nocb_bypass should not be
> + * used if ->cblist is empty, because otherwise callbacks can be stranded
> + * on ->nocb_bypass because we cannot count on the current CPU ever again
> + * invoking call_rcu().  The general rule is that if ->nocb_bypass is
> + * non-empty, the corresponding no-CBs grace-period kthread must not be
> + * in an indefinite sleep state.
> + *
> + * Finally, it is not permitted to use the bypass during early boot,
> + * as doing so would confuse the auto-initialization code.  Besides
> + * which, there is no point in worrying about lock contention while
> + * there is only one CPU in operation.
> + */
> +static bool rcu_nocb_try_bypass(struct rcu_data *rdp, struct rcu_head *rhp,
> +                             bool *was_alldone, unsigned long flags)
> +{
> +     unsigned long c;
> +     unsigned long cur_gp_seq;
> +     unsigned long j = jiffies;
> +     long ncbs = rcu_cblist_n_cbs(&rdp->nocb_bypass);
> +
> +     if (!rcu_segcblist_is_offloaded(&rdp->cblist)) {
> +             *was_alldone = !rcu_segcblist_pend_cbs(&rdp->cblist);
> +             return false; /* Not offloaded, no bypassing. */
> +     }
> +     lockdep_assert_irqs_disabled();
> +
> +     // Don't use ->nocb_bypass during early boot.

Very minor nit: the comment style should be /* */ rather than //.
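i.e., the line above would become:

	/* Don't use ->nocb_bypass during early boot. */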

thanks,

 - Joel

[snip]
