On Wed, Oct 14, 2020 at 08:22:57PM -0400, Joel Fernandes (Google) wrote:
> Add counting of segment lengths of segmented callback list.
> 
> This will be useful for a number of things, such as knowing how big the
> ready-to-execute segment has gotten. The immediate benefit is the ability
> to trace how the callbacks in the segmented callback list change.
> 
> Also, this patch removes hacks related to using the donecbs ->len field as a
> temporary variable to save the segmented callback list's length. This can no
> longer be done and is not needed.
> 
> Signed-off-by: Joel Fernandes (Google) <[email protected]>
> ---
>  include/linux/rcu_segcblist.h |   2 +
>  kernel/rcu/rcu_segcblist.c    | 133 +++++++++++++++++++++++-----------
>  kernel/rcu/rcu_segcblist.h    |   2 -
>  3 files changed, 92 insertions(+), 45 deletions(-)
> 
> diff --git a/include/linux/rcu_segcblist.h b/include/linux/rcu_segcblist.h
> index b36afe7b22c9..d462ae5e340a 100644
> --- a/include/linux/rcu_segcblist.h
> +++ b/include/linux/rcu_segcblist.h
> @@ -69,8 +69,10 @@ struct rcu_segcblist {
>       unsigned long gp_seq[RCU_CBLIST_NSEGS];
>  #ifdef CONFIG_RCU_NOCB_CPU
>       atomic_long_t len;
> +     atomic_long_t seglen[RCU_CBLIST_NSEGS];

Also does it really need to be atomic?

> @@ -245,7 +280,7 @@ void rcu_segcblist_enqueue(struct rcu_segcblist *rsclp,
>                          struct rcu_head *rhp)
>  {
>       rcu_segcblist_inc_len(rsclp);
> -     smp_mb(); /* Ensure counts are updated before callback is enqueued. */

That's a significant change that shouldn't be hidden, unexplained, in an
unrelated patch, or it may easily be missed. I'd suggest moving this line into
"rcu/tree: Remove redundant smp_mb() in rcu_do_batch" (with the title updated
accordingly), and maybe putting that patch at the beginning of the series?

> +     rcu_segcblist_inc_seglen(rsclp, RCU_NEXT_TAIL);
>       rhp->next = NULL;
>       WRITE_ONCE(*rsclp->tails[RCU_NEXT_TAIL], rhp);
>       WRITE_ONCE(rsclp->tails[RCU_NEXT_TAIL], &rhp->next);
[...]
> @@ -330,11 +353,16 @@ void rcu_segcblist_extract_pend_cbs(struct rcu_segcblist *rsclp,
>  
>       if (!rcu_segcblist_pend_cbs(rsclp))
>               return; /* Nothing to do. */
> +     rclp->len = rcu_segcblist_get_seglen(rsclp, RCU_WAIT_TAIL) +
> +                 rcu_segcblist_get_seglen(rsclp, RCU_NEXT_READY_TAIL) +
> +                 rcu_segcblist_get_seglen(rsclp, RCU_NEXT_TAIL);
>       *rclp->tail = *rsclp->tails[RCU_DONE_TAIL];
>       rclp->tail = rsclp->tails[RCU_NEXT_TAIL];
>       WRITE_ONCE(*rsclp->tails[RCU_DONE_TAIL], NULL);
> -     for (i = RCU_DONE_TAIL + 1; i < RCU_CBLIST_NSEGS; i++)
> +     for (i = RCU_DONE_TAIL + 1; i < RCU_CBLIST_NSEGS; i++) {
>               WRITE_ONCE(rsclp->tails[i], rsclp->tails[RCU_DONE_TAIL]);
> +             rcu_segcblist_set_seglen(rsclp, i, 0);
> +     }

That's probably just a matter of personal preference, so feel free to
ignore, but I'd rather do:

    rclp->len += rcu_segcblist_get_seglen(rsclp, i);
    rcu_segcblist_set_seglen(rsclp, i, 0);

instead of the big addition above. That way, if a new index ever gets added to
or renamed in the segcblist, we'll take it into account. It also spares a few
lines.

Thanks.
