On Tue, Dec 20, 2016 at 04:11:51PM +0800, Boqun Feng wrote:
> On Tue, Dec 20, 2016 at 01:59:14PM +0800, Boqun Feng wrote:
> > On Mon, Dec 19, 2016 at 09:09:13PM -0800, Paul E. McKenney wrote:
> > > On Mon, Dec 19, 2016 at 11:15:15PM +0800, Boqun Feng wrote:
> > > > On Thu, Dec 15, 2016 at 02:51:36PM +0000, Colin Ian King wrote:
> > > > > On 15/12/16 14:42, Boqun Feng wrote:
> > > > > > On Thu, Dec 15, 2016 at 12:04:59PM +0000, Mark Rutland wrote:
> > > > > >> On Thu, Dec 15, 2016 at 10:42:03AM +0800, Boqun Feng wrote:
> > > > > >>> ->qsmask of an RCU leaf node is usually more sparse than the
> > > > > >>> corresponding cpu_possible_mask. So replace the
> > > > > >>> for_each_leaf_node_possible_cpu() in force_qs_rnp() with
> > > > > >>> for_each_leaf_node_cpu() to save several checks.
> > > > > >>>
> > > > > >>> [Note we need to use "1UL << bit" instead of "1 << bit" to
> > > > > >>> generate the corresponding mask for a bit, because @mask is
> > > > > >>> unsigned long; this was spotted by Colin Ian King
> > > > > >>> <colin.k...@canonical.com> and CoverityScan in a previous
> > > > > >>> version of this patch.]
> > > > > >>
> > > > > >> Nit: This note can go now that we use leaf_node_cpu_bit(). ;)
> > > > > >>
> > > > > > 
> > > > > > I kinda kept this note here to honor Colin's effort in finding this
> > > > > > bug, but yes, it's no longer needed for the current code.
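> > > > > > For the record, the issue was of this shape (illustrative values
> > > > > > only, not the original code):
> > > > > > 
> > > > > > 	unsigned long mask;
> > > > > > 	int bit = 40;		/* a CPU bit above 31 on a 64-bit box */
> > > > > > 
> > > > > > 	mask = 1 << bit;	/* buggy: int-width shift, undefined here */
> > > > > > 	mask = 1UL << bit;	/* correct: unsigned long shift */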
> > > > > 
> > > > > Yep, remove it.
> > > > > 
> > > > 
> > > > Paul, here is a modified version of this patch; the only thing I did
> > > > was remove that note.
> > > > 
> > > > Besides, I rebased the whole series on the current rcu/dev branch of
> > > > the -rcu tree, on this very commit:
> > > > 
> > > >         8e9b2521b18a ("doc: Quick-Quiz answers are now inline")
> > > > 
> > > > And I put the latest version at
> > > > 
> > > > git://git.kernel.org/pub/scm/linux/kernel/git/boqun/linux.git leaf-node
> > > > 
> > > > If you think it's better, I can send a v3 ;-)
> > > 
> > > I would feel better about this patchset if it reduced the number of lines
> > > of code rather than increasing it.  That said, part of the increase
> > > is a comment.  Still, I am not convinced that the extra level of macro
> > > is carrying its weight.
> > > 
> > > dbf18a2422e2 ("rcu: Introduce for_each_leaf_node_cpu()")
> > > 
> > >   The commit log needs a bit of wordsmithing.
> > > 
> > >   The added WARN_ON_ONCE(!cpu_possible(cpu)) still seems strange.
> > >   What is its purpose, really?  What does its triggering tell you?
> > >   What other checks did you consider as an alternative?
> > > 
> > 
> > The check is an overcautious one; it was introduced because I'm worried
> > that some code outside RCU might mis-set ->qsmask* or ->expmask* for an
> > "impossible" CPU. I will explain in more detail later.
> > 
> > >   And if you are going to add checks of this type, should you
> > >   also check for this being a leaf rcu_node structure?
> > > 
> > 
> > I don't think I want to check that, and I don't think checking
> > cpu_possible(cpu) in the macro is similar to that.
> > 
> > > 3f0b4ba1fe94 ("rcu: Use for_each_leaf_node_cpu() in RCU stall checking")
> > > 
> > >   This does look a bit nicer, but why the added blank lines?
> > >   Are they really helping?
> > > 
> > >   The commit log seems a bit misplaced.  This code is almost never
> > >   executed (once per 21 seconds at the most), so performance really
> > >   isn't a consideration.  The simpler-looking code might be.
> > > 
> > > fd799f1ac7b7 ("rcu: Use for_each_leaf_node_cpu() in ->expmask iteration")
> > > 
> > >   Ditto on blank lines.
> > > 
> > >   Again, this code is executed per expedited grace period, so
> > >   performance really isn't a big deal.  More of a big deal than
> > >   the stall-warning code, but we still are way off of any fastpath.
> > > 
> > > 69a1baedbf42 ("rcu: Use for_each_leaf_node_cpu() in force_qs_rnp()")
> > > 
> > >   Ditto again on blank lines.
> > > 
> > >   And on the commit log.  This code is executed about once
> > >   per several jiffies, and on larger machines, per 20 jiffies
> > >   or so.  Performance really isn't a consideration.
> > > 
> > > 7b00e50e3efb ("rcu: Use for_each_leaf_node_cpu() in online CPU iteration")
> > > 
> > >   And another ditto on blank lines.
> > > 
> > >   This code executes once per CPU-hotplug operation, so again isn't
> > >   at all performance critical.
> > > 
> > > In short, if you are trying to sell this to me as a significant
> > > performance boost, I am not buying.  The added WARN_ON_ONCE() looks
> > > quite dubious,
> > 
> > Yep, it won't help performance a lot, but it
> > 
> > 1)  helps performance in theory, because it iterates over fewer CPUs;
> > 
> > 2)  makes the code cleaner.  By "cleaner", I mean we can a) afford more
> >     blank lines to separate the loops from other code and b) decrease
> >     the indentation level of those loops (see the sketch below).  But
> >     yes, I should add those points to the commit log, because they are
> >     the more visible effects.
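> > 
> > For example (an illustrative sketch, not the exact diff), a call site
> > changes roughly from:
> > 
> > 	/* Old: walk every possible CPU of the leaf node, then test its bit. */
> > 	for_each_leaf_node_possible_cpu(rnp, cpu)
> > 		if (rnp->qsmask & leaf_node_cpu_bit(rnp, cpu)) {
> > 			/* ... process this CPU ... */
> > 		}
> > 
> > to:
> > 
> > 	/* New: the mask test is folded into the iteration itself. */
> > 	for_each_leaf_node_cpu(rnp, rnp->qsmask, cpu) {
> > 		/* ... process this CPU ... */
> > 	}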
> > 
> > > though perhaps I am misunderstanding its purpose.  My assumption is
> > > that you want to detect missing UL suffixes on bitmask constants, in
> > > which case I bet there is a better way.
> > > 
> > 
> > The WARN_ON_ONCE() is not for detecting missing UL suffixes on bitmask
> > constants, and we don't need to check for that, because we use
> > leaf_node_cpu_bit() now. As I said, this is an overcautious check, and we
> > can drop it if we guarantee that every CPU masked in ->qsmask* and
> > ->expmask* is a "possible" CPU, IOW, that ->qsmask* and ->expmask* are
> > subsets (with the offset fixed by ->grplo) of cpu_possible_mask.
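> > 
> > Roughly, the macro has the following shape (a sketch only, see the patch
> > itself for the exact definition), iterating over just the CPUs whose bits
> > are set in @mask; the WARN_ON_ONCE(!cpu_possible(cpu)) check above is
> > folded into the loop in the real patch:
> > 
> > 	#define for_each_leaf_node_cpu(rnp, mask, cpu)			  \
> > 		for ((cpu) = (rnp)->grplo +				  \
> > 			     find_first_bit(&(mask), BITS_PER_LONG);	  \
> > 		     (cpu) <= (rnp)->grphi;				  \
> > 		     (cpu) = (rnp)->grplo +				  \
> > 			     find_next_bit(&(mask), BITS_PER_LONG,	  \
> > 					   (cpu) - (rnp)->grplo + 1))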
> > 
> > Hmm.. and I just checked the code: the initial values of ->qsmask* and
> > ->expmask* come from ->qsmaskinitnext and ->expmaskinitnext, and the
> > latter two have been set in rcu_cpu_starting() since commit
> > 
> >     7ec99de36f40 ("rcu: Provide exact CPU-online tracking for RCU")
> > 
> > and rcu_cpu_starting() only sets the bit corresponding to _this_ CPU in
> > a leaf node's ->qsmaskinitnext and ->expmaskinitnext. So it looks like we
> > are safe to remove the WARN_ON_ONCE() check, because an ever-running CPU
> > must be a possible CPU, IIRC.
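> > 
> > (For reference, the relevant part of rcu_cpu_starting() is roughly of
> > this shape; paraphrased, not the verbatim code:)
> > 
> > 	void rcu_cpu_starting(unsigned int cpu)	/* @cpu is never used */
> > 	{
> > 		unsigned long flags;
> > 		unsigned long mask;
> > 		struct rcu_data *rdp;
> > 		struct rcu_node *rnp;
> > 		struct rcu_state *rsp;
> > 
> > 		for_each_rcu_flavor(rsp) {
> > 			rdp = this_cpu_ptr(rsp->rda);	/* always _this_ CPU */
> > 			rnp = rdp->mynode;		/* its leaf node */
> > 			mask = rdp->grpmask;		/* its bit in that node */
> > 			raw_spin_lock_irqsave_rcu_node(rnp, flags);
> > 			rnp->qsmaskinitnext |= mask;
> > 			rnp->expmaskinitnext |= mask;
> > 			raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
> > 		}
> > 	}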
> > 
> > But this brings up a side question: is the callsite of rcu_cpu_starting()
> > correct, given that rcu_cpu_starting() ignores the @cpu parameter and only
> 
> By "callsite", I mean we call rcu_cpu_starting() in a
> for_each_online_cpu() loop. And that doesn't seem making sense to me,
> because rcu_cpu_starting() doesn't use its parameter @cpu. So I made the
> following untested patch to fix this.
> 
> Thoughts?

This would be a legitimate approach, except that the fast-boot guys
give me some reason for concern.  See my earlier patch substituting
this_cpu_ptr() for per_cpu_ptr().

Coming back to your original patch series, if the check in
for_each_leaf_node_cpu() is removed, the added blank lines are removed,
and we have some heavy-duty validation in place, I am inclined to accept
it.  I am more worried about validation than I might be in other cases,
partly because the main effect of this patch series is aesthetic and
partly because of the missing-UL issue in the first submission.

                                                        Thanx, Paul

> > sets _this_ CPU's bit in a leaf node?
> > 
> 
> Regards,
> Boqun
> 
> -------------------------------->8
> From: Boqun Feng <boqun.f...@gmail.com>
> Date: Tue, 20 Dec 2016 15:10:57 +0800
> Subject: [PATCH] rcu: Rename rcu_cpu_starting() to rcu_this_cpu_starting()
> 
> rcu_cpu_starting() was introduced in commit:
> 
>       7ec99de36f40 ("rcu: Provide exact CPU-online tracking for RCU")
> 
> Its purpose is to inform the RCU core that _this_ CPU is coming online,
> and it is implemented accordingly, which makes its @cpu parameter
> useless.
> 
> It's better to remove the unnecessary parameter and rename the function
> to rcu_this_cpu_starting(), which better fits its functionality.
> Besides, in rcu_init(), we currently loop over all online CPUs but keep
> telling the RCU core that the boot CPU is online, so we'd better pull
> the notification out of the loop.
> 
> Signed-off-by: Boqun Feng <boqun.f...@gmail.com>
> ---
>  include/linux/rcupdate.h |  2 +-
>  kernel/cpu.c             |  2 +-
>  kernel/rcu/tree.c        | 17 ++++++++---------
>  3 files changed, 10 insertions(+), 11 deletions(-)
> 
> diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> index 813074714a95..f23c9dafbda9 100644
> --- a/include/linux/rcupdate.h
> +++ b/include/linux/rcupdate.h
> @@ -335,7 +335,7 @@ void rcu_sched_qs(void);
>  void rcu_bh_qs(void);
>  void rcu_check_callbacks(int user);
>  void rcu_report_dead(unsigned int cpu);
> -void rcu_cpu_starting(unsigned int cpu);
> +void rcu_this_cpu_starting(void);
> 
>  #ifndef CONFIG_TINY_RCU
>  void rcu_end_inkernel_boot(void);
> diff --git a/kernel/cpu.c b/kernel/cpu.c
> index 5df20d6d1520..63778ed6b598 100644
> --- a/kernel/cpu.c
> +++ b/kernel/cpu.c
> @@ -966,7 +966,7 @@ void notify_cpu_starting(unsigned int cpu)
>       struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
>       enum cpuhp_state target = min((int)st->target, CPUHP_AP_ONLINE);
> 
> -     rcu_cpu_starting(cpu);  /* Enables RCU usage on this CPU. */
> +     rcu_this_cpu_starting();        /* Enables RCU usage on this CPU. */
>       while (st->state < target) {
>               st->state++;
>               cpuhp_invoke_callback(cpu, st->state, true, NULL);
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index b9d3c0e30935..c5862aef7e21 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -4002,13 +4002,13 @@ int rcutree_dead_cpu(unsigned int cpu)
>  }
> 
>  /*
> - * Mark the specified CPU as being online so that subsequent grace periods
> - * (both expedited and normal) will wait on it.  Note that this means that
> - * incoming CPUs are not allowed to use RCU read-side critical sections
> - * until this function is called.  Failing to observe this restriction
> - * will result in lockdep splats.
> + * Mark this CPU (the one currently running this function) as being online
> + * so that subsequent grace periods (both expedited and normal) will wait on
> + * it.  Note that this means that incoming CPUs are not allowed to use RCU
> + * read-side critical sections until this function is called.  Failing to
> + * observe this restriction will result in lockdep splats.
>   */
> -void rcu_cpu_starting(unsigned int cpu)
> +void rcu_this_cpu_starting(void)
>  {
>       unsigned long flags;
>       unsigned long mask;
> @@ -4376,10 +4376,9 @@ void __init rcu_init(void)
>        * or the scheduler are operational.
>        */
>       pm_notifier(rcu_pm_notify, 0);
> -     for_each_online_cpu(cpu) {
> +     for_each_online_cpu(cpu)
>               rcutree_prepare_cpu(cpu);
> -             rcu_cpu_starting(cpu);
> -     }
> +     rcu_this_cpu_starting(); /* Start RCU on the booting CPU */
>  }
> 
>  #include "tree_exp.h"
> -- 
> 2.10.2
> 
