On Mon, Nov 12, 2018 at 12:00:48PM +0900, Masami Hiramatsu wrote:
> On Sun, 11 Nov 2018 11:43:49 -0800
> "Paul E. McKenney" <paul...@linux.ibm.com> wrote:
> 
> > Now that synchronize_rcu() waits for preempt-disable regions of code
> > as well as RCU read-side critical sections, synchronize_sched() can be
> > replaced by synchronize_rcu().  This commit therefore makes this change.
> 
> Do you mean that synchronize_rcu() can ensure that any interrupt handler
> (which should run in a preempt-disabled state) has finished (even on a
> non-preemptive kernel)?

Yes, but only as of this merge window.  See this commit:

3e3100989869 ("rcu: Defer reporting RCU-preempt quiescent states when disabled")

Don't try this in v4.19 or earlier, but v4.20 and later is OK.  ;-)
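
To make this concrete, here is a minimal sketch of the pattern the
kprobes code relies on.  It is illustrative only, not from the patch
(struct item, reader(), and remove_item() are made-up names): the
reader side runs with preemption disabled -- say, from a breakpoint
handler -- and takes no explicit rcu_read_lock(), yet on v4.20 and
later the updater's synchronize_rcu() still waits for it.

#include <linux/rculist.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct item {
	struct list_head list;
	int data;
};

static LIST_HEAD(items);	/* RCU-protected list */

/* Reader: runs in preempt-disabled context (e.g. a trap handler). */
static void reader(void)
{
	struct item *it;

	/* Preemption is already off; no rcu_read_lock() taken here. */
	list_for_each_entry_rcu(it, &items, list)
		(void)READ_ONCE(it->data);
}

/* Updater: unlink, wait for all readers (including reader()), free. */
static void remove_item(struct item *it)
{
	list_del_rcu(&it->list);
	synchronize_rcu();	/* now also waits for preempt-disable regions */
	kfree(it);
}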

                                                        Thanx, Paul

> If so, I agree with these changes.
> 
> Thank you,
> 
> > 
> > Signed-off-by: Paul E. McKenney <paul...@linux.ibm.com>
> > Cc: "Naveen N. Rao" <naveen.n....@linux.ibm.com>
> > Cc: Anil S Keshavamurthy <anil.s.keshavamur...@intel.com>
> > Cc: "David S. Miller" <da...@davemloft.net>
> > Cc: Masami Hiramatsu <mhira...@kernel.org>
> > ---
> >  kernel/kprobes.c | 10 +++++-----
> >  1 file changed, 5 insertions(+), 5 deletions(-)
> > 
> > diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> > index 90e98e233647..08e31d863191 100644
> > --- a/kernel/kprobes.c
> > +++ b/kernel/kprobes.c
> > @@ -229,7 +229,7 @@ static int collect_garbage_slots(struct kprobe_insn_cache *c)
> >     struct kprobe_insn_page *kip, *next;
> >  
> >     /* Ensure no-one is interrupted on the garbages */
> > -   synchronize_sched();
> > +   synchronize_rcu();
> >  
> >     list_for_each_entry_safe(kip, next, &c->pages, list) {
> >             int i;
> > @@ -1382,7 +1382,7 @@ static int register_aggr_kprobe(struct kprobe *orig_p, struct kprobe *p)
> >                     if (ret) {
> >                             ap->flags |= KPROBE_FLAG_DISABLED;
> >                             list_del_rcu(&p->list);
> > -                           synchronize_sched();
> > +                           synchronize_rcu();
> >                     }
> >             }
> >     }
> > @@ -1597,7 +1597,7 @@ int register_kprobe(struct kprobe *p)
> >             ret = arm_kprobe(p);
> >             if (ret) {
> >                     hlist_del_rcu(&p->hlist);
> > -                   synchronize_sched();
> > +                   synchronize_rcu();
> >                     goto out;
> >             }
> >     }
> > @@ -1776,7 +1776,7 @@ void unregister_kprobes(struct kprobe **kps, int num)
> >                     kps[i]->addr = NULL;
> >     mutex_unlock(&kprobe_mutex);
> >  
> > -   synchronize_sched();
> > +   synchronize_rcu();
> >     for (i = 0; i < num; i++)
> >             if (kps[i]->addr)
> >                     __unregister_kprobe_bottom(kps[i]);
> > @@ -1966,7 +1966,7 @@ void unregister_kretprobes(struct kretprobe **rps, int num)
> >                     rps[i]->kp.addr = NULL;
> >     mutex_unlock(&kprobe_mutex);
> >  
> > -   synchronize_sched();
> > +   synchronize_rcu();
> >     for (i = 0; i < num; i++) {
> >             if (rps[i]->kp.addr) {
> >                     __unregister_kprobe_bottom(&rps[i]->kp);
> > -- 
> > 2.17.1
> > 
> 
> 
> -- 
> Masami Hiramatsu <mhira...@kernel.org>
> 