On Sun, Aug 03, 2014 at 02:57:58PM +0200, Oleg Nesterov wrote:
> On 08/02, Paul E. McKenney wrote:
> >
> > On Fri, Aug 01, 2014 at 08:40:59PM +0200, Oleg Nesterov wrote:
> > > On 08/01, Paul E. McKenney wrote:
> > > >
> > > > On Fri, Aug 01, 2014 at 04:11:44PM +0200, Oleg Nesterov wrote:
> > > > > Not sure this makes any sense, but perhaps we can check for the new
> > > > > callbacks and start the next gp. IOW, the main loop roughly does
> > > > >
> > > > >       for (;;) {
> > > > >               list = rcu_tasks_cbs_head;
> > > > >               rcu_tasks_cbs_head = NULL;
> > > > >
> > > > >               if (!list)
> > > > >                       sleep();
> > > > >
> > > > >               synchronize_sched();
> > > > >
> > > > >               wait_for_rcu_tasks_holdout();
> > > > >
> > > > >               synchronize_sched();
> > > > >
> > > > >               process_callbacks(list);
> > > > >       }
> > > > >
> > > > > we can "join" 2 synchronize_sched's and do
> > > > >
> > > > >       ready_list = NULL;
> > > > >       for (;;) {
> > > > >               list = rcu_tasks_cbs_head;
> > > > >               rcu_tasks_cbs_head = NULL;
> > > > >
> > > > >               if (!list && !ready_list)
> > > > >                       sleep();
> > > > >
> > > > >               synchronize_sched();
> > > > >
> > > > >               if (ready_list) {
> > > > >                       process_callbacks(ready_list);
> > > > >                       ready_list = NULL;
> > > > >               }
> > > > >
> > > > >               if (!list)
> > > > >                       continue;
> > > > >
> > > > >               wait_for_rcu_tasks_holdout();
> > > > >               ready_list = list;
> > > > >       }
> > > >
> > > > The lack of barriers for the updates I am checking mean that I really
> > > > do need a synchronize_sched() on either side of the grace-period wait.
> > >
> > > Yes,
> > >
> > > > The grace period needs to guarantee that anything that happened on any
> > > > CPU before the start of the grace period happens before anything that
> > > > happens on any CPU after the end of the grace period.  If I leave off
> > > > either synchronize_sched(), we lose this guarantee.
> > >
> > > But the 2nd variant still has synchronize_sched() on both sides?
> >
> > Your second variant above?  Unless it is in wait_for_rcu_tasks_holdout(),
> > I am not seeing it.
> 
> I guess I probably misunderstood you from the very beginning. And now I am
> curious what exactly I missed...
> 
> The code above doesn't do process_callbacks() after wait_for_rcu_tasks_holdout(),
> it does this only after another synchronize_sched(). The only difference is that
> we dequeue the next generation of the pending rcu_tasks_cbs_head callbacks.
> 
> IOW, let's look at the current code. Suppose that synchronize_rcu_tasks() is
> called when rcu_tasks_kthread() sleeps in wait_for_rcu_tasks_holdout(). In
> this case the new wakeme_after_rcu callback will sit in rcu_tasks_cbs_head
> until rcu_tasks_kthread() does the 2nd synchronize_sched() + process_callbacks().
> Only after that will it be dequeued, and rcu_tasks_kthread() will start
> another gp.
> 
> This means that we have 3 synchronize_sched()'s before synchronize_rcu_tasks()
> returns.
> 
> Do we really need this? With the 2nd variant the new callback will be dequeued
> right after wait_for_rcu_tasks_holdout(), and we only have 2 necessary
> synchronize_sched()'s around wait_for_rcu_tasks_holdout().
> 
> But it seems that I missed something else. Could you please spell it out?

You missed nothing.  I missed the fact that you rolled the loop, using
ready_list and list.

If I understand correctly, your goal is to remove a synchronize_sched()
worth of latency from the overall RCU-tasks callback latency.  Or am I
still confused?
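
If so, for anyone following along, the latency is easiest to see from the
usual wait-for-callback pattern that synchronize_rcu_tasks() uses, roughly
the following sketch (illustrative, not the exact code):

	/* Callback that wakes up the task waiting for the grace period. */
	static void wakeme_after_rcu(struct rcu_head *head)
	{
		struct rcu_synchronize *rs =
			container_of(head, struct rcu_synchronize, head);

		complete(&rs->completion);
	}

	void synchronize_rcu_tasks(void)
	{
		struct rcu_synchronize rs;

		init_rcu_head_on_stack(&rs.head);
		init_completion(&rs.completion);
		/* Queue onto rcu_tasks_cbs_head and wake rcu_tasks_kthread(). */
		call_rcu_tasks(&rs.head, wakeme_after_rcu);
		/* Blocks until process_callbacks() invokes wakeme_after_rcu(). */
		wait_for_completion(&rs.completion);
		destroy_rcu_head_on_stack(&rs.head);
	}

The caller wakes up only when process_callbacks() gets around to its
callback, so every synchronize_sched() the kthread executes before that
point adds directly to the caller's latency.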

                                                        Thanx, Paul
