Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()

2014-08-01 Thread Oleg Nesterov
On 08/01, Paul E. McKenney wrote:
>
> On Fri, Aug 01, 2014 at 05:53:10PM +0200, Oleg Nesterov wrote:
> > On 07/30, Paul E. McKenney wrote:
> > >
> > > + rcu_read_lock();
> > > + for_each_process_thread(g, t) {
> > > + if (t != current && ACCESS_ONCE(t->on_rq) &&
> > > + !is_idle_task(t)) {
> > > + t->rcu_tasks_nvcsw = ACCESS_ONCE(t->nvcsw);
> > > + t->rcu_tasks_holdout = 1;
> > > + list_add(&t->rcu_tasks_holdout_list,
> > > +  &rcu_tasks_holdouts);
> > > + }
> > > + }
> > > + rcu_read_unlock();
> >
> > Wait, unless I missed something this can't work...
> >
> > The problem is, once the exiting task passes exit_notify() it can
> > be removed from rcu lists.
> >
> > Now suppose that (say) proc_exit_connector() has a probe, and this
> > task has jumped into trampoline and it was preempted there.
> >
> > No?
>
> OK, this sounds to me like another vote for get_task_struct() as used
> in the v3 series.  Or am I missing something?

Ah. Sorry for confusion, I replied to the wrong email, let me resend...

Oleg.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()

2014-08-01 Thread Paul E. McKenney
On Fri, Aug 01, 2014 at 05:53:10PM +0200, Oleg Nesterov wrote:
> On 07/30, Paul E. McKenney wrote:
> >
> > +   rcu_read_lock();
> > +   for_each_process_thread(g, t) {
> > +   if (t != current && ACCESS_ONCE(t->on_rq) &&
> > +   !is_idle_task(t)) {
> > +   t->rcu_tasks_nvcsw = ACCESS_ONCE(t->nvcsw);
> > +   t->rcu_tasks_holdout = 1;
> > +   list_add(&t->rcu_tasks_holdout_list,
> > +&rcu_tasks_holdouts);
> > +   }
> > +   }
> > +   rcu_read_unlock();
> 
> Wait, unless I missed something this can't work...
> 
> The problem is, once the exiting task passes exit_notify() it can
> be removed from rcu lists.
> 
> Now suppose that (say) proc_exit_connector() has a probe, and this
> task has jumped into trampoline and it was preempted there.
> 
> No?

OK, this sounds to me like another vote for get_task_struct() as used
in the v3 series.  Or am I missing something?

Thanx, Paul



Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()

2014-08-01 Thread Oleg Nesterov
On 07/30, Paul E. McKenney wrote:
>
> + rcu_read_lock();
> + for_each_process_thread(g, t) {
> + if (t != current && ACCESS_ONCE(t->on_rq) &&
> + !is_idle_task(t)) {
> + t->rcu_tasks_nvcsw = ACCESS_ONCE(t->nvcsw);
> + t->rcu_tasks_holdout = 1;
> + list_add(&t->rcu_tasks_holdout_list,
> +  &rcu_tasks_holdouts);
> + }
> + }
> + rcu_read_unlock();

Wait, unless I missed something this can't work...

The problem is, once the exiting task passes exit_notify() it can
be removed from rcu lists.

Now suppose that (say) proc_exit_connector() has a probe, and this
task has jumped into trampoline and it was preempted there.

No?

Oleg.



Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()

2014-07-31 Thread Paul E. McKenney
On Fri, Aug 01, 2014 at 08:53:38AM +0800, Lai Jiangshan wrote:
> On 08/01/2014 12:09 AM, Paul E. McKenney wrote:
> 
> > 
> >>> + /*
> >>> +  * There were callbacks, so we need to wait for an
> >>> +  * RCU-tasks grace period.  Start off by scanning
> >>> +  * the task list for tasks that are not already
> >>> +  * voluntarily blocked.  Mark these tasks and make
> >>> +  * a list of them in rcu_tasks_holdouts.
> >>> +  */
> >>> + rcu_read_lock();
> >>> + for_each_process_thread(g, t) {
> >>> + if (t != current && ACCESS_ONCE(t->on_rq) &&
> >>> + !is_idle_task(t)) {
> >>
> >> What happens when the trampoline is on the idle task?
> >>
> >> I think we need to use schedule_on_each_cpu() to replace one of
> >> the synchronize_sched() in this function. (or other stuff which can
> >> cause real schedule for *ALL* online CPUs).
> > 
> > Well, that is one of the questions in the 0/10 cover letter.  If it turns
> > out to be necessary to worry about idle-task trampolines, it should be
> > possible to avoid hammering all idle CPUs in the common case.  Though maybe
> > battery-powered devices won't need RCU-tasks.
> 
> Trampolines on a NO_HZ idle CPU can run arbitrarily long (for example, an
> SMI can happen inside the trampoline).  So only a real schedule on the idle
> CPU is reliable to me.

You might well be right, but first let's see if Steven needs this to
work in the idle task to begin with.  If he doesn't, then there is no
point in worrying about it.  If he does, I bet I can come up with a
trick or two.  ;-)

Thanx, Paul



Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()

2014-07-31 Thread Lai Jiangshan
On 08/01/2014 12:09 AM, Paul E. McKenney wrote:

> 
>>> +   /*
>>> +* There were callbacks, so we need to wait for an
>>> +* RCU-tasks grace period.  Start off by scanning
>>> +* the task list for tasks that are not already
>>> +* voluntarily blocked.  Mark these tasks and make
>>> +* a list of them in rcu_tasks_holdouts.
>>> +*/
>>> +   rcu_read_lock();
>>> +   for_each_process_thread(g, t) {
>>> +   if (t != current && ACCESS_ONCE(t->on_rq) &&
>>> +   !is_idle_task(t)) {
>>
>> What happens when the trampoline is on the idle task?
>>
>> I think we need to use schedule_on_each_cpu() to replace one of
>> the synchronize_sched() in this function. (or other stuff which can
>> cause real schedule for *ALL* online CPUs).
> 
> Well, that is one of the questions in the 0/10 cover letter.  If it turns
> out to be necessary to worry about idle-task trampolines, it should be
> possible to avoid hammering all idle CPUs in the common case.  Though maybe
> battery-powered devices won't need RCU-tasks.
> 

Trampolines on a NO_HZ idle CPU can run arbitrarily long (for example, an
SMI can happen inside the trampoline).  So only a real schedule on the idle
CPU is reliable to me.


Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()

2014-07-31 Thread Paul E. McKenney
On Thu, Jul 31, 2014 at 07:27:52PM +0200, Oleg Nesterov wrote:
> On 07/31, Paul E. McKenney wrote:
> >
> > On Thu, Jul 31, 2014 at 06:31:38PM +0200, Oleg Nesterov wrote:
> >
> > > But can't we avoid get_task_struct()? This can pin a lot of task_struct's.
> > > Can't we just add list_del_rcu(holdout_list) into __unhash_process() ?
> >
> > If I add the list_del_rcu() there, then I am back to a concurrent list,
> > which I would like to avoid.  Don't get me wrong, it was fun playing with
> > the list-locked stuff, but best to avoid it if we can.
> 
> OK,
> 
> > The nice thing about using get_task_struct to lock
> > them down is that -only- the task_struct itself is locked down -- the
> > task can be reaped and so on.
> 
> I understand. But OTOH it would be nice not to pin this memory if the
> task was already (auto)reaped.
> 
> And afaics the number of pinned task_struct's is not bounded. In theory
> it is not even limited by, say, PID_MAX_LIMIT. A thread can exit and reap
> itself right after get_task_struct() but create another running thread
> which can be noticed by rcu_tasks_kthread() too.

Good point!  Maybe this means that I need to have rcu_tasks_kthread()
be more energetic if memory runs low, perhaps via an OOM handler.
Would that help?

> > > We only need to ensure that list_add() above can't race with that list_del(),
> > > perhaps we can tolerate lock_task_sighand() ?
> >
> > I am worried about a task that does a voluntary context switch, then exits.
> > This could result in rcu_tasks_kthread() and __unhash_process() both
> > wanting to dequeue at the same time, right?
> 
> Oh yes, I was very wrong. And we do not want to abuse tasklist_lock...
> 
> OK, let me try to read the patch first.

Not a problem, looking forward to your feedback!

Thanx, Paul



Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()

2014-07-31 Thread Oleg Nesterov
On 07/31, Paul E. McKenney wrote:
>
> On Thu, Jul 31, 2014 at 06:31:38PM +0200, Oleg Nesterov wrote:
>
> > But can't we avoid get_task_struct()? This can pin a lot of task_struct's.
> > Can't we just add list_del_rcu(holdout_list) into __unhash_process() ?
>
> If I add the list_del_rcu() there, then I am back to a concurrent list,
> which I would like to avoid.  Don't get me wrong, it was fun playing with
> the list-locked stuff, but best to avoid it if we can.

OK,

> The nice thing about using get_task_struct to lock
> them down is that -only- the task_struct itself is locked down -- the
> task can be reaped and so on.

I understand. But OTOH it would be nice not to pin this memory if the
task was already (auto)reaped.

And afaics the number of pinned task_struct's is not bounded. In theory
it is not even limited by, say, PID_MAX_LIMIT. A thread can exit and reap
itself right after get_task_struct() but create another running thread
which can be noticed by rcu_tasks_kthread() too.

> > We only need to ensure that list_add() above can't race with that list_del(),
> > perhaps we can tolerate lock_task_sighand() ?
>
> I am worried about a task that does a voluntary context switch, then exits.
> This could result in rcu_tasks_kthread() and __unhash_process() both
> wanting to dequeue at the same time, right?

Oh yes, I was very wrong. And we do not want to abuse tasklist_lock...

OK, let me try to read the patch first.

Oleg.



Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()

2014-07-31 Thread Paul E. McKenney
On Thu, Jul 31, 2014 at 06:31:38PM +0200, Oleg Nesterov wrote:
> Again, sorry, I didn't read the patches yet, just noticed your discussion...
> 
> On 07/31, Paul E. McKenney wrote:
> >
> > On Thu, Jul 31, 2014 at 03:30:12PM +0800, Lai Jiangshan wrote:
> >
> > > > +   t->rcu_tasks_nvcsw = ACCESS_ONCE(t->nvcsw);
> > > > +   t->rcu_tasks_holdout = 1;
> > > > +   list_add(&t->rcu_tasks_holdout_list,
> > > > +&rcu_tasks_holdouts);
> > >
> > > I think get_task_struct() is needed here to avoid the task disappears.
> >
> > Hmmm...  Let's see...
> >
> > Looks like get_task_struct() does a blind atomic increment of ->usage.
> > And put_task_struct() does an atomic_dec_and_test().  So one question
> > is "what prevents us from doing get_task_struct() after the final
> > put_task_struct() has pushed ->usage down to zero?"
> >
> > Hopefully there is a grace period in there somewhere, otherwise it will
> > be necessary to take the task-list lock, which I would like to avoid.
> >
> > Looks like the call_rcu() of delayed_put_task_struct() in release_task()
> > might be doing this.
> 
> Yes, exactly, so get_task_struct() is always fine as long as task_struct
> itself is protected by RCU.
> 
> But can't we avoid get_task_struct()? This can pin a lot of task_struct's.
> Can't we just add list_del_rcu(holdout_list) into __unhash_process() ?

If I add the list_del_rcu() there, then I am back to a concurrent list,
which I would like to avoid.  Don't get me wrong, it was fun playing with
the list-locked stuff, but best to avoid it if we can.

Also, the current implementation implicitly locks down the task_structs
via the exit path.  The nice thing about using get_task_struct to lock
them down is that -only- the task_struct itself is locked down -- the
task can be reaped and so on.  In contrast, the current exit_rcu_tasks()
approach delays the semantic exit instead of just hanging onto the
underlying task_struct a bit longer.

> We only need to ensure that list_add() above can't race with that list_del(),
> perhaps we can tolerate lock_task_sighand() ?

I am worried about a task that does a voluntary context switch, then exits.
This could result in rcu_tasks_kthread() and __unhash_process() both
wanting to dequeue at the same time, right?

Thanx, Paul



Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()

2014-07-31 Thread Peter Zijlstra
On Thu, Jul 31, 2014 at 09:47:39AM -0700, Paul E. McKenney wrote:

> The idea here is to avoid having the kthread and to avoid providing
> callbacks, but to instead do the work currently done by the kthread
> in a synchronous function called by the updater?

yep.

> My concern with this approach is that I bet that Steven will want
> to have multiple concurrent calls to remove trampolines, and if I
> need to support that efficiently, I end up with more-painful counter
> tricks instead of the current callbacks.
> 
> Or am I confused about what you are suggesting?

Nope, I don't think you are.

Typically things like ftrace are global and so we don't end up
with concurrency like that, but who knows, we'll have to wait for Steve
to get back.

> > Now, if only task_work would work for kernel threads, then we could do
> > something far nicer...
> 
> OK, I'll bite...  What is task_work?  I am guessing that you are not
> talking about struct ql4_task_data in drivers/scsi/qla4xxx/ql4_def.h.  ;-)

kernel/task_work.c include/linux/task_work.h

It's a worklet run on return to userspace or at exit -- i.e., safe points
for our tasks.

The problem, of course, is that kernel threads never do the userspace part and
typically do not do the exit thing either.

But the idea was, do the task scan, add a task_work to each that need
attention, have the work call complete and then wait for the completion.
Avoids the polling entirely.

If only kernel threads..




Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()

2014-07-31 Thread Paul E. McKenney
On Thu, Jul 31, 2014 at 06:20:30PM +0200, Peter Zijlstra wrote:
> On Thu, Jul 31, 2014 at 09:09:24AM -0700, Paul E. McKenney wrote:
> > Well, that is one of the questions in the 0/10 cover letter.  If it turns
> > out to be necessary to worry about idle-task trampolines, it should be
> > possible to avoid hammering all idle CPUs in the common case.  Though maybe
> > battery-powered devices won't need RCU-tasks.
> 
> So do we really need this to be a full blown RCU implementation? Could
> we not live with something like a task_barrier() function that
> guarantees the same as this rcu_task_synchronize() or somesuch?
> 
> So for rostedt's case he could patch out all trampolines, call
> task_barrier() and then free them.
> 
> We can make task_barrier() force each CPU to resched once -- which, if
> we do that after the tasks have all scheduled once, could be the empty
> set.
> 
> It also pushes the cost of this machinery into the caller of
> task_barrier() and not in some random kthread (yet another one).

The idea here is to avoid having the kthread and to avoid providing
callbacks, but to instead do the work currently done by the kthread
in a synchronous function called by the updater?

My concern with this approach is that I bet that Steven will want
to have multiple concurrent calls to remove trampolines, and if I
need to support that efficiently, I end up with more-painful counter
tricks instead of the current callbacks.

Or am I confused about what you are suggesting?

> Now, if only task_work would work for kernel threads, then we could do
> something far nicer...

OK, I'll bite...  What is task_work?  I am guessing that you are not
talking about struct ql4_task_data in drivers/scsi/qla4xxx/ql4_def.h.  ;-)

Thanx, Paul



Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()

2014-07-31 Thread Oleg Nesterov
Again, sorry, I didn't read the patches yet, just noticed your discussion...

On 07/31, Paul E. McKenney wrote:
>
> On Thu, Jul 31, 2014 at 03:30:12PM +0800, Lai Jiangshan wrote:
>
> > > + t->rcu_tasks_nvcsw = ACCESS_ONCE(t->nvcsw);
> > > + t->rcu_tasks_holdout = 1;
> > > + list_add(&t->rcu_tasks_holdout_list,
> > > +  &rcu_tasks_holdouts);
> >
> > I think get_task_struct() is needed here to avoid the task disappears.
>
> Hmmm...  Let's see...
>
> Looks like get_task_struct() does a blind atomic increment of ->usage.
> And put_task_struct() does an atomic_dec_and_test().  So one question
> is "what prevents us from doing get_task_struct() after the final
> put_task_struct() has pushed ->usage down to zero?"
>
> Hopefully there is a grace period in there somewhere, otherwise it will
> be necessary to take the task-list lock, which I would like to avoid.
>
> Looks like the call_rcu() of delayed_put_task_struct() in release_task()
> might be doing this.

Yes, exactly, so get_task_struct() is always fine as long as task_struct
itself is protected by RCU.

But can't we avoid get_task_struct()? This can pin a lot of task_struct's.
Can't we just add list_del_rcu(holdout_list) into __unhash_process() ?

We only need to ensure that list_add() above can't race with that list_del(),
perhaps we can tolerate lock_task_sighand() ?

Oleg.



Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()

2014-07-31 Thread Peter Zijlstra
On Thu, Jul 31, 2014 at 09:09:24AM -0700, Paul E. McKenney wrote:
> Well, that is one of the questions in the 0/10 cover letter.  If it turns
> out to be necessary to worry about idle-task trampolines, it should be
> possible to avoid hammering all idle CPUs in the common case.  Though maybe
> battery-powered devices won't need RCU-tasks.

So do we really need this to be a full blown RCU implementation? Could
we not live with something like a task_barrier() function that
guarantees the same as this rcu_task_synchronize() or somesuch?

So for rostedt's case he could patch out all trampolines, call
task_barrier() and then free them.

We can make task_barrier() force each CPU to resched once -- which, if
we do that after the tasks have all scheduled once, could be the empty
set.

It also pushes the cost of this machinery into the caller of
task_barrier() and not in some random kthread (yet another one).

Now, if only task_work would work for kernel threads, then we could do
something far nicer...




Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()

2014-07-31 Thread Paul E. McKenney
On Thu, Jul 31, 2014 at 03:30:12PM +0800, Lai Jiangshan wrote:
> On 07/31/2014 08:39 AM, Paul E. McKenney wrote:
> > From: "Paul E. McKenney" 
> > 
> > This commit adds a new RCU-tasks flavor of RCU, which provides
> > call_rcu_tasks().  This RCU flavor's quiescent states are voluntary
> > context switch (not preemption!), userspace execution, and the idle loop.
> > Note that unlike other RCU flavors, these quiescent states occur in tasks,
> > not necessarily CPUs.  Includes fixes from Steven Rostedt.
> > 
> > This RCU flavor is assumed to have very infrequent latency-tolerant
> > updaters.  This assumption permits significant simplifications, including
> > a single global callback list protected by a single global lock, along
> > with a single linked list containing all tasks that have not yet passed
> > through a quiescent state.  If experience shows this assumption to be
> > incorrect, the required additional complexity will be added.
> > 
> > Suggested-by: Steven Rostedt 
> > Signed-off-by: Paul E. McKenney 
> > ---
> >  include/linux/init_task.h |   9 +++
> >  include/linux/rcupdate.h  |  36 ++
> >  include/linux/sched.h |  23 +++
> >  init/Kconfig  |  10 +++
> >  kernel/rcu/tiny.c |   2 +
> >  kernel/rcu/tree.c |   2 +
> >  kernel/rcu/update.c   | 164 
> > ++
> >  7 files changed, 235 insertions(+), 11 deletions(-)
> > 
> > diff --git a/include/linux/init_task.h b/include/linux/init_task.h
> > index 6df7f9fe0d01..78715ea7c30c 100644
> > --- a/include/linux/init_task.h
> > +++ b/include/linux/init_task.h
> > @@ -124,6 +124,14 @@ extern struct group_info init_groups;
> >  #else
> >  #define INIT_TASK_RCU_PREEMPT(tsk)
> >  #endif
> > +#ifdef CONFIG_TASKS_RCU
> > +#define INIT_TASK_RCU_TASKS(tsk)   \
> > +   .rcu_tasks_holdout = false, \
> > +   .rcu_tasks_holdout_list =   \
> > +   LIST_HEAD_INIT(tsk.rcu_tasks_holdout_list),
> > +#else
> > +#define INIT_TASK_RCU_TASKS(tsk)
> > +#endif
> >  
> >  extern struct cred init_cred;
> >  
> > @@ -231,6 +239,7 @@ extern struct task_group root_task_group;
> > INIT_FTRACE_GRAPH   \
> > INIT_TRACE_RECURSION\
> > INIT_TASK_RCU_PREEMPT(tsk)  \
> > +   INIT_TASK_RCU_TASKS(tsk)\
> > INIT_CPUSET_SEQ(tsk)\
> > INIT_RT_MUTEXES(tsk)\
> > INIT_VTIME(tsk) \
> > diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> > index 6a94cc8b1ca0..05656b504295 100644
> > --- a/include/linux/rcupdate.h
> > +++ b/include/linux/rcupdate.h
> > @@ -197,6 +197,26 @@ void call_rcu_sched(struct rcu_head *head,
> >  
> >  void synchronize_sched(void);
> >  
> > +/**
> > + * call_rcu_tasks() - Queue an RCU for invocation task-based grace period
> > + * @head: structure to be used for queueing the RCU updates.
> > + * @func: actual callback function to be invoked after the grace period
> > + *
> > + * The callback function will be invoked some time after a full grace
> > + * period elapses, in other words after all currently executing RCU
> > + * read-side critical sections have completed. call_rcu_tasks() assumes
> > + * that the read-side critical sections end at a voluntary context
> > + * switch (not a preemption!), entry into idle, or transition to usermode
> > + * execution.  As such, there are no read-side primitives analogous to
> > + * rcu_read_lock() and rcu_read_unlock() because this primitive is intended
> > + * to determine that all tasks have passed through a safe state, not so
> > > + * much for data-structure synchronization.
> > + *
> > + * See the description of call_rcu() for more detailed information on
> > + * memory ordering guarantees.
> > + */
> > +void call_rcu_tasks(struct rcu_head *head, void (*func)(struct rcu_head *head));
> > +
> >  #ifdef CONFIG_PREEMPT_RCU
> >  
> >  void __rcu_read_lock(void);
> > @@ -294,6 +314,22 @@ static inline void rcu_user_hooks_switch(struct 
> > task_struct *prev,
> > rcu_irq_exit(); \
> > } while (0)
> >  
> > +/*
> > + * Note a voluntary context switch for RCU-tasks benefit.  This is a
> > + * macro rather than an inline function to avoid #include hell.
> > + */
> > +#ifdef CONFIG_TASKS_RCU
> > +#define rcu_note_voluntary_context_switch(t) \
> > +   do { \
> > +   preempt_disable(); /* Exclude synchronize_sched(); */ \
> > +   if ((t)->rcu_tasks_holdout) \
> > +   smp_store_release(&(t)->rcu_tasks_holdout, 0); \
> > +   preempt_enable(); \
> > +   } while (0)
> > +#else /* #ifdef CONFIG_TASKS_RCU */
> > +#define rcu_note_voluntary_context_switch(t)  

Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()

2014-07-31 Thread Lai Jiangshan
On 07/31/2014 08:39 AM, Paul E. McKenney wrote:
> From: "Paul E. McKenney" 
> 
> This commit adds a new RCU-tasks flavor of RCU, which provides
> call_rcu_tasks().  This RCU flavor's quiescent states are voluntary
> context switch (not preemption!), userspace execution, and the idle loop.
> Note that unlike other RCU flavors, these quiescent states occur in tasks,
> not necessarily CPUs.  Includes fixes from Steven Rostedt.
> 
> This RCU flavor is assumed to have very infrequent latency-tolerant
> updaters.  This assumption permits significant simplifications, including
> a single global callback list protected by a single global lock, along
> with a single linked list containing all tasks that have not yet passed
> through a quiescent state.  If experience shows this assumption to be
> incorrect, the required additional complexity will be added.
> 
> Suggested-by: Steven Rostedt 
> Signed-off-by: Paul E. McKenney 
> ---
>  include/linux/init_task.h |   9 +++
>  include/linux/rcupdate.h  |  36 ++
>  include/linux/sched.h |  23 +++
>  init/Kconfig  |  10 +++
>  kernel/rcu/tiny.c |   2 +
>  kernel/rcu/tree.c |   2 +
>  kernel/rcu/update.c   | 164 
> ++
>  7 files changed, 235 insertions(+), 11 deletions(-)
> 
> diff --git a/include/linux/init_task.h b/include/linux/init_task.h
> index 6df7f9fe0d01..78715ea7c30c 100644
> --- a/include/linux/init_task.h
> +++ b/include/linux/init_task.h
> @@ -124,6 +124,14 @@ extern struct group_info init_groups;
>  #else
>  #define INIT_TASK_RCU_PREEMPT(tsk)
>  #endif
> +#ifdef CONFIG_TASKS_RCU
> +#define INIT_TASK_RCU_TASKS(tsk) \
> + .rcu_tasks_holdout = false, \
> + .rcu_tasks_holdout_list =   \
> + LIST_HEAD_INIT(tsk.rcu_tasks_holdout_list),
> +#else
> +#define INIT_TASK_RCU_TASKS(tsk)
> +#endif
>  
>  extern struct cred init_cred;
>  
> @@ -231,6 +239,7 @@ extern struct task_group root_task_group;
>   INIT_FTRACE_GRAPH   \
>   INIT_TRACE_RECURSION\
>   INIT_TASK_RCU_PREEMPT(tsk)  \
> + INIT_TASK_RCU_TASKS(tsk)\
>   INIT_CPUSET_SEQ(tsk)\
>   INIT_RT_MUTEXES(tsk)\
>   INIT_VTIME(tsk) \
> diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> index 6a94cc8b1ca0..05656b504295 100644
> --- a/include/linux/rcupdate.h
> +++ b/include/linux/rcupdate.h
> @@ -197,6 +197,26 @@ void call_rcu_sched(struct rcu_head *head,
>  
>  void synchronize_sched(void);
>  
> +/**
> + * call_rcu_tasks() - Queue an RCU for invocation task-based grace period
> + * @head: structure to be used for queueing the RCU updates.
> + * @func: actual callback function to be invoked after the grace period
> + *
> + * The callback function will be invoked some time after a full grace
> + * period elapses, in other words after all currently executing RCU
> + * read-side critical sections have completed. call_rcu_tasks() assumes
> + * that the read-side critical sections end at a voluntary context
> + * switch (not a preemption!), entry into idle, or transition to usermode
> + * execution.  As such, there are no read-side primitives analogous to
> + * rcu_read_lock() and rcu_read_unlock() because this primitive is intended
> + * to determine that all tasks have passed through a safe state, not so
> + * much for data-structure synchronization.
> + *
> + * See the description of call_rcu() for more detailed information on
> + * memory ordering guarantees.
> + */
> +void call_rcu_tasks(struct rcu_head *head, void (*func)(struct rcu_head *head));
> +
>  #ifdef CONFIG_PREEMPT_RCU
>  
>  void __rcu_read_lock(void);
> @@ -294,6 +314,22 @@ static inline void rcu_user_hooks_switch(struct 
> task_struct *prev,
>   rcu_irq_exit(); \
>   } while (0)
>  
> +/*
> + * Note a voluntary context switch for RCU-tasks benefit.  This is a
> + * macro rather than an inline function to avoid #include hell.
> + */
> +#ifdef CONFIG_TASKS_RCU
> +#define rcu_note_voluntary_context_switch(t) \
> + do { \
> + preempt_disable(); /* Exclude synchronize_sched(); */ \
> + if ((t)->rcu_tasks_holdout) \
> + smp_store_release(&(t)->rcu_tasks_holdout, 0); \
> + preempt_enable(); \
> + } while (0)
> +#else /* #ifdef CONFIG_TASKS_RCU */
> +#define rcu_note_voluntary_context_switch(t) do { } while (0)
> +#endif /* #else #ifdef CONFIG_TASKS_RCU */
> +
>  #if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) || 
> defined(CONFIG_SMP)
>  bool __rcu_is_watching(void);
>  #endif /* #if 

Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()

2014-07-31 Thread Lai Jiangshan
On 07/31/2014 08:39 AM, Paul E. McKenney wrote:
 From: Paul E. McKenney paul...@linux.vnet.ibm.com
 
 This commit adds a new RCU-tasks flavor of RCU, which provides
 call_rcu_tasks().  This RCU flavor's quiescent states are voluntary
 context switch (not preemption!), userspace execution, and the idle loop.
 Note that unlike other RCU flavors, these quiescent states occur in tasks,
 not necessarily CPUs.  Includes fixes from Steven Rostedt.
 
 This RCU flavor is assumed to have very infrequent latency-tolerant
 updaters.  This assumption permits significant simplifications, including
 a single global callback list protected by a single global lock, along
 with a single linked list containing all tasks that have not yet passed
 through a quiescent state.  If experience shows this assumption to be
 incorrect, the required additional complexity will be added.
 
 Suggested-by: Steven Rostedt rost...@goodmis.org
 Signed-off-by: Paul E. McKenney paul...@linux.vnet.ibm.com
 ---
  include/linux/init_task.h |   9 +++
  include/linux/rcupdate.h  |  36 ++
  include/linux/sched.h |  23 +++
  init/Kconfig  |  10 +++
  kernel/rcu/tiny.c |   2 +
  kernel/rcu/tree.c |   2 +
  kernel/rcu/update.c   | 164 
 ++
  7 files changed, 235 insertions(+), 11 deletions(-)
 
 diff --git a/include/linux/init_task.h b/include/linux/init_task.h
 index 6df7f9fe0d01..78715ea7c30c 100644
 --- a/include/linux/init_task.h
 +++ b/include/linux/init_task.h
 @@ -124,6 +124,14 @@ extern struct group_info init_groups;
  #else
  #define INIT_TASK_RCU_PREEMPT(tsk)
  #endif
 +#ifdef CONFIG_TASKS_RCU
 +#define INIT_TASK_RCU_TASKS(tsk) \
 + .rcu_tasks_holdout = false, \
 + .rcu_tasks_holdout_list =   \
 + LIST_HEAD_INIT(tsk.rcu_tasks_holdout_list),
 +#else
 +#define INIT_TASK_RCU_TASKS(tsk)
 +#endif
  
  extern struct cred init_cred;
  
 @@ -231,6 +239,7 @@ extern struct task_group root_task_group;
   INIT_FTRACE_GRAPH   \
   INIT_TRACE_RECURSION\
   INIT_TASK_RCU_PREEMPT(tsk)  \
 + INIT_TASK_RCU_TASKS(tsk)\
   INIT_CPUSET_SEQ(tsk)\
   INIT_RT_MUTEXES(tsk)\
   INIT_VTIME(tsk) \
 diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
 index 6a94cc8b1ca0..05656b504295 100644
 --- a/include/linux/rcupdate.h
 +++ b/include/linux/rcupdate.h
 @@ -197,6 +197,26 @@ void call_rcu_sched(struct rcu_head *head,
  
  void synchronize_sched(void);
  
 +/**
 + * call_rcu_tasks() - Queue an RCU callback for invocation after a task-based grace period
 + * @head: structure to be used for queueing the RCU updates.
 + * @func: actual callback function to be invoked after the grace period
 + *
 + * The callback function will be invoked some time after a full grace
 + * period elapses, in other words after all currently executing RCU
 + * read-side critical sections have completed. call_rcu_tasks() assumes
 + * that the read-side critical sections end at a voluntary context
 + * switch (not a preemption!), entry into idle, or transition to usermode
 + * execution.  As such, there are no read-side primitives analogous to
 + * rcu_read_lock() and rcu_read_unlock() because this primitive is intended
 + * to determine that all tasks have passed through a safe state, not so
 + * much for data-structure synchronization.
 + *
 + * See the description of call_rcu() for more detailed information on
 + * memory ordering guarantees.
 + */
 +void call_rcu_tasks(struct rcu_head *head, void (*func)(struct rcu_head 
 *head));
 +
  #ifdef CONFIG_PREEMPT_RCU
  
  void __rcu_read_lock(void);
 @@ -294,6 +314,22 @@ static inline void rcu_user_hooks_switch(struct 
 task_struct *prev,
   rcu_irq_exit(); \
   } while (0)
  
 +/*
 + * Note a voluntary context switch for RCU-tasks benefit.  This is a
 + * macro rather than an inline function to avoid #include hell.
 + */
 +#ifdef CONFIG_TASKS_RCU
 +#define rcu_note_voluntary_context_switch(t) \
 + do { \
 + preempt_disable(); /* Exclude synchronize_sched(); */ \
 + if ((t)->rcu_tasks_holdout) \
 + smp_store_release(&(t)->rcu_tasks_holdout, 0); \
 + preempt_enable(); \
 + } while (0)
 +#else /* #ifdef CONFIG_TASKS_RCU */
 +#define rcu_note_voluntary_context_switch(t) do { } while (0)
 +#endif /* #else #ifdef CONFIG_TASKS_RCU */
 +
  #if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) || 
 defined(CONFIG_SMP)
  bool __rcu_is_watching(void);
  #endif /* #if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) 
 || 

Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()

2014-07-31 Thread Paul E. McKenney
On Thu, Jul 31, 2014 at 03:30:12PM +0800, Lai Jiangshan wrote:
 On 07/31/2014 08:39 AM, Paul E. McKenney wrote:
  From: Paul E. McKenney paul...@linux.vnet.ibm.com
  
  This commit adds a new RCU-tasks flavor of RCU, which provides
  call_rcu_tasks().  This RCU flavor's quiescent states are voluntary
  context switch (not preemption!), userspace execution, and the idle loop.
  Note that unlike other RCU flavors, these quiescent states occur in tasks,
  not necessarily CPUs.  Includes fixes from Steven Rostedt.
  
  This RCU flavor is assumed to have very infrequent latency-tolerant
  updaters.  This assumption permits significant simplifications, including
  a single global callback list protected by a single global lock, along
  with a single linked list containing all tasks that have not yet passed
  through a quiescent state.  If experience shows this assumption to be
  incorrect, the required additional complexity will be added.
  
  Suggested-by: Steven Rostedt rost...@goodmis.org
  Signed-off-by: Paul E. McKenney paul...@linux.vnet.ibm.com
  ---
   include/linux/init_task.h |   9 +++
   include/linux/rcupdate.h  |  36 ++
   include/linux/sched.h |  23 +++
   init/Kconfig  |  10 +++
   kernel/rcu/tiny.c |   2 +
   kernel/rcu/tree.c |   2 +
   kernel/rcu/update.c   | 164 
  ++
   7 files changed, 235 insertions(+), 11 deletions(-)
  
  diff --git a/include/linux/init_task.h b/include/linux/init_task.h
  index 6df7f9fe0d01..78715ea7c30c 100644
  --- a/include/linux/init_task.h
  +++ b/include/linux/init_task.h
  @@ -124,6 +124,14 @@ extern struct group_info init_groups;
   #else
   #define INIT_TASK_RCU_PREEMPT(tsk)
   #endif
  +#ifdef CONFIG_TASKS_RCU
  +#define INIT_TASK_RCU_TASKS(tsk)   \
  +   .rcu_tasks_holdout = false, \
  +   .rcu_tasks_holdout_list =   \
  +   LIST_HEAD_INIT(tsk.rcu_tasks_holdout_list),
  +#else
  +#define INIT_TASK_RCU_TASKS(tsk)
  +#endif
   
   extern struct cred init_cred;
   
  @@ -231,6 +239,7 @@ extern struct task_group root_task_group;
  INIT_FTRACE_GRAPH   \
  INIT_TRACE_RECURSION\
  INIT_TASK_RCU_PREEMPT(tsk)  \
  +   INIT_TASK_RCU_TASKS(tsk)\
  INIT_CPUSET_SEQ(tsk)\
  INIT_RT_MUTEXES(tsk)\
  INIT_VTIME(tsk) \
  diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
  index 6a94cc8b1ca0..05656b504295 100644
  --- a/include/linux/rcupdate.h
  +++ b/include/linux/rcupdate.h
  @@ -197,6 +197,26 @@ void call_rcu_sched(struct rcu_head *head,
   
   void synchronize_sched(void);
   
  +/**
   + * call_rcu_tasks() - Queue an RCU callback for invocation after a task-based grace period
  + * @head: structure to be used for queueing the RCU updates.
  + * @func: actual callback function to be invoked after the grace period
  + *
  + * The callback function will be invoked some time after a full grace
  + * period elapses, in other words after all currently executing RCU
  + * read-side critical sections have completed. call_rcu_tasks() assumes
  + * that the read-side critical sections end at a voluntary context
  + * switch (not a preemption!), entry into idle, or transition to usermode
  + * execution.  As such, there are no read-side primitives analogous to
  + * rcu_read_lock() and rcu_read_unlock() because this primitive is intended
  + * to determine that all tasks have passed through a safe state, not so
   + * much for data-structure synchronization.
  + *
  + * See the description of call_rcu() for more detailed information on
  + * memory ordering guarantees.
  + */
  +void call_rcu_tasks(struct rcu_head *head, void (*func)(struct rcu_head 
  *head));
  +
   #ifdef CONFIG_PREEMPT_RCU
   
   void __rcu_read_lock(void);
  @@ -294,6 +314,22 @@ static inline void rcu_user_hooks_switch(struct 
  task_struct *prev,
  rcu_irq_exit(); \
  } while (0)
   
  +/*
  + * Note a voluntary context switch for RCU-tasks benefit.  This is a
  + * macro rather than an inline function to avoid #include hell.
  + */
  +#ifdef CONFIG_TASKS_RCU
  +#define rcu_note_voluntary_context_switch(t) \
  +   do { \
  +   preempt_disable(); /* Exclude synchronize_sched(); */ \
   +   if ((t)->rcu_tasks_holdout) \
   +   smp_store_release(&(t)->rcu_tasks_holdout, 0); \
  +   preempt_enable(); \
  +   } while (0)
  +#else /* #ifdef CONFIG_TASKS_RCU */
  +#define rcu_note_voluntary_context_switch(t)   do { } while (0)
  +#endif /* #else #ifdef CONFIG_TASKS_RCU */
  +
   #if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) || 
  

Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()

2014-07-31 Thread Peter Zijlstra
On Thu, Jul 31, 2014 at 09:09:24AM -0700, Paul E. McKenney wrote:
 Well, that is one of the questions in the 0/10 cover letter.  If it turns
 out to be necessary to worry about idle-task trampolines, it should be
 possible to avoid hammering all idle CPUs in the common case.  Though maybe
 battery-powered devices won't need RCU-tasks.

So do we really need this to be a full blown RCU implementation? Could
we not live with something like a task_barrier() function that
guarantees the same as this rcu_task_synchronize() or somesuch?

So for rostedt's case he could patch out all trampolines, call
task_barrier() and then free them.

We can make task_barrier() force each cpu to resched once -- which, if
we do that after the tasks have all scheduled once, could be the empty
set.

It also pushes the cost of this machinery into the caller of
task_barrier() and not in some random kthread (yet another one).

Now, if only task_work would work for kernel threads, then we could do
something far nicer...




Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()

2014-07-31 Thread Oleg Nesterov
Again, sorry, I didn't read the patches yet, just noticed your discussion...

On 07/31, Paul E. McKenney wrote:

 On Thu, Jul 31, 2014 at 03:30:12PM +0800, Lai Jiangshan wrote:

  + t->rcu_tasks_nvcsw = ACCESS_ONCE(t->nvcsw);
  + t->rcu_tasks_holdout = 1;
  + list_add(&t->rcu_tasks_holdout_list,
  +  &rcu_tasks_holdouts);
 
  I think get_task_struct() is needed here to avoid the task disappears.

 Hmmm...  Let's see...

 Looks like get_task_struct() does a blind atomic increment of ->usage.
 And put_task_struct() does an atomic_dec_and_test().  So one question
 is what prevents us from doing get_task_struct() after the final
 put_task_struct() has pushed ->usage down to zero?

 Hopefully there is a grace period in there somewhere, otherwise it will
 be necessary to take the task-list lock, which I would like to avoid.

 Looks like the call_rcu() of delayed_put_task_struct() in release_task()
 might be doing this.

Yes, exactly, so get_task_struct() is always fine as long as task_struct
itself is protected by RCU.

But can't we avoid get_task_struct()? This can pin a lot of task_struct's.
Can't we just add list_del_rcu(holdout_list) into __unhash_process() ?

We only need to ensure that list_add() above can't race with that list_del(),
perhaps we can tolerate lock_task_sighand() ?

Oleg.



Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()

2014-07-31 Thread Paul E. McKenney
On Thu, Jul 31, 2014 at 06:20:30PM +0200, Peter Zijlstra wrote:
 On Thu, Jul 31, 2014 at 09:09:24AM -0700, Paul E. McKenney wrote:
  Well, that is one of the questions in the 0/10 cover letter.  If it turns
  out to be necessary to worry about idle-task trampolines, it should be
  possible to avoid hammering all idle CPUs in the common case.  Though maybe
  battery-powered devices won't need RCU-tasks.
 
 So do we really need this to be a full blown RCU implementation? Could
 we not live with something like a task_barrier() function that
 guarantees the same as this rcu_task_synchronize() or somesuch?
 
 So for rostedt's case he could patch out all trampolines, call
 task_barrier() and then free them.
 
 We can make task_barrier() force each cpu to resched once -- which, if
 we do that after the tasks have all scheduled once, could be the empty
 set.
 
 It also pushes the cost of this machinery into the caller of
 task_barrier() and not in some random kthread (yet another one).

The idea here is to avoid having the kthread and to avoid providing
callbacks, but to instead do the work currently done by the kthread
in a synchronous function called by the updater?

My concern with this approach is that I bet that Steven will want
to have multiple concurrent calls to remove trampolines, and if I
need to support that efficiently, I end up with more-painful counter
tricks instead of the current callbacks.

Or am I confused about what you are suggesting?

 Now, if only task_work would work for kernel threads, then we could do
 something far nicer...

OK, I'll bite...  What is task_work?  I am guessing that you are not
talking about struct ql4_task_data in drivers/scsi/qla4xxx/ql4_def.h.  ;-)

Thanx, Paul



Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()

2014-07-31 Thread Peter Zijlstra
On Thu, Jul 31, 2014 at 09:47:39AM -0700, Paul E. McKenney wrote:

 The idea here is to avoid having the kthread and to avoid providing
 callbacks, but to instead do the work currently done by the kthread
 in a synchronous function called by the updater?

yep.

 My concern with this approach is that I bet that Steven will want
 to have multiple concurrent calls to remove trampolines, and if I
 need to support that efficiently, I end up with more-painful counter
 tricks instead of the current callbacks.
 
 Or am I confused about what you are suggesting?

Nope, don't think you are.

Typically things like ftrace are global and so we don't end up
with concurrency like that, but who knows, we'll have to wait for steve
to get back.

  Now, if only task_work would work for kernel threads, then we could do
  something far nicer...
 
 OK, I'll bite...  What is task_work?  I am guessing that you are not
 talking about struct ql4_task_data in drivers/scsi/qla4xxx/ql4_def.h.  ;-)

kernel/task_work.c include/linux/task_work.h

it's a worklet run on return to userspace or at exit -- i.e. safe points for
our tasks.

problem of course is that kernel threads never do the userspace part and
typically do not do the exit thing either.

But the idea was, do the task scan, add a task_work to each that needs
attention, have the work call complete and then wait for the completion.
Avoids the polling entirely.

If only kernel threads..




Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()

2014-07-31 Thread Paul E. McKenney
On Thu, Jul 31, 2014 at 06:31:38PM +0200, Oleg Nesterov wrote:
 Again, sorry, I didn't read the patches yet, just noticed your discussion...
 
 On 07/31, Paul E. McKenney wrote:
 
  On Thu, Jul 31, 2014 at 03:30:12PM +0800, Lai Jiangshan wrote:
 
+   t->rcu_tasks_nvcsw = ACCESS_ONCE(t->nvcsw);
+   t->rcu_tasks_holdout = 1;
+   list_add(&t->rcu_tasks_holdout_list,
+&rcu_tasks_holdouts);
  
   I think get_task_struct() is needed here to avoid the task disappears.
 
  Hmmm...  Let's see...
 
  Looks like get_task_struct() does a blind atomic increment of ->usage.
  And put_task_struct() does an atomic_dec_and_test().  So one question
  is what prevents us from doing get_task_struct() after the final
  put_task_struct() has pushed ->usage down to zero?
 
  Hopefully there is a grace period in there somewhere, otherwise it will
  be necessary to take the task-list lock, which I would like to avoid.
 
  Looks like the call_rcu() of delayed_put_task_struct() in release_task()
  might be doing this.
 
 Yes, exactly, so get_task_struct() is always fine as long as task_struct
 itself is protected by RCU.
 
 But can't we avoid get_task_struct()? This can pin a lot of task_struct's.
 Can't we just add list_del_rcu(holdout_list) into __unhash_process() ?

If I add the list_del_rcu() there, then I am back to a concurrent list,
which I would like to avoid.  Don't get me wrong, it was fun playing with
the list-locked stuff, but best to avoid it if we can.

Also, the current implementation implicitly locks down the task_structs
via the exit path.  The nice thing about using get_task_struct to lock
them down is that -only- the task_struct itself is locked down -- the
task can be reaped and so on.  In contrast, the current exit_rcu_tasks()
approach delays the semantic exit instead of just hanging onto the
underlying task_struct a bit longer.

 We only need to ensure that list_add() above can't race with that list_del(),
 perhaps we can tolerate lock_task_sighand() ?

I am worried about a task that does a voluntary context switch, then exits.
This could result in rcu_tasks_kthread() and __unhash_process() both
wanting to dequeue at the same time, right?

Thanx, Paul



Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()

2014-07-31 Thread Oleg Nesterov
On 07/31, Paul E. McKenney wrote:

 On Thu, Jul 31, 2014 at 06:31:38PM +0200, Oleg Nesterov wrote:

  But can't we avoid get_task_struct()? This can pin a lot of task_struct's.
  Can't we just add list_del_rcu(holdout_list) into __unhash_process() ?

 If I add the list_del_rcu() there, then I am back to a concurrent list,
 which I would like to avoid.  Don't get me wrong, it was fun playing with
 the list-locked stuff, but best to avoid it if we can.

OK,

 The nice thing about using get_task_struct to lock
 them down is that -only- the task_struct itself is locked down -- the
 task can be reaped and so on.

I understand. but otoh it would be nice to not pin this memory if the
task was already (auto)reaped.

And afaics the number of pinned task_struct's is not bounded. In theory
it is not even limited by, say, PID_MAX_LIMIT. A thread can exit and reap
itself right after get_task_struct() but create another running thread
which can be noticed by rcu_tasks_kthread() too.

  We only need to ensure that list_add() above can't race with that 
  list_del(),
  perhaps we can tolerate lock_task_sighand() ?

 I am worried about a task that does a voluntary context switch, then exits.
 This could results in rcu_tasks_kthread() and __unhash_process() both
 wanting to dequeue at the same time, right?

Oh yes, I was very wrong. And we do not want to abuse tasklist_lock...

OK, let me try to read the patch first.

Oleg.



Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()

2014-07-31 Thread Paul E. McKenney
On Thu, Jul 31, 2014 at 07:27:52PM +0200, Oleg Nesterov wrote:
 On 07/31, Paul E. McKenney wrote:
 
  On Thu, Jul 31, 2014 at 06:31:38PM +0200, Oleg Nesterov wrote:
 
   But can't we avoid get_task_struct()? This can pin a lot of task_struct's.
   Can't we just add list_del_rcu(holdout_list) into __unhash_process() ?
 
  If I add the list_del_rcu() there, then I am back to a concurrent list,
  which I would like to avoid.  Don't get me wrong, it was fun playing with
  the list-locked stuff, but best to avoid it if we can.
 
 OK,
 
  The nice thing about using get_task_struct to lock
  them down is that -only- the task_struct itself is locked down -- the
  task can be reaped and so on.
 
 I understand. but otoh it would be nice to not pin this memory if the
 task was already (auto)reaped.
 
 And afaics the number of pinned task_struct's is not bounded. In theory
 it is not even limited by, say, PID_MAX_LIMIT. A thread can exit and reap
 itself right after get_task_struct() but create another running thread
 which can be noticed by rcu_tasks_kthread() too.

Good point!  Maybe this means that I need to have rcu_tasks_kthread()
be more energetic if memory runs low, perhaps via an OOM handler.
Would that help?

   We only need to ensure that list_add() above can't race with that 
   list_del(),
   perhaps we can tolerate lock_task_sighand() ?
 
  I am worried about a task that does a voluntary context switch, then exits.
   This could result in rcu_tasks_kthread() and __unhash_process() both
  wanting to dequeue at the same time, right?
 
 Oh yes, I was very wrong. And we do not want to abuse tasklist_lock...
 
 OK, let me try to read the patch first.

Not a problem, looking forward to your feedback!

Thanx, Paul



Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()

2014-07-31 Thread Lai Jiangshan
On 08/01/2014 12:09 AM, Paul E. McKenney wrote:

 
 +   /*
 +* There were callbacks, so we need to wait for an
 +* RCU-tasks grace period.  Start off by scanning
 +* the task list for tasks that are not already
 +* voluntarily blocked.  Mark these tasks and make
 +* a list of them in rcu_tasks_holdouts.
 +*/
 +   rcu_read_lock();
 +   for_each_process_thread(g, t) {
+   if (t != current && ACCESS_ONCE(t->on_rq) &&
+   !is_idle_task(t)) {

 What happens when the trampoline is on the idle task?

 I think we need to use schedule_on_each_cpu() to replace one of
 the synchronize_sched() in this function. (or other stuff which can
 cause real schedule for *ALL* online CPUs).
 
 Well, that is one of the questions in the 0/10 cover letter.  If it turns
 out to be necessary to worry about idle-task trampolines, it should be
 possible to avoid hammering all idle CPUs in the common case.  Though maybe
 battery-powered devices won't need RCU-tasks.
 

Trampolines on a NO_HZ idle CPU can run arbitrarily long (for example, an
SMI happens inside the trampoline).  So only a real schedule on the idle
CPU is reliable to me.


Re: [PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()

2014-07-31 Thread Paul E. McKenney
On Fri, Aug 01, 2014 at 08:53:38AM +0800, Lai Jiangshan wrote:
 On 08/01/2014 12:09 AM, Paul E. McKenney wrote:
 
  
  + /*
  +  * There were callbacks, so we need to wait for an
  +  * RCU-tasks grace period.  Start off by scanning
  +  * the task list for tasks that are not already
  +  * voluntarily blocked.  Mark these tasks and make
  +  * a list of them in rcu_tasks_holdouts.
  +  */
  + rcu_read_lock();
  + for_each_process_thread(g, t) {
 + if (t != current && ACCESS_ONCE(t->on_rq) &&
 + !is_idle_task(t)) {
 
  What happens when the trampoline is on the idle task?
 
  I think we need to use schedule_on_each_cpu() to replace one of
  the synchronize_sched() in this function. (or other stuff which can
  cause real schedule for *ALL* online CPUs).
  
  Well, that is one of the questions in the 0/10 cover letter.  If it turns
  out to be necessary to worry about idle-task trampolines, it should be
  possible to avoid hammering all idle CPUs in the common case.  Though maybe
  battery-powered devices won't need RCU-tasks.
 
 Trampolines on a NO_HZ idle CPU can run arbitrarily long (for example, an
 SMI happens inside the trampoline).  So only a real schedule on the idle
 CPU is reliable to me.

You might well be right, but first let's see if Steven needs this to
work in the idle task to begin with.  If he doesn't, then there is no
point in worrying about it.  If he does, I bet I can come up with a
trick or two.  ;-)

Thanx, Paul



[PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()

2014-07-30 Thread Paul E. McKenney
From: "Paul E. McKenney" 

This commit adds a new RCU-tasks flavor of RCU, which provides
call_rcu_tasks().  This RCU flavor's quiescent states are voluntary
context switch (not preemption!), userspace execution, and the idle loop.
Note that unlike other RCU flavors, these quiescent states occur in tasks,
not necessarily CPUs.  Includes fixes from Steven Rostedt.

This RCU flavor is assumed to have very infrequent latency-tolerant
updaters.  This assumption permits significant simplifications, including
a single global callback list protected by a single global lock, along
with a single linked list containing all tasks that have not yet passed
through a quiescent state.  If experience shows this assumption to be
incorrect, the required additional complexity will be added.

Suggested-by: Steven Rostedt 
Signed-off-by: Paul E. McKenney 
---
 include/linux/init_task.h |   9 +++
 include/linux/rcupdate.h  |  36 ++
 include/linux/sched.h |  23 +++
 init/Kconfig  |  10 +++
 kernel/rcu/tiny.c |   2 +
 kernel/rcu/tree.c |   2 +
 kernel/rcu/update.c   | 164 ++
 7 files changed, 235 insertions(+), 11 deletions(-)

diff --git a/include/linux/init_task.h b/include/linux/init_task.h
index 6df7f9fe0d01..78715ea7c30c 100644
--- a/include/linux/init_task.h
+++ b/include/linux/init_task.h
@@ -124,6 +124,14 @@ extern struct group_info init_groups;
 #else
 #define INIT_TASK_RCU_PREEMPT(tsk)
 #endif
+#ifdef CONFIG_TASKS_RCU
+#define INIT_TASK_RCU_TASKS(tsk)   \
+   .rcu_tasks_holdout = false, \
+   .rcu_tasks_holdout_list =   \
+   LIST_HEAD_INIT(tsk.rcu_tasks_holdout_list),
+#else
+#define INIT_TASK_RCU_TASKS(tsk)
+#endif
 
 extern struct cred init_cred;
 
@@ -231,6 +239,7 @@ extern struct task_group root_task_group;
INIT_FTRACE_GRAPH   \
INIT_TRACE_RECURSION\
INIT_TASK_RCU_PREEMPT(tsk)  \
+   INIT_TASK_RCU_TASKS(tsk)\
INIT_CPUSET_SEQ(tsk)\
INIT_RT_MUTEXES(tsk)\
INIT_VTIME(tsk) \
diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 6a94cc8b1ca0..05656b504295 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -197,6 +197,26 @@ void call_rcu_sched(struct rcu_head *head,
 
 void synchronize_sched(void);
 
+/**
+ * call_rcu_tasks() - Queue an RCU callback for invocation after a task-based grace period
+ * @head: structure to be used for queueing the RCU updates.
+ * @func: actual callback function to be invoked after the grace period
+ *
+ * The callback function will be invoked some time after a full grace
+ * period elapses, in other words after all currently executing RCU
+ * read-side critical sections have completed. call_rcu_tasks() assumes
+ * that the read-side critical sections end at a voluntary context
+ * switch (not a preemption!), entry into idle, or transition to usermode
+ * execution.  As such, there are no read-side primitives analogous to
+ * rcu_read_lock() and rcu_read_unlock() because this primitive is intended
+ * to determine that all tasks have passed through a safe state, not so
+ * much for data-structure synchronization.
+ *
+ * See the description of call_rcu() for more detailed information on
+ * memory ordering guarantees.
+ */
+void call_rcu_tasks(struct rcu_head *head, void (*func)(struct rcu_head 
*head));
+
 #ifdef CONFIG_PREEMPT_RCU
 
 void __rcu_read_lock(void);
@@ -294,6 +314,22 @@ static inline void rcu_user_hooks_switch(struct 
task_struct *prev,
rcu_irq_exit(); \
} while (0)
 
+/*
+ * Note a voluntary context switch for RCU-tasks benefit.  This is a
+ * macro rather than an inline function to avoid #include hell.
+ */
+#ifdef CONFIG_TASKS_RCU
+#define rcu_note_voluntary_context_switch(t) \
+   do { \
+   preempt_disable(); /* Exclude synchronize_sched(); */ \
+   if ((t)->rcu_tasks_holdout) \
+   smp_store_release(&(t)->rcu_tasks_holdout, 0); \
+   preempt_enable(); \
+   } while (0)
+#else /* #ifdef CONFIG_TASKS_RCU */
+#define rcu_note_voluntary_context_switch(t)   do { } while (0)
+#endif /* #else #ifdef CONFIG_TASKS_RCU */
+
 #if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) || 
defined(CONFIG_SMP)
 bool __rcu_is_watching(void);
 #endif /* #if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) || 
defined(CONFIG_SMP) */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 306f4f0c987a..3cf124389ec7 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1273,6 

[PATCH v2 tip/core/rcu 01/10] rcu: Add call_rcu_tasks()

2014-07-30 Thread Paul E. McKenney
From: Paul E. McKenney paul...@linux.vnet.ibm.com

This commit adds a new RCU-tasks flavor of RCU, which provides
call_rcu_tasks().  This RCU flavor's quiescent states are voluntary
context switch (not preemption!), userspace execution, and the idle loop.
Note that unlike other RCU flavors, these quiescent states occur in tasks,
not necessarily CPUs.  Includes fixes from Steven Rostedt.

This RCU flavor is assumed to have very infrequent latency-tolerant
updaters.  This assumption permits significant simplifications, including
a single global callback list protected by a single global lock, along
with a single linked list containing all tasks that have not yet passed
through a quiescent state.  If experience shows this assumption to be
incorrect, the required additional complexity will be added.

Suggested-by: Steven Rostedt rost...@goodmis.org
Signed-off-by: Paul E. McKenney paul...@linux.vnet.ibm.com
---
 include/linux/init_task.h |   9 +++
 include/linux/rcupdate.h  |  36 ++
 include/linux/sched.h |  23 +++
 init/Kconfig  |  10 +++
 kernel/rcu/tiny.c |   2 +
 kernel/rcu/tree.c |   2 +
 kernel/rcu/update.c   | 164 ++
 7 files changed, 235 insertions(+), 11 deletions(-)

diff --git a/include/linux/init_task.h b/include/linux/init_task.h
index 6df7f9fe0d01..78715ea7c30c 100644
--- a/include/linux/init_task.h
+++ b/include/linux/init_task.h
@@ -124,6 +124,14 @@ extern struct group_info init_groups;
 #else
 #define INIT_TASK_RCU_PREEMPT(tsk)
 #endif
+#ifdef CONFIG_TASKS_RCU
+#define INIT_TASK_RCU_TASKS(tsk)   \
+   .rcu_tasks_holdout = false, \
+   .rcu_tasks_holdout_list =   \
+   LIST_HEAD_INIT(tsk.rcu_tasks_holdout_list),
+#else
+#define INIT_TASK_RCU_TASKS(tsk)
+#endif
 
 extern struct cred init_cred;
 
@@ -231,6 +239,7 @@ extern struct task_group root_task_group;
INIT_FTRACE_GRAPH   \
INIT_TRACE_RECURSION\
INIT_TASK_RCU_PREEMPT(tsk)  \
+   INIT_TASK_RCU_TASKS(tsk)\
INIT_CPUSET_SEQ(tsk)\
INIT_RT_MUTEXES(tsk)\
INIT_VTIME(tsk) \
diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 6a94cc8b1ca0..05656b504295 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -197,6 +197,26 @@ void call_rcu_sched(struct rcu_head *head,
 
 void synchronize_sched(void);
 
+/**
+ * call_rcu_tasks() - Queue an RCU callback for invocation after a task-based grace period
+ * @head: structure to be used for queueing the RCU updates.
+ * @func: actual callback function to be invoked after the grace period
+ *
+ * The callback function will be invoked some time after a full grace
+ * period elapses, in other words after all currently executing RCU
+ * read-side critical sections have completed. call_rcu_tasks() assumes
+ * that the read-side critical sections end at a voluntary context
+ * switch (not a preemption!), entry into idle, or transition to usermode
+ * execution.  As such, there are no read-side primitives analogous to
+ * rcu_read_lock() and rcu_read_unlock() because this primitive is intended
+ * to determine that all tasks have passed through a safe state, not so
+ * much for data-structure synchronization.
+ *
+ * See the description of call_rcu() for more detailed information on
+ * memory ordering guarantees.
+ */
+void call_rcu_tasks(struct rcu_head *head, void (*func)(struct rcu_head *head));
+
 #ifdef CONFIG_PREEMPT_RCU
 
 void __rcu_read_lock(void);
@@ -294,6 +314,22 @@ static inline void rcu_user_hooks_switch(struct task_struct *prev,
rcu_irq_exit(); \
} while (0)
 
+/*
+ * Note a voluntary context switch for RCU-tasks benefit.  This is a
+ * macro rather than an inline function to avoid #include hell.
+ */
+#ifdef CONFIG_TASKS_RCU
+#define rcu_note_voluntary_context_switch(t) \
+   do { \
+   preempt_disable(); /* Exclude synchronize_sched(); */ \
+   if ((t)->rcu_tasks_holdout) \
+   smp_store_release(&(t)->rcu_tasks_holdout, 0); \
+   preempt_enable(); \
+   } while (0)
+#else /* #ifdef CONFIG_TASKS_RCU */
+#define rcu_note_voluntary_context_switch(t)   do { } while (0)
+#endif /* #else #ifdef CONFIG_TASKS_RCU */
+
 #if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) || defined(CONFIG_SMP)
 bool __rcu_is_watching(void);
 #endif /* #if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) || defined(CONFIG_SMP) */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 306f4f0c987a..3cf124389ec7 100644
---