On Thu, Aug 14, 2014 at 05:42:06PM -0400, Pranith Kumar wrote:
> On Mon, Aug 11, 2014 at 6:48 PM, Paul E. McKenney
> <paul...@linux.vnet.ibm.com> wrote:
> > From: "Paul E. McKenney" <paul...@linux.vnet.ibm.com>
> >
> > The current RCU-tasks implementation uses strict polling to detect
> > callback arrivals.  This works quite well, but is not so good for
> > energy efficiency.  This commit therefore replaces the strict polling
> > with a wait queue.
> >
> > Signed-off-by: Paul E. McKenney <paul...@linux.vnet.ibm.com>
> > ---
> >  kernel/rcu/update.c | 14 ++++++++++++--
> >  1 file changed, 12 insertions(+), 2 deletions(-)
> >
> > diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
> > index f1535404a79e..1256a900cd01 100644
> > --- a/kernel/rcu/update.c
> > +++ b/kernel/rcu/update.c
> > @@ -368,6 +368,7 @@ early_initcall(check_cpu_stall_init);
> >  /* Global list of callbacks and associated lock. */
> >  static struct rcu_head *rcu_tasks_cbs_head;
> >  static struct rcu_head **rcu_tasks_cbs_tail = &rcu_tasks_cbs_head;
> > +static DECLARE_WAIT_QUEUE_HEAD(rcu_tasks_cbs_wq);
> >  static DEFINE_RAW_SPINLOCK(rcu_tasks_cbs_lock);
> >
> >  /* Track exiting tasks in order to allow them to be waited for. */
> > @@ -381,13 +382,17 @@ module_param(rcu_task_stall_timeout, int, 0644);
> >  void call_rcu_tasks(struct rcu_head *rhp, void (*func)(struct rcu_head *rhp))
> >  {
> >         unsigned long flags;
> > +       bool needwake;
> >
> >         rhp->next = NULL;
> >         rhp->func = func;
> >         raw_spin_lock_irqsave(&rcu_tasks_cbs_lock, flags);
> > +       needwake = !rcu_tasks_cbs_head;
> >         *rcu_tasks_cbs_tail = rhp;
> >         rcu_tasks_cbs_tail = &rhp->next;
> >         raw_spin_unlock_irqrestore(&rcu_tasks_cbs_lock, flags);
> > +       if (needwake)
> > +               wake_up(&rcu_tasks_cbs_wq);
> >  }
> >  EXPORT_SYMBOL_GPL(call_rcu_tasks);
> 
> I think you want
> 
> needwake = !!rcu_tasks_cbs_head;
> 
> otherwise it will wake up when rcu_tasks_cbs_head is null, no?

Well, that is exactly what we want.  Note that we do the test -before-
the enqueue.  This means that we do the wakeup if the list -was-
empty before the enqueue, which is exactly the case where the task
might be asleep without having already been sent a wakeup.

Assuming that wakeups are reliably delivered, of course.  But if they
are not reliably delivered, that is a bug that needs to be fixed.
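
In case the pattern is easier to see outside the kernel, here is a
minimal userspace sketch of the same "wake only on the empty->non-empty
transition" idea.  This is just an analogue under stated assumptions:
pthreads stand in for the raw spinlock and wait queue, and the names
(enqueue(), dequeue_all(), struct node) are made up for illustration:

	#include <pthread.h>
	#include <stddef.h>

	struct node {
		struct node *next;
	};

	static struct node *cbs_head;
	static struct node **cbs_tail = &cbs_head;
	static pthread_mutex_t cbs_lock = PTHREAD_MUTEX_INITIALIZER;
	static pthread_cond_t cbs_cv = PTHREAD_COND_INITIALIZER;

	/* Producer: wake the consumer only if the list -was- empty. */
	static void enqueue(struct node *np)
	{
		int needwake;

		np->next = NULL;
		pthread_mutex_lock(&cbs_lock);
		needwake = !cbs_head;	/* test -before- the enqueue */
		*cbs_tail = np;
		cbs_tail = &np->next;
		pthread_mutex_unlock(&cbs_lock);
		if (needwake)
			pthread_cond_signal(&cbs_cv);
	}

	/* Consumer: sleep until non-empty, then detach the whole list. */
	static struct node *dequeue_all(void)
	{
		struct node *list;

		pthread_mutex_lock(&cbs_lock);
		while (!cbs_head)
			pthread_cond_wait(&cbs_cv, &cbs_lock);
		list = cbs_head;
		cbs_head = NULL;
		cbs_tail = &cbs_head;
		pthread_mutex_unlock(&cbs_lock);
		return list;
	}

With needwake = !!cbs_head instead, the producer would signal only when
appending to an already-non-empty list, so the very first enqueue after
the consumer went to sleep would never wake it.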

                                                        Thanx, Paul

> > @@ -498,8 +503,12 @@ static int __noreturn rcu_tasks_kthread(void *arg)
> >
> >                 /* If there were none, wait a bit and start over. */
> >                 if (!list) {
> > -                       schedule_timeout_interruptible(HZ);
> > -                       WARN_ON(signal_pending(current));
> > +                       wait_event_interruptible(rcu_tasks_cbs_wq,
> > +                                                rcu_tasks_cbs_head);
> > +                       if (!rcu_tasks_cbs_head) {
> > +                               WARN_ON(signal_pending(current));
> > +                               schedule_timeout_interruptible(HZ/10);
> > +                       }
> >                         continue;
> >                 }
> >
> > @@ -605,6 +614,7 @@ static int __noreturn rcu_tasks_kthread(void *arg)
> >                         list = next;
> >                         cond_resched();
> >                 }
> > +               schedule_timeout_uninterruptible(HZ/10);
> >         }
> >  }
> >
> > --
> > 1.8.1.5
> >
> 
> 
> 
> -- 
> Pranith
> 
