On Mon, May 26, 2014 at 05:59:44PM +0200, Peter Zijlstra wrote:
> On Sun, May 25, 2014 at 04:29:47PM +0200, Frederic Weisbecker wrote:
> > An irq work can be handled from two places: from the tick if the work
> > carries the "lazy" flag and the tick is periodic, or from a self-IPI.
> > 
> > We merge all these works in a single list and use a per-CPU latch
> > to avoid raising a self-IPI when one is already pending.
> > 
> > Now we could do away with this ugly latch if the list were made of
> > non-lazy works only. Just enqueueing a work on an empty list would be
> > enough to know if we need to raise an IPI or not.
> > 
> > Also we are going to implement remote irq work queuing. Then the per-CPU
> > latch would need to become atomic in the global scope. That's too bad
> > because, here as well, just enqueueing a work on an empty list of
> > non-lazy works would be enough to know if we need to raise an IPI or not.
> > 
> > So let's take a way out of this: split the works in two distinct lists,
> > one for the works that can be handled by the next tick and another
> > one for those handled by the IPI. Just checking if the latter is empty
> > when we queue a new work is enough to know if we need to raise an IPI.
> 
> That ^
> 
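(For reference, with the two lists the enqueue path boils down to roughly
the below. This is a paraphrased sketch, not the literal patch; the point
is that llist_add() returning true when the list was previously empty is
what replaces the latch:)

static DEFINE_PER_CPU(struct llist_head, raised_list);
static DEFINE_PER_CPU(struct llist_head, lazy_list);

bool irq_work_queue(struct irq_work *work)
{
	unsigned long flags;

	/* Only queue if not already pending */
	if (!irq_work_claim(work))
		return false;

	/* Check dynticks safely */
	local_irq_save(flags);

	/* Lazy works wait for the tick, unless the tick is stopped */
	if ((work->flags & IRQ_WORK_LAZY) && !tick_nohz_tick_stopped()) {
		llist_add(&work->llnode, this_cpu_ptr(&lazy_list));
	} else {
		/* First work on an empty list: raise the self-IPI */
		if (llist_add(&work->llnode, this_cpu_ptr(&raised_list)))
			arch_irq_work_raise();
	}

	local_irq_restore(flags);

	return true;
}
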
> >  bool irq_work_queue(struct irq_work *work)
> >  {
> > +   unsigned long flags;
> > +
> >     /* Only queue if not already pending */
> >     if (!irq_work_claim(work))
> >             return false;
> >  
> > -   /* Queue the entry and raise the IPI if needed. */
> > -   preempt_disable();
> > +   /* Check dynticks safely */
> > +   local_irq_save(flags);
> 
> Does not mention this ^
> 
> 'sup?

Because it's really just a technical detail.

If we enqueue before checking whether the tick is stopped, we can avoid
disabling irqs: racing with an irq in between is fine, because the work is
already on the list, so whatever interrupt fires will find it.
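
For reference, the old single-list flow was roughly this (paraphrased
from the current code, not verbatim):

static DEFINE_PER_CPU(struct llist_head, irq_work_list);
static DEFINE_PER_CPU(int, irq_work_raised);

	/* Enqueue first: the work is visible to any interrupt from here */
	preempt_disable();
	llist_add(&work->llnode, this_cpu_ptr(&irq_work_list));

	/*
	 * ...then decide whether to raise; the latch bounds the damage
	 * of racing with an irq to one redundant self-IPI at worst.
	 */
	if (!(work->flags & IRQ_WORK_LAZY) || tick_nohz_tick_stopped()) {
		if (!this_cpu_cmpxchg(irq_work_raised, 0, 1))
			arch_irq_work_raise();
	}
	preempt_enable();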

But now that we enqueue _after_ the check, we can't afford an IRQ in
between: it could stop the tick right after we decided the tick would
handle the work, stranding a lazy work on a CPU whose tick won't fire.

Should I update the comments maybe?
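
Maybe something like (just a suggestion):

	/*
	 * Disable irqs so the tick can't get stopped between the
	 * tick_nohz_tick_stopped() check and the enqueue, which could
	 * otherwise strand a lazy work on a CPU whose tick won't fire.
	 */
	local_irq_save(flags);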

