On Mon, Apr 01, 2019 at 10:38:27AM +0200, Peter Zijlstra wrote:
> 
> +fweisbec, who did the remote bits
> 
> On Sat, Mar 30, 2019 at 01:10:28PM +1000, Nicholas Piggin wrote:
> > diff --git a/kernel/irq_work.c b/kernel/irq_work.c
> > index 6b7cdf17ccf8..f0e539d0f879 100644
> > --- a/kernel/irq_work.c
> > +++ b/kernel/irq_work.c
> > -/* Enqueue the irq work @work on the current CPU */
> > -bool irq_work_queue(struct irq_work *work)
> > +/*
> > + * Enqueue the irq_work @work on @cpu unless it's already pending
> > + * somewhere.
> > + *
> > + * Can be re-enqueued while the callback is still in progress.
> > + */
> > +bool irq_work_queue_on(struct irq_work *work, int cpu)
> >  {
> > +#ifndef CONFIG_SMP
> > +	return irq_work_queue(work);
> > +

I'd suggest using "if (!IS_ENABLED(CONFIG_SMP))" here to avoid the
large ifdeffery.
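Something like the following, perhaps. A rough sketch, untested; it
assumes the SMP-only symbols (the per-CPU raised_list and
arch_send_call_function_single_ipi()) stay visible to the compiler on
!CONFIG_SMP builds so the dead branch can be discarded:

bool irq_work_queue_on(struct irq_work *work, int cpu)
{
	/* On UP there is no remote CPU; fall back to the local queue. */
	if (!IS_ENABLED(CONFIG_SMP))
		return irq_work_queue(work);

	/* All work should have been flushed before going offline */
	WARN_ON_ONCE(cpu_is_offline(cpu));

	/* Only queue if not already pending */
	if (!irq_work_claim(work))
		return false;

	preempt_disable();
	if (cpu != smp_processor_id()) {
		/* Arch remote IPI send/receive backends aren't NMI safe */
		WARN_ON_ONCE(in_nmi());
		/* Raise the IPI only if the remote list was empty. */
		if (llist_add(&work->llnode, &per_cpu(raised_list, cpu)))
			arch_send_call_function_single_ipi(cpu);
	} else {
		__irq_work_queue(work);
	}
	preempt_enable();

	return true;
}

(The preempt_enable()/return true tail isn't visible in the hunk quoted
above; it's assumed here to match the usual pairing.)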
> > +#else /* #ifndef CONFIG_SMP */
> > +	/* All work should have been flushed before going offline */
> > +	WARN_ON_ONCE(cpu_is_offline(cpu));
> > +
> >  	/* Only queue if not already pending */
> >  	if (!irq_work_claim(work))
> >  		return false;
> >  
> > -	/* Queue the entry and raise the IPI if needed. */
> >  	preempt_disable();
> > -
> > -	/* If the work is "lazy", handle it from next tick if any */
> > -	if (work->flags & IRQ_WORK_LAZY) {
> > -		if (llist_add(&work->llnode, this_cpu_ptr(&lazy_list)) &&
> > -		    tick_nohz_tick_stopped())
> > -			arch_irq_work_raise();
> > -	} else {
> > -		if (llist_add(&work->llnode, this_cpu_ptr(&raised_list)))
> > -			arch_irq_work_raise();
> > -	}
> > -
> > +	if (cpu != smp_processor_id()) {
> > +		/* Arch remote IPI send/receive backend aren't NMI safe */
> > +		WARN_ON_ONCE(in_nmi());
> > +		if (llist_add(&work->llnode, &per_cpu(raised_list, cpu)))
> > +			arch_send_call_function_single_ipi(cpu);
> > +	} else
> > +		__irq_work_queue(work);

Also, perhaps rename __irq_work_queue() to irq_work_queue_local() to
make it instantly clearer to reviewers; see the sketch appended at the
end of this mail.

Other than those cosmetic changes,

Reviewed-by: Frederic Weisbecker <frede...@kernel.org>

Thanks.
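For completeness, a hypothetical sketch of the rename suggested above;
the body shown is simply the lazy/raised local enqueue logic that this
patch removes from irq_work_queue() in the hunk quoted earlier, and the
exact factoring in the patch may differ:

/* Enqueue @work on the local CPU (callers hold preemption disabled). */
static void irq_work_queue_local(struct irq_work *work)
{
	/* If the work is "lazy", handle it from next tick if any */
	if (work->flags & IRQ_WORK_LAZY) {
		if (llist_add(&work->llnode, this_cpu_ptr(&lazy_list)) &&
		    tick_nohz_tick_stopped())
			arch_irq_work_raise();
	} else {
		if (llist_add(&work->llnode, this_cpu_ptr(&raised_list)))
			arch_irq_work_raise();
	}
}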