__percpu_counter_add() may be called from a softirq/hardirq handler (for
example, blk_mq_queue_exit() is typically called from hardirq/softirq
context), so local interrupts must be disabled while the per-cpu counter
is updated, otherwise counts may be lost.
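As a rough sketch of the lost update with the current preempt_disable()-only
protection (the interrupt-side call is just an example of an irq-context
user, e.g. the blk_mq_queue_exit() path):

	process context (preemption off,        hardirq on the same CPU
	local irqs still enabled)
	------------------------------------    ------------------------------------
	__percpu_counter_add(fbc, 1, batch)
	  count = __this_cpu_read(*fbc->counters) + 1;	/* reads V */
	                                         __percpu_counter_add(fbc, -1, batch)
	                                           /* per-cpu slot becomes V - 1 */
	  __this_cpu_write(*fbc->counters, count);	/* writes V + 1 */

The decrement done by the interrupt handler is overwritten, so one count is
lost: preempt_disable() does not prevent the local interrupt from running in
the middle of this read-modify-write.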
This patch fixes a problem where 'rmmod null_blk' may hang in
blk_cleanup_queue() because of miscounting of
request_queue->mq_usage_counter.

Cc: Paul Gortmaker <paul.gortma...@windriver.com>
Cc: Andrew Morton <a...@linux-foundation.org>
Cc: Shaohua Li <s...@fusionio.com>
Cc: Jens Axboe <ax...@kernel.dk>
Cc: Fan Du <fan...@windriver.com>
Signed-off-by: Ming Lei <tom.leim...@gmail.com>
---
 lib/percpu_counter.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/lib/percpu_counter.c b/lib/percpu_counter.c
index 7473ee3..2b87bc1 100644
--- a/lib/percpu_counter.c
+++ b/lib/percpu_counter.c
@@ -75,19 +75,19 @@ EXPORT_SYMBOL(percpu_counter_set);
 
 void __percpu_counter_add(struct percpu_counter *fbc, s64 amount, s32 batch)
 {
 	s64 count;
+	unsigned long flags;
 
-	preempt_disable();
+	raw_local_irq_save(flags);
 	count = __this_cpu_read(*fbc->counters) + amount;
 	if (count >= batch || count <= -batch) {
-		unsigned long flags;
-		raw_spin_lock_irqsave(&fbc->lock, flags);
+		raw_spin_lock(&fbc->lock);
 		fbc->count += count;
-		raw_spin_unlock_irqrestore(&fbc->lock, flags);
+		raw_spin_unlock(&fbc->lock);
 		__this_cpu_write(*fbc->counters, 0);
 	} else {
 		__this_cpu_write(*fbc->counters, count);
 	}
-	preempt_enable();
+	raw_local_irq_restore(flags);
 }
 EXPORT_SYMBOL(__percpu_counter_add);
--
1.7.9.5