If one vector is spread across several CPUs, usually the interrupt is only handled on one of these CPUs. Meanwhile, IO can be issued to the single hw queue from different CPUs concurrently, which can easily cause an IRQ flood and CPU lockup.
Pass IRQF_RESCUE_THREAD in the above case to ask genirq to handle the interrupt in the rescue thread when an IRQ flood is detected.

Cc: Long Li <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Keith Busch <[email protected]>
Cc: Jens Axboe <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Sagi Grimberg <[email protected]>
Cc: John Garry <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Hannes Reinecke <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Ming Lei <[email protected]>
---
 drivers/nvme/host/pci.c | 17 +++++++++++++++--
 1 file changed, 15 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 45a80b708ef4..0b8d49470230 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1501,8 +1501,21 @@ static int queue_request_irq(struct nvme_queue *nvmeq)
 		return pci_request_irq(pdev, nvmeq->cq_vector, nvme_irq_check,
 				nvme_irq, nvmeq, "nvme%dq%d", nr, nvmeq->qid);
 	} else {
-		return pci_request_irq(pdev, nvmeq->cq_vector, nvme_irq,
-				NULL, nvmeq, "nvme%dq%d", nr, nvmeq->qid);
+		char *devname;
+		const struct cpumask *mask;
+		unsigned long irqflags = IRQF_SHARED;
+		int vector = pci_irq_vector(pdev, nvmeq->cq_vector);
+
+		devname = kasprintf(GFP_KERNEL, "nvme%dq%d", nr, nvmeq->qid);
+		if (!devname)
+			return -ENOMEM;
+
+		mask = pci_irq_get_affinity(pdev, nvmeq->cq_vector);
+		if (mask && cpumask_weight(mask) > 1)
+			irqflags |= IRQF_RESCUE_THREAD;
+
+		return request_threaded_irq(vector, nvme_irq, NULL, irqflags,
+				devname, nvmeq);
 	}
 }
-- 
2.20.1

