I'll test next week, but 4.19 has the same problem. How do we fix that for 4.19?

Huacai
 
------------------ Original ------------------
From:  "Thomas Gleixner"<t...@linutronix.de>;
Date:  Thu, Feb 14, 2019 04:50 PM
To:  "Keith Busch"<keith.bu...@intel.com>;
Cc:  "Bjorn Helgaas"<helg...@kernel.org>; "Jens Axboe"<ax...@kernel.dk>; "Sagi 
Grimberg"<s...@grimberg.me>; "linux-pci"<linux-...@vger.kernel.org>; 
"LKML"<linux-kernel@vger.kernel.org>; 
"linux-nvme"<linux-n...@lists.infradead.org>; "Ming Lei"<ming....@redhat.com>; 
"linux-block"<linux-bl...@vger.kernel.org>; "Christoph Hellwig"<h...@lst.de>; 
"Huacai Chen"<che...@lemote.com>;
Subject:  Re: [PATCH V3 1/5] genirq/affinity: don't mark 'affd' as const
 
On Wed, 13 Feb 2019, Keith Busch wrote:

Cc+ Huacai Chen

> On Wed, Feb 13, 2019 at 10:41:55PM +0100, Thomas Gleixner wrote:
> > Btw, while I have your attention: an issue popped up recently related
> > to that affinity logic.
> > 
> > The current implementation fails when:
> > 
> >         /*
> >          * If there aren't any vectors left after applying the pre/post
> >          * vectors don't bother with assigning affinity.
> >          */
> >         if (nvecs == affd->pre_vectors + affd->post_vectors)
> >                 return NULL;
> > 
> > Now the discussion arose that in that case the affinity sets are not
> > allocated and filled in for the pre/post vectors, but somehow the
> > underlying device still works and later on triggers the warning in the
> > blk-mq code because the MSI entries do not have affinity information
> > attached.
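> > 
> > To make that concrete with nvme's numbers: the driver reserves
> > pre_vectors = 1 for the admin queue, so if only a single vector can
> > be allocated, nvecs == 1 == pre_vectors + post_vectors and the
> > function returns NULL before building any affinity set. A minimal
> > standalone sketch of that accounting (illustrative types only, not
> > the kernel structures):
> > 
> >         /* Illustrative stand-in for the pre/post fields of struct irq_affinity. */
> >         struct affd_sketch {
> >                 unsigned int pre_vectors;   /* reserved before the managed set */
> >                 unsigned int post_vectors;  /* reserved after the managed set */
> >         };
> > 
> >         /* Vectors that would actually get managed affinity masks. */
> >         static unsigned int managed_vectors(unsigned int nvecs,
> >                                             const struct affd_sketch *affd)
> >         {
> >                 /* nvecs == pre + post leaves 0 here, hence the NULL return. */
> >                 return nvecs - affd->pre_vectors - affd->post_vectors;
> >         }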
> >
> > Sure, we could make that work, but there are several issues:
> > 
> >     1) irq_create_affinity_masks() has another reason to return NULL:
> >        memory allocation fails.
> > 
> >     2) Does it make sense at all?
> > 
> > Right now the PCI allocator ignores the NULL return and proceeds without
> > setting any affinities. As a consequence nothing is managed and everything
> > happens to work.
> > 
> > But that it happens to work is more by chance than by design, and the
> > warning is bogus if this is an expected mode of operation.
> > 
> > We should address these points in some way.
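> > 
> > E.g. (hand-waving sketch only, to illustrate separating the two NULL
> > cases; nothing like this exists in the tree): the allocation failure
> > could become an ERR_PTR while "nothing left to manage" stays NULL, so
> > a caller could do
> > 
> >         masks = irq_create_affinity_masks(nvecs, affd);
> >         if (IS_ERR(masks))
> >                 return PTR_ERR(masks);  /* hard failure, e.g. -ENOMEM */
> >         if (!masks)
> >                 pr_debug("no vectors left to manage, using defaults\n");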
> 
> Ah, yes, that's a mistake in the nvme driver. It assumes IO queues are
> always on managed interrupts, but that's not true when only 1 vector
> could be allocated. This should be an appropriate fix for the warning:

Looks correct. Chen, can you please test that?

> ---
> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index 022ea1ee63f8..f2ccebe1c926 100644
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -506,7 +506,7 @@ static int nvme_pci_map_queues(struct blk_mq_tag_set *set)
>  		 * affinity), so use the regular blk-mq cpu mapping
>  		 */
>  		map->queue_offset = qoff;
> -		if (i != HCTX_TYPE_POLL)
> +		if (i != HCTX_TYPE_POLL && dev->num_vecs > 1)
>  			blk_mq_pci_map_queues(map, to_pci_dev(dev->dev), offset);
>  		else
>  			blk_mq_map_queues(map);
> --
>
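
The guard works because dev->num_vecs == 1 means the lone vector is
shared and unmanaged, so there is no PCI affinity information for
blk_mq_pci_map_queues() to read; blk_mq_map_queues() then applies the
default cpu-to-queue spreading. A hypothetical helper (names invented,
not driver code) showing the two paths the hunk selects between:

	static void map_sketch(struct blk_mq_queue_map *map,
			       struct pci_dev *pdev, unsigned int num_vecs,
			       int offset, bool is_poll)
	{
		if (!is_poll && num_vecs > 1)
			/* managed vectors exist: map from their masks */
			blk_mq_pci_map_queues(map, pdev, offset);
		else
			/* poll queues or one shared vector: default map */
			blk_mq_map_queues(map);
	}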
