On Wed, Mar 28, 2018 at 09:32:14AM +0200, Christoph Hellwig wrote:
> On Tue, Mar 27, 2018 at 09:39:08AM -0600, Keith Busch wrote:
> > -	return blk_mq_pci_map_queues(set, to_pci_dev(dev->dev), 0);
> > +	return blk_mq_pci_map_queues(set, to_pci_dev(dev->dev),
> > +			dev->num_vecs > 1);
>
> Can you turn this into:
>
> 	- return blk_mq_p
On Tue, Mar 27, 2018 at 09:39:08AM -0600, Keith Busch wrote:
> The admin and first IO queues shared the first irq vector, which has an
> affinity mask including cpu0. If a system allows cpu0 to be offlined,
> the admin queue may not be usable if no other CPUs in the affinity mask
> are online. This is a problem since unlike IO queues, there is only
> one admin queue t
> +static inline unsigned int nvme_ioq_vector(struct nvme_dev *dev,
> +		unsigned int qid)

No need for the inline here I think.

> +{
> +	/*
> +	 * A queue's vector matches the queue identifier unless the controller
> +	 * has only one vector available.
> +	 */
> +	r
From: Jianchao Wang
The admin and first IO queues shared the first irq vector, which has an
affinity mask including cpu0. If a system allows cpu0 to be offlined,
the admin queue may not be usable if no other CPUs in the affinity mask
are online. This is a problem since unlike IO queues, there is