On Thu, Jun 01, 2017 at 02:46:32PM +0200, Christoph Hellwig wrote:
> On Thu, Jun 01, 2017 at 03:36:50PM +0300, Rakesh Pandit wrote:
> > Also, Sagi pointed out that a user-space set_features ioctl fired in
> > the window after nvme removal can also result in this issue, which
> > seems to be correct.  I would prefer to keep this as it is and
> > introduce a similar check higher up in nvme_ioctl() instead, so that
> > we don't send sync commands if the queues are already killed.
> > 
> > Would you prefer a patch?  Thanks,
> 
> If we want to kill everyone we probably should do it in ->queue_rq.

It looks like ->queue_rq already does that, by checking nvmeq->cq_vector.
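
For reference, this is the check I mean -- a condensed paraphrase of the
4.12-era nvme_queue_rq() from drivers/nvme/host/pci.c, with command setup
and error unwinding elided (not the verbatim function):

	/*
	 * cq_vector is set to -1 when the queue is torn down, so any
	 * request that reaches ->queue_rq() on a dead queue fails fast
	 * instead of being issued to dead hardware.
	 */
	spin_lock_irq(&nvmeq->q_lock);
	if (unlikely(nvmeq->cq_vector < 0)) {
		spin_unlock_irq(&nvmeq->q_lock);
		return BLK_MQ_RQ_QUEUE_ERROR;
	}
	__nvme_submit_cmd(nvmeq, &cmnd);
	spin_unlock_irq(&nvmeq->q_lock);
	return BLK_MQ_RQ_QUEUE_OK;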

> Or is the block layer blocking you somewhere else?

blk-mq doesn't handle a dying queue in the I/O path.
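
To make that concrete -- a simplified sketch (from memory, not verbatim
blk-mq code) of why stopped hw queues leave requests stuck on a dead
controller:

	/*
	 * Sketch of the blk-mq run path: a stopped hctx is skipped
	 * outright, and there is no blk_queue_dying() check here, so
	 * requests queued against a dead controller sit there forever
	 * and their submitters block until the hw queues are restarted
	 * and ->queue_rq() gets a chance to fail the requests.
	 */
	static void sketch_run_hw_queue(struct blk_mq_hw_ctx *hctx)
	{
		if (blk_mq_hctx_stopped(hctx))
			return;		/* nothing ever dispatches */

		blk_mq_sched_dispatch_requests(hctx);	/* -> ->queue_rq() */
	}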

Maybe this is similar to commit 806f026f9b901eaf1a ("nvme: use
blk_mq_start_hw_queues() in nvme_kill_queues()"); it seems we need to do
the same for admin_q too.
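
For context, that commit made nvme_kill_queues() forcibly (re)start the
per-namespace hw queues so that stuck requests get dispatched and failed
in ->queue_rq(); roughly (paraphrased, not the verbatim hunk):

	blk_set_queue_dying(ns->queue);

	/*
	 * Forcibly start the hw queues: once running, ->queue_rq()
	 * sees cq_vector == -1 and errors out every pending request
	 * instead of leaving it queued forever.
	 */
	blk_mq_start_hw_queues(ns->queue);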

Can the following change fix the issue?

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index e44326d5cf19..360758488124 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -2438,6 +2438,7 @@ void nvme_kill_queues(struct nvme_ctrl *ctrl)
        struct nvme_ns *ns;
 
        mutex_lock(&ctrl->namespaces_mutex);
+       blk_mq_start_hw_queues(ctrl->admin_q);
        list_for_each_entry(ns, &ctrl->namespaces, list) {
                /*
                 * Revalidating a dead namespace sets capacity to 0. This will


Thanks,
Ming
