On Wed, Nov 28, 2018 at 09:26:55AM -0700, Keith Busch wrote:
> ---
> diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
> index 9908082b32c4..116398b240e5 100644
> --- a/drivers/nvme/target/loop.c
> +++ b/drivers/nvme/target/loop.c
> @@ -428,10 +428,14 @@ static int nvme_loop_configure_admin_queue(struct nvme_loop_ctrl *ctrl)
>  static void nvme_loop_shutdown_ctrl(struct nvme_loop_ctrl *ctrl)
>  {
>       if (ctrl->ctrl.queue_count > 1) {
> -             nvme_stop_queues(&ctrl->ctrl);
> -             blk_mq_tagset_busy_iter(&ctrl->tag_set,
> -                                     nvme_cancel_request, &ctrl->ctrl);
> +             /*
> +              * The back-end device driver is responsible for completing
> +              * all submitted requests.
> +              */
> +             nvme_start_freeze(&ctrl->ctrl);
> +             nvme_wait_freeze(&ctrl->ctrl);
>               nvme_loop_destroy_io_queues(ctrl);
> +             nvme_unfreeze(&ctrl->ctrl);
>       }
>  
>       if (ctrl->ctrl.state == NVME_CTRL_LIVE)
> ---

The above tests fine with IO and nvme resets on a target nvme loop
backed by null_blk, but I also couldn't reproduce the original report
without the patch.
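
For the record, the reset exercise was along these lines; the fio job
parameters are illustrative rather than the exact invocation, and the
device names assume the loop controller came up as nvme1:

  # run IO against the loop namespace while repeatedly resetting
  # the controller (names/parameters are examples)
  fio --name=reset-test --filename=/dev/nvme1n1 --ioengine=libaio \
      --rw=randread --iodepth=32 --time_based --runtime=60 &
  while kill -0 $! 2>/dev/null; do
      nvme reset /dev/nvme1
      sleep 1
  done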

Ming,

Could you tell me a little more about how you set it up? I'm just
configuring null_blk with queue_mode=2 irqmode=2, as sketched below.
Anything else needed to recreate it?
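
For reference, the loop target plumbing on my end is roughly the
following (the configfs steps are written out from memory, so details
like the subsystem name are just examples):

  modprobe null_blk queue_mode=2 irqmode=2
  modprobe nvme-loop

  cd /sys/kernel/config/nvmet
  # create a subsystem backed by the null_blk device
  mkdir subsystems/testnqn
  echo 1 > subsystems/testnqn/attr_allow_any_host
  mkdir subsystems/testnqn/namespaces/1
  echo -n /dev/nullb0 > subsystems/testnqn/namespaces/1/device_path
  echo 1 > subsystems/testnqn/namespaces/1/enable

  # expose it on a loop port and connect from the host side
  mkdir ports/1
  echo loop > ports/1/addr_trtype
  ln -s /sys/kernel/config/nvmet/subsystems/testnqn \
        ports/1/subsystems/testnqn
  nvme connect -t loop -n testnqn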

Thanks,
Keith 
