Re: [PATCH v4 00/14] Copy Offload in NVMe Fabrics with P2P PCI Memory

2018-05-03 Thread Christian König
On 02.05.2018 at 17:56, Logan Gunthorpe wrote: Hi Christian, On 5/2/2018 5:51 AM, Christian König wrote: it would be rather nice to have if you could separate out the functions to detect if peer2peer is possible between two devices. This would essentially be pci_p2pdma_distance() in the exis

Re: [PATCH V3 7/8] nvme: pci: recover controller reliably

2018-05-03 Thread jianchao.wang
Hi Ming, On 05/03/2018 11:17 AM, Ming Lei wrote: > static int io_queue_depth_set(const char *val, const struct kernel_param *kp) > @@ -1199,7 +1204,7 @@ static enum blk_eh_timer_return nvme_timeout(struct > request *req, bool reserved) > if (nvme_should_reset(dev, csts)) { > n

[PATCH] brd: Mark as non-rotational

2018-05-03 Thread SeongJae Park
This commit sets QUEUE_FLAG_NONROT and clears QUEUE_FLAG_ADD_RANDOM to mark the ramdisks as non-rotational devices. Signed-off-by: SeongJae Park --- drivers/block/brd.c | 4 1 file changed, 4 insertions(+) diff --git a/drivers/block/brd.c b/drivers/block/brd.c index 66cb0f857f64..39c5b90
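
For reference, a minimal sketch of the kind of change described, assuming the blk_queue_flag_set()/blk_queue_flag_clear() helpers available around v4.17; the function name below is hypothetical and the actual patch may set the flags inline in brd_alloc().

    #include <linux/blkdev.h>

    /*
     * Sketch only: mark a brd request queue non-rotational and stop it
     * from contributing to the entropy pool, as the commit describes.
     */
    static void brd_mark_nonrot(struct request_queue *q)
    {
            blk_queue_flag_set(QUEUE_FLAG_NONROT, q);       /* no seek penalty */
            blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, q); /* not an entropy source */
    }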

Re: [PATCH V3 7/8] nvme: pci: recover controller reliably

2018-05-03 Thread Ming Lei
On Thu, May 03, 2018 at 05:14:30PM +0800, jianchao.wang wrote: > Hi ming > > On 05/03/2018 11:17 AM, Ming Lei wrote: > > static int io_queue_depth_set(const char *val, const struct kernel_param > > *kp) > > @@ -1199,7 +1204,7 @@ static enum blk_eh_timer_return nvme_timeout(struct > > request *r

[PATCH v4 1/6] bcache: store disk name in struct cache and struct cached_dev

2018-05-03 Thread Coly Li
Current code uses bdevname() or bio_devname() to reference the gendisk disk name when bcache needs to display disk names in kernel messages. It was safe before the bcache device failure handling patch set was merged in, because when devices failed there was a deadlock that prevented bcache from printing error mes

[PATCH v4 0/6] bcache device failure handling fixes for 4.17-rc4

2018-05-03 Thread Coly Li
Hi Jens, I received bug reports from partners for the bcache cache device failure handling patch set (which was just merged into 4.17-rc1). Fortunately we are still in the 4.17 merge window, so I suggest these fixes go into the 4.17 merge window too. The patches are well commented IMHO and pass my

[PATCH v4 2/6] bcache: set CACHE_SET_IO_DISABLE in bch_cached_dev_error()

2018-05-03 Thread Coly Li
Commit c7b7bd07404c5 ("bcache: add io_disable to struct cached_dev") tries to stop the bcache device by calling bcache_device_stop() when too many I/O errors happen on the backing device. But if there is internal I/O happening on the cache device (writeback scan, garbage collection, etc.), a regular I/O reque

[PATCH v4 3/6] bcache: count backing device I/O error for writeback I/O

2018-05-03 Thread Coly Li
Commit c7b7bd07404c5 ("bcache: add io_disable to struct cached_dev") counts backing device I/O requests and sets dc->io_disable to true if the error counter exceeds dc->io_error_limit. But it only counts I/O errors for regular I/O requests and neglects errors of writeback I/Os when the backing device is offlin

[PATCH v4 4/6] bcache: add wait_for_kthread_stop() in bch_allocator_thread()

2018-05-03 Thread Coly Li
When CACHE_SET_IO_DISABLE is set in the cache set flags, the bcache allocator thread routine bch_allocator_thread() may stop its while-loops and exit. Then it is possible to observe the following kernel oops message, [ 631.068366] bcache: bch_btree_insert() error -5 [ 631.069115] bcache: cached_dev_deta
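
For background, the usual race-free way for a kernel thread to park until kthread_stop() is called looks like the sketch below; this is the generic idiom, not necessarily the exact wait_for_kthread_stop() helper the patch adds.

    /* Generic pattern: do not return from the thread function until the
     * stopping side has actually called kthread_stop(), otherwise
     * kthread_stop() may act on a task that has already exited. */
    set_current_state(TASK_INTERRUPTIBLE);
    while (!kthread_should_stop()) {
            schedule();
            set_current_state(TASK_INTERRUPTIBLE);
    }
    __set_current_state(TASK_RUNNING);
    return 0;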

[PATCH v3 6/6] bcache: use pr_info() to inform duplicated CACHE_SET_IO_DISABLE set

2018-05-03 Thread Coly Li
It is possible that multiple I/O requests hit a failed cache device or backing device, therefore it is quite common that CACHE_SET_IO_DISABLE is already set when a task tries to set the bit from bch_cache_set_error(). Currently the message "CACHE_SET_IO_DISABLE already set" is printed by pr_warn(
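
A hedged sketch of what the call site amounts to, assuming the flag is set with test_and_set_bit() on the cache set's flags word as elsewhere in bcache; only the log level changes.

    /* A duplicated set is expected when many I/Os fail at once, so report
     * it as information rather than as a warning. */
    if (test_and_set_bit(CACHE_SET_IO_DISABLE, &c->flags))
            pr_info("CACHE_SET_IO_DISABLE already set");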

[PATCH v3 5/6] bcache: set dc->io_disable to true in conditional_stop_bcache_device()

2018-05-03 Thread Coly Li
Commit 7e027ca4b534b ("bcache: add stop_when_cache_set_failed option to backing device") adds the stop_when_cache_set_failed option and stops the bcache device if stop_when_cache_set_failed is auto and there is dirty data on the broken cache device. There might exist a small time gap in which the cache set is rel

[PATCH] block: add verifier for cmdline partition

2018-05-03 Thread Wang YanQing
I met a strange filesystem corruption issue recently; the reason is that there are overlapping partitions in the cmdline partition argument. This patch adds a verifier for the cmdline partition, so if there are overlapping partitions, cmdline_partition will return an error and log an error message. Signed-off-by: Wang Ya
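
The heart of such a verifier is a pairwise overlap check over the parsed partitions. A minimal, self-contained sketch of that check follows; the struct and helper names are hypothetical, not the cmdline parser's actual types.

    #include <stdbool.h>

    struct part_range {                     /* hypothetical: start + size in sectors */
            unsigned long long from;
            unsigned long long size;
    };

    /* Two ranges overlap if each starts before the other ends. */
    static bool ranges_overlap(const struct part_range *a, const struct part_range *b)
    {
            return a->from < b->from + b->size && b->from < a->from + a->size;
    }

    /* Return true only when no pair of partitions overlaps. */
    static bool cmdline_parts_valid(const struct part_range *p, int n)
    {
            for (int i = 0; i < n; i++)
                    for (int j = i + 1; j < n; j++)
                            if (ranges_overlap(&p[i], &p[j]))
                                    return false;   /* caller logs and rejects */
            return true;
    }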

Re: [PATCH 2/4] iomap: iomap_dio_rw() handles all sync writes

2018-05-03 Thread Jan Kara
On Wed 02-05-18 12:45:40, Dave Chinner wrote: > On Sat, Apr 21, 2018 at 03:03:09PM +0200, Jan Kara wrote: > > On Wed 18-04-18 14:08:26, Dave Chinner wrote: > > > From: Dave Chinner > > > > > > Currently iomap_dio_rw() only handles (data)sync write completions > > > for AIO. This means we can't op

Re: [PATCH 2/4] iomap: iomap_dio_rw() handles all sync writes

2018-05-03 Thread Jan Kara
On Wed 02-05-18 14:27:37, Robert Dorr wrote: > In the current implementation the first write to the location updates the > metadata and must issue the flush. In Windows SQL Server can avoid this > behavior. SQL Server can issue DeviceIoControl with SET_FILE_VALID_DATA > and then SetEndOfFile.

Re: [PATCH v4 0/6] bcache device failure handling fixes for 4.17-rc4

2018-05-03 Thread Jens Axboe
On 5/3/18 4:51 AM, Coly Li wrote: > Hi Jens, > > I receive bug reports from partners for the bcache cache device failure > handling patch set (which is just merged into 4.17-rc1). Fortunately we > are still in 4.17 merge window, I suggest to have these fixes to go into > 4.17 merge window too. We

Re: write call hangs in kernel space after virtio hot-remove

2018-05-03 Thread Jan Kara
On Wed 25-04-18 17:07:48, Fabiano Rosas wrote: > I'm looking into an issue where removing a virtio disk via sysfs while another > process is issuing write() calls results in the writing task going into a > livelock: > > > root@guest # cat test.sh > #!/bin/bash > > dd if=/dev/zero of=/dev/vda bs=

Re: [PATCH 1/3] fs: move documentation for thaw_super() where appropriate

2018-05-03 Thread Jan Kara
On Fri 20-04-18 16:59:02, Luis R. Rodriguez wrote: > On commit 08fdc8a0138a ("buffer.c: call thaw_super during emergency thaw") > Mateusz added thaw_super_locked() and made thaw_super() use it, but > forgot to move the documentation. > > Signed-off-by: Luis R. Rodriguez Looks good (modulo the --

Re: [PATCH 3/3] fs: fix corner case race on freeze_bdev() when sb disappears

2018-05-03 Thread Jan Kara
On Fri 20-04-18 16:59:04, Luis R. Rodriguez wrote: > freeze_bdev() will bail but leave the bd_fsfreeze_count incremented > if the get_active_super() does not find the superblock on our > super_blocks list to match. > > This issue has been present since v2.6.29 during the introduction of the > ioct

Re: [PATCH 2/3] fs: make thaw_super_locked() really just a helper

2018-05-03 Thread Jan Kara
On Fri 20-04-18 16:59:03, Luis R. Rodriguez wrote: > thaw_super_locked() was added via commit 08fdc8a0138a ("buffer.c: call > thaw_super during emergency thaw") merged on v4.17 to help with the > ability so that the caller can take charge of handling the s_umount lock, > however, it has left all* o

Re: INFO: task hung in wb_shutdown (2)

2018-05-03 Thread Jan Kara
On Wed 02-05-18 07:14:51, Tetsuo Handa wrote: > From 1b90d7f71d60e743c69cdff3ba41edd1f9f86f93 Mon Sep 17 00:00:00 2001 > From: Tetsuo Handa > Date: Wed, 2 May 2018 07:07:55 +0900 > Subject: [PATCH v2] bdi: wake up concurrent wb_shutdown() callers. > > syzbot is reporting hung tasks at wait_on_bi
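
The proposed fix amounts to the standard clear-then-wake pattern on the WB_shutting_down bit, so that concurrent wb_shutdown() callers sleeping in wait_on_bit() are woken. A hedged sketch of that pattern; the actual patch may use a combined helper.

    /* Sketch: once shutdown of this wb is complete, wake any other
     * wb_shutdown() caller that is waiting on the same bit. */
    clear_bit(WB_shutting_down, &wb->state);
    smp_mb__after_atomic();                     /* barrier required before wake_up_bit() */
    wake_up_bit(&wb->state, WB_shutting_down);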

[PATCHSET 0/3] Add throttling for discards

2018-05-03 Thread Jens Axboe
I implemented support for discards in blk-wbt, so we treat them like background writes. If we have competing foreground IO, then we may throttle discards. Otherwise they should run at full speed. -- Jens Axboe

[PATCH 1/3] block: break discard submissions into the user defined size

2018-05-03 Thread Jens Axboe
Don't build discards bigger than what the user asked for, if the user decided to limit the size by writing to 'discard_max_bytes'. Signed-off-by: Jens Axboe --- block/blk-lib.c | 7 --- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/block/blk-lib.c b/block/blk-lib.c index a676
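
The change is a chunking loop: never build a single discard larger than the configured limit. An illustrative sketch of the idea, where issue_discard_chunk() is a hypothetical helper and max_discard_sectors stands in for the queue limit derived from 'discard_max_bytes'.

    /* Sketch of the chunking idea, not the blk-lib implementation. */
    while (nr_sects) {
            sector_t chunk = min_t(sector_t, nr_sects, max_discard_sectors);

            issue_discard_chunk(bdev, sector, chunk);   /* hypothetical helper */
            sector   += chunk;
            nr_sects -= chunk;
    }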

[PATCH 3/3] blk-wbt: throttle discards like background writes

2018-05-03 Thread Jens Axboe
Throttle discards like we would any background write. Discards should be background activity, so if they are impacting foreground IO, then we will throttle them down. Signed-off-by: Jens Axboe --- block/blk-stat.h | 6 +++--- block/blk-wbt.c | 52 ++-

[PATCH 2/3] blk-wbt: account any writing command as a write

2018-05-03 Thread Jens Axboe
We currently special case WRITE and FLUSH, but we should really just include any command with the write bit set. This ensures that we account DISCARD. Signed-off-by: Jens Axboe --- block/blk-wbt.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/block/blk-wbt.c b/block/blk-wbt
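
The test the patch describes reduces to checking the write direction bit of the operation instead of enumerating specific opcodes. A hedged sketch using the op helpers from <linux/blk_types.h>:

    /* Sketch: WRITE, DISCARD, WRITE_SAME, WRITE_ZEROES, ... all have the
     * write direction bit set, so they are all accounted as writes. */
    static inline bool wbt_counts_as_write(struct bio *bio)
    {
            return op_is_write(bio_op(bio));
    }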

Re: INFO: task hung in wb_shutdown (2)

2018-05-03 Thread Jens Axboe
On 5/1/18 4:14 PM, Tetsuo Handa wrote: > From 1b90d7f71d60e743c69cdff3ba41edd1f9f86f93 Mon Sep 17 00:00:00 2001 > From: Tetsuo Handa > Date: Wed, 2 May 2018 07:07:55 +0900 > Subject: [PATCH v2] bdi: wake up concurrent wb_shutdown() callers. > > syzbot is reporting hung tasks at wait_on_bit(WB_shu

Re: [PATCH v4 0/6] bcache device failure handling fixes for 4.17-rc4

2018-05-03 Thread Coly Li
On 2018/5/3 10:34 PM, Jens Axboe wrote: > On 5/3/18 4:51 AM, Coly Li wrote: >> Hi Jens, >> >> I receive bug reports from partners for the bcache cache device failure >> handling patch set (which is just merged into 4.17-rc1). Fortunately we >> are still in 4.17 merge window, I suggest to have these

Re: [PATCH v4 0/6] bcache device failure handling fixes for 4.17-rc4

2018-05-03 Thread Jens Axboe
On 5/3/18 9:40 AM, Coly Li wrote: > On 2018/5/3 10:34 PM, Jens Axboe wrote: >> On 5/3/18 4:51 AM, Coly Li wrote: >>> Hi Jens, >>> >>> I receive bug reports from partners for the bcache cache device failure >>> handling patch set (which is just merged into 4.17-rc1). Fortunately we >>> are still in

Re: [PATCH V3 7/8] nvme: pci: recover controller reliably

2018-05-03 Thread jianchao.wang
Hi Ming, Thanks for your kind response. On 05/03/2018 06:08 PM, Ming Lei wrote: > nvme_eh_reset() can move on, if controller state is either CONNECTING or > RESETTING, nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_RESETTING) won't > be called in nvme_eh_reset(), and nvme_pre_reset_dev() will be c

Re: [PATCH v4 0/6] bcache device failure handling fixes for 4.17-rc4

2018-05-03 Thread Coly Li
On 2018/5/3 11:44 PM, Jens Axboe wrote: > On 5/3/18 9:40 AM, Coly Li wrote: >> On 2018/5/3 10:34 PM, Jens Axboe wrote: >>> On 5/3/18 4:51 AM, Coly Li wrote: Hi Jens, I receive bug reports from partners for the bcache cache device failure handling patch set (which is just merged

Re: [PATCH v4 00/14] Copy Offload in NVMe Fabrics with P2P PCI Memory

2018-05-03 Thread Logan Gunthorpe
On 03/05/18 03:05 AM, Christian König wrote: > Ok, I'm still missing the big picture here. First question is what is > the P2PDMA provider? Well there's some pretty good documentation in the patchset for this, but in short, a provider is a device that provides some kind of P2P resource (ie. BAR

Re: general protection fault in wb_workfn

2018-05-03 Thread Jan Kara
On Mon 23-04-18 19:09:51, Tetsuo Handa wrote: > On 2018/04/20 1:05, syzbot wrote: > > kasan: CONFIG_KASAN_INLINE enabled > > kasan: GPF could be caused by NULL-ptr deref or user memory access > > general protection fault: [#1] SMP KASAN > > Dumping ftrace buffer: > >    (ftrace buffer empty) >

Re: write call hangs in kernel space after virtio hot-remove

2018-05-03 Thread Jeff Layton
On Thu, 2018-05-03 at 16:42 +0200, Jan Kara wrote: > On Wed 25-04-18 17:07:48, Fabiano Rosas wrote: > > I'm looking into an issue where removing a virtio disk via sysfs while > > another > > process is issuing write() calls results in the writing task going into a > > livelock: > > > > > > root@

[PATCH] bdi: Fix oops in wb_workfn()

2018-05-03 Thread Jan Kara
Syzbot has reported that it can hit a NULL pointer dereference in wb_workfn() due to wb->bdi->dev being NULL. This indicates that wb_workfn() was called for an already unregistered bdi which should not happen as wb_shutdown() called from bdi_unregister() should make sure all pending writeback works

Re: testing io.low limit for blk-throttle

2018-05-03 Thread Paolo Valente
> On 26 Apr 2018, at 20:32, Tejun Heo wrote: > > Hello, > > On Tue, Apr 24, 2018 at 02:12:51PM +0200, Paolo Valente wrote: >> +Tejun (I guess he might be interested in the results below) > > Our experiments didn't work out too well either. At this point, it > isn't clear wh

Re: [PATCH v4 00/14] Copy Offload in NVMe Fabrics with P2P PCI Memory

2018-05-03 Thread Christian König
On 03.05.2018 at 17:59, Logan Gunthorpe wrote: On 03/05/18 03:05 AM, Christian König wrote: Second question is how do you want to handle things when devices are not behind the same root port (which is perfectly possible in the cases I deal with)? I think we need to implement a whitelist. If bot

Re: write call hangs in kernel space after virtio hot-remove

2018-05-03 Thread Matthew Wilcox
On Thu, May 03, 2018 at 12:05:14PM -0400, Jeff Layton wrote: > On Thu, 2018-05-03 at 16:42 +0200, Jan Kara wrote: > > On Wed 25-04-18 17:07:48, Fabiano Rosas wrote: > > > I'm looking into an issue where removing a virtio disk via sysfs while > > > another > > > process is issuing write() calls res

[PATCH 1/2] loop: enable compat ioctl LOOP_SET_DIRECT_IO

2018-05-03 Thread Mikulas Patocka
Enable compat ioctl LOOP_SET_DIRECT_IO. Signed-off-by: Mikulas Patocka Fixes: ab1cb278bc70 ("block: loop: introduce ioctl command of LOOP_SET_DIRECT_IO") Cc: sta...@vger.kernel.org # v4.4+ --- drivers/block/loop.c |2 ++ 1 file changed, 2 insertions(+) Index: linux-2.6/drivers/block/
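
For context, here is what a (possibly 32-bit) userspace caller does; without the compat entry this ioctl is rejected by the 32-bit compat path even though the argument is a plain integer. A minimal usage sketch assuming /dev/loop0 is already bound to a backing file:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/loop.h>

    int main(void)
    {
            int fd = open("/dev/loop0", O_RDWR);    /* assumes a bound loop device */

            if (fd < 0) {
                    perror("open");
                    return 1;
            }
            /* Ask the loop driver to use direct I/O against its backing file.
             * LOOP_SET_BLOCK_SIZE (patch 2/2) is invoked the same way. */
            if (ioctl(fd, LOOP_SET_DIRECT_IO, 1UL) < 0)
                    perror("LOOP_SET_DIRECT_IO");
            return 0;
    }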

[PATCH 2/2] loop: enable compat ioctl LOOP_SET_BLOCK_SIZE

2018-05-03 Thread Mikulas Patocka
Enable compat ioctl LOOP_SET_BLOCK_SIZE. Signed-off-by: Mikulas Patocka Fixes: 89e4fdecb51c ("loop: add ioctl for changing logical block size") Cc: sta...@vger.kernel.org # v4.14+ --- drivers/block/loop.c |1 + 1 file changed, 1 insertion(+) Index: linux-2.6/drivers/block/loop.c =

[PATCH -next] zram: fix printk formats in zram_drv.c

2018-05-03 Thread Randy Dunlap
--- drivers/block/zram/zram_drv.c |2 +- 1 file changed, 1 insertion(+), 1 deletion(-) --- linux-next-20180503.orig/drivers/block/zram/zram_drv.c +++ linux-next-20180503/drivers/block/zram/zram_drv.c @@ -671,7 +671,7 @@ static ssize_t read_block_state(struct f ts

[PATCH v2] fs: Add aio iopriority support for block_dev

2018-05-03 Thread adam . manzanares
From: Adam Manzanares This is the per-I/O equivalent of the ioprio_set system call. When IOCB_FLAG_IOPRIO is set on the iocb aio_flags field, then we set the newly added kiocb ki_ioprio field to the value in the iocb aio_reqprio field. When a bio is created for an aio request by the block dev w
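
A hedged userspace sketch of the intended usage: fill the iocb's aio_reqprio and set the new flag in aio_flags so the kernel knows the priority field is valid. IOCB_FLAG_IOPRIO and the ioprio encoding below are assumptions based on this proposal and the existing ioprio_set() convention.

    #include <linux/aio_abi.h>
    #include <stdint.h>
    #include <string.h>

    #ifndef IOCB_FLAG_IOPRIO
    #define IOCB_FLAG_IOPRIO   (1 << 1)     /* assumed value from the proposed patch */
    #endif
    /* ioprio encoding as used by ioprio_set(2): class in the top 3 bits. */
    #define IOPRIO_CLASS_SHIFT 13
    #define IOPRIO_CLASS_RT    1
    #define IOPRIO_PRIO_VALUE(class, data) (((class) << IOPRIO_CLASS_SHIFT) | (data))

    /* Prepare a pwrite iocb that carries a per-I/O priority. */
    static void fill_prio_iocb(struct iocb *cb, int fd, void *buf, size_t len, long long off)
    {
            memset(cb, 0, sizeof(*cb));
            cb->aio_lio_opcode = IOCB_CMD_PWRITE;
            cb->aio_fildes     = fd;
            cb->aio_buf        = (uint64_t)(uintptr_t)buf;
            cb->aio_nbytes     = len;
            cb->aio_offset     = off;
            cb->aio_reqprio    = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_RT, 4); /* RT class, level 4 */
            cb->aio_flags      = IOCB_FLAG_IOPRIO;  /* aio_reqprio is meaningful */
    }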

Re: [PATCH v2] fs: Add aio iopriority support for block_dev

2018-05-03 Thread Matthew Wilcox
On Thu, May 03, 2018 at 11:21:14AM -0700, adam.manzana...@wdc.com wrote: > If we want to avoid bloating struct kiocb, I suggest we turn the private > field > into a union of the private and ki_ioprio field. It seems like the users of > the private field all use it at a point where we can yank th

Re: [PATCH v2] fs: Add aio iopriority support for block_dev

2018-05-03 Thread Jeff Moyer
Hi, Adam, adam.manzana...@wdc.com writes: > From: Adam Manzanares > > This is the per-I/O equivalent of the ioprio_set system call. > > When IOCB_FLAG_IOPRIO is set on the iocb aio_flags field, then we set the > newly added kiocb ki_ioprio field to the value in the iocb aio_reqprio field. > > Wh

Re: [PATCH v4 00/14] Copy Offload in NVMe Fabrics with P2P PCI Memory

2018-05-03 Thread Logan Gunthorpe
On 03/05/18 11:29 AM, Christian König wrote: > Ok, that is the point where I'm stuck. Why do we need that in one > function call in the PCIe subsystem? > > The problem at least with GPUs is that we seriously don't have that > information here, cause the PCI subsystem might not be aware of all

Re: [PATCH v2] fs: Add aio iopriority support for block_dev

2018-05-03 Thread Adam Manzanares
On 5/3/18 11:33 AM, Matthew Wilcox wrote: > On Thu, May 03, 2018 at 11:21:14AM -0700, adam.manzana...@wdc.com wrote: >> If we want to avoid bloating struct kiocb, I suggest we turn the private >> field >> into a union of the private and ki_ioprio field. It seems like the users of >> the private

Re: [PATCH v2] fs: Add aio iopriority support for block_dev

2018-05-03 Thread Adam Manzanares
On 5/3/18 11:36 AM, Jeff Moyer wrote: > Hi, Adam, Hello Jeff, > > adam.manzana...@wdc.com writes: > >> From: Adam Manzanares >> >> This is the per-I/O equivalent of the ioprio_set system call. >> >> When IOCB_FLAG_IOPRIO is set on the iocb aio_flags field, then we set the >> newly added kioc

Re: [PATCH v2] fs: Add aio iopriority support for block_dev

2018-05-03 Thread Jens Axboe
On 5/3/18 2:15 PM, Adam Manzanares wrote: > > > On 5/3/18 11:33 AM, Matthew Wilcox wrote: >> On Thu, May 03, 2018 at 11:21:14AM -0700, adam.manzana...@wdc.com wrote: >>> If we want to avoid bloating struct kiocb, I suggest we turn the private >>> field >>> into a union of the private and ki_iopr

Re: [PATCH v2] fs: Add aio iopriority support for block_dev

2018-05-03 Thread Adam Manzanares
On 5/3/18 1:24 PM, Jens Axboe wrote: > On 5/3/18 2:15 PM, Adam Manzanares wrote: >> >> >> On 5/3/18 11:33 AM, Matthew Wilcox wrote: >>> On Thu, May 03, 2018 at 11:21:14AM -0700, adam.manzana...@wdc.com wrote: If we want to avoid bloating struct kiocb, I suggest we turn the private fiel

Re: [PATCH v2] fs: Add aio iopriority support for block_dev

2018-05-03 Thread Jens Axboe
On 5/3/18 2:58 PM, Adam Manzanares wrote: > > > On 5/3/18 1:24 PM, Jens Axboe wrote: >> On 5/3/18 2:15 PM, Adam Manzanares wrote: >>> >>> >>> On 5/3/18 11:33 AM, Matthew Wilcox wrote: On Thu, May 03, 2018 at 11:21:14AM -0700, adam.manzana...@wdc.com wrote: > If we want to avoid bloating

Re: [PATCH] bdi: Fix oops in wb_workfn()

2018-05-03 Thread Dave Chinner
On Thu, May 03, 2018 at 06:26:26PM +0200, Jan Kara wrote: > Syzbot has reported that it can hit a NULL pointer dereference in > wb_workfn() due to wb->bdi->dev being NULL. This indicates that > wb_workfn() was called for an already unregistered bdi which should not > happen as wb_shutdown() called

Re: [PATCH] bdi: Fix oops in wb_workfn()

2018-05-03 Thread Jens Axboe
On 5/3/18 3:55 PM, Dave Chinner wrote: > On Thu, May 03, 2018 at 06:26:26PM +0200, Jan Kara wrote: >> Syzbot has reported that it can hit a NULL pointer dereference in >> wb_workfn() due to wb->bdi->dev being NULL. This indicates that >> wb_workfn() was called for an already unregistered bdi which

Re: [PATCH] bdi: Fix oops in wb_workfn()

2018-05-03 Thread Tetsuo Handa
Jan Kara wrote: > Make wb_workfn() use wakeup_wb() for requeueing the work which takes all > the necessary precautions against racing with bdi unregistration. Yes, this patch will solve the NULL pointer dereference bug. But is it OK to leave a situation where list_empty(&wb->work_list) == false? Who takes ove

Re: [PATCH v2] fs: Add aio iopriority support for block_dev

2018-05-03 Thread Matthew Wilcox
On Thu, May 03, 2018 at 02:24:58PM -0600, Jens Axboe wrote: > On 5/3/18 2:15 PM, Adam Manzanares wrote: > > On 5/3/18 11:33 AM, Matthew Wilcox wrote: > >> Or we could just make ki_hint a u8 or u16 ... seems unlikely we'll need > >> 32 bits of ki_hint. (currently defined values are 1-5) > > > > I
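
To make the sizing argument concrete: if ki_hint becomes a 16-bit field, a 16-bit ki_ioprio can sit next to it in the space the 32-bit hint used to take, so struct kiocb does not grow. An illustrative layout sketch, not the real <linux/fs.h> definition:

    /* Illustrative only: the packing being discussed, not the kernel's kiocb. */
    struct kiocb_sketch {
            struct file     *ki_filp;
            loff_t           ki_pos;
            void            *private;
            int              ki_flags;
            unsigned short   ki_hint;       /* write-hint values are tiny (1-5 today) */
            unsigned short   ki_ioprio;     /* ioprio class + level fits in 16 bits */
    };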

Re: [PATCH v2] fs: Add aio iopriority support for block_dev

2018-05-03 Thread Jens Axboe
On 5/3/18 4:43 PM, Matthew Wilcox wrote: > On Thu, May 03, 2018 at 02:24:58PM -0600, Jens Axboe wrote: >> On 5/3/18 2:15 PM, Adam Manzanares wrote: >>> On 5/3/18 11:33 AM, Matthew Wilcox wrote: Or we could just make ki_hint a u8 or u16 ... seems unlikely we'll need 32 bits of ki_hint. (c

[PATCH] loop: add recursion validation to LOOP_CHANGE_FD

2018-05-03 Thread Theodore Ts'o
Refactor the validation code used in LOOP_SET_FD so it is also used in LOOP_CHANGE_FD. Otherwise it is possible to construct a set of loop devices that all refer to each other. This can lead to an infinite loop starting with "while (is_loop_device(f)) .." in loop_set_fd(). Fix this by refactor
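
The cycle being guarded against is loop0 backed by a file on loop1 which is, directly or indirectly, backed by loop0 again. The sketch below shows the rough shape of the chain walk referenced above; it is not the patch itself and the field accesses are approximate for that kernel era.

    /* Sketch only: follow the backing-file chain of the candidate file and
     * bail out if it ever leads back to the loop device being configured. */
    while (is_loop_device(f)) {
            struct loop_device *l;

            l = f->f_mapping->host->i_bdev->bd_disk->private_data;
            if (l == lo || l->lo_state == Lo_unbound)
                    return -EINVAL;         /* self-reference or unbound device */
            f = l->lo_backing_file;
    }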

Re: [PATCH V3 7/8] nvme: pci: recover controller reliably

2018-05-03 Thread Ming Lei
On Thu, May 03, 2018 at 11:46:56PM +0800, jianchao.wang wrote: > Hi Ming > > Thanks for your kindly response. > > On 05/03/2018 06:08 PM, Ming Lei wrote: > > nvme_eh_reset() can move on, if controller state is either CONNECTING or > > RESETTING, nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_RESETT

Re: [PATCH V3 7/8] nvme: pci: recover controller reliably

2018-05-03 Thread jianchao.wang
Hi Ming, On 05/04/2018 12:24 PM, Ming Lei wrote: >> Just invoke nvme_dev_disable in nvme_error_handler context and hand over the >> other things >> to nvme_reset_work as the v2 patch series seems clearer. > That way may not fix the current race: nvme_dev_disable() will > quiesce/freeze queue again

Re: [PATCH V3 7/8] nvme: pci: recover controller reliably

2018-05-03 Thread jianchao.wang
Oh sorry. On 05/04/2018 02:10 PM, jianchao.wang wrote: > nvme_error_handler should invoke nvme_reset_ctrl instead of introducing > another interface. > Then it is more convenient to ensure that there will be only one resetting > instance running. ctrl state is still in RESETTING state, nvme_res