But interestingly, with my "mptest" link failure test
(test_01_nvme_offline) I'm not actually seeing NVMe trigger a failure
that needs a multipath layer (be it NVMe multipath or DM multipath) to
fail a path and retry the IO. The pattern is that the link goes down,
and nvme waits for it to come back.
On Tue, Dec 19 2017 at 4:05pm -0500,
Mike Snitzer wrote:
> Like NVMe's native multipath support, DM multipath's NVMe bio-based
> support now allows NVMe core's error handling to requeue an NVMe blk-mq
> request's bios onto DM multipath's queued_bios list for resubmission
> once fail_path() occurs.
On Wed, Dec 20 2017 at 5:21am -0500,
kbuild test robot wrote:
> Hi Scott,
>
> I love your patch! Yet something to improve:
>
> [auto build test ERROR on dm/for-next]
> [also build test ERROR on v4.15-rc4]
> [cannot apply to next-20171220]
> [if your patch is applied to the wrong git tree, please drop us a note to help
> improve the system]
Hi Scott,
I love your patch! Yet something to improve:
[auto build test ERROR on dm/for-next]
[also build test ERROR on v4.15-rc4]
[cannot apply to next-20171220]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system]
url:
https://github.com/0day
[dm-thin] Fix bug in btree_split_beneath()
When inserting a new key/value pair into a btree we walk down the spine of
btree nodes performing the following 2 operations:
i) making space for a new entry
ii) adjusting the first key entry if the new key is lower than any in the
node.
If the _root_ nod