2018-03-29 4:47 GMT+09:00 Keith Busch :
> On Wed, Mar 28, 2018 at 10:06:46AM +0200, Christoph Hellwig wrote:
>> For PCIe devices the right policy is not round robin but to use
>> the PCIe device closest to the node. I did a prototype for that
>> long ago and the concept can work. Can you look in
The queue depth for each thread was 64.
Signed-off-by: Baegjae Sung
---
drivers/nvme/host/core.c | 49 +++
drivers/nvme/host/multipath.c | 45 ++-
drivers/nvme/host/nvme.h | 8 +++
3 files changed,
three vendors of dual-port NVMe SSDs.
Signed-off-by: Baegjae Sung
---
drivers/nvme/host/core.c | 12 +++-
drivers/nvme/host/multipath.c | 15 ++-
2 files changed, 13 insertions(+), 14 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
2018-02-27 1:24 GMT+09:00 Keith Busch :
> On Mon, Feb 26, 2018 at 05:51:23PM +0900, baeg...@gmail.com wrote:
>> From: Baegjae Sung
>>
>> If multipathing is enabled, each NVMe subsystem creates a head
>> namespace (e.g., nvme0n1) and multiple private namespaces
>>
From: Baegjae Sung
If multipathing is enabled, each NVMe subsystem creates a head
namespace (e.g., nvme0n1) and multiple private namespaces
(e.g., nvme0c0n1 and nvme0c1n1) in sysfs. When creating links for
private namespaces, links of the head namespace are used, so the
namespace creation order must