On 4/14/26 3:33 PM, John Garry wrote:
Hi Nilay,
I think so, but we will need SCSI to maintain such a count internally to
support this policy. And for NVMe we will need some abstraction to look up the
per-controller QD for an mpath_device.
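To make that concrete, here is a minimal sketch of the kind of hook I have in
mind. All of the mpath_* names below are hypothetical; only ctrl->nr_active
on the NVMe side and scsi_host_busy() on the SCSI side exist today:

/*
 * Hypothetical transport hook: each transport reports the number of
 * outstanding commands on the controller/host backing a given path.
 */
struct mpath_transport_ops {
	unsigned int (*path_depth)(struct mpath_path *path);
};

/* NVMe already keeps such a counter for the queue-depth iopolicy. */
static unsigned int nvme_mpath_path_depth(struct mpath_path *path)
{
	struct nvme_ns *ns = mpath_to_nvme_ns(path);	/* hypothetical */

	return atomic_read(&ns->ctrl->nr_active);
}

/*
 * SCSI today can only derive this via scsi_host_busy(), which walks
 * the tag set, so a cheap per-host counter would have to be
 * maintained internally to make this usable in the hot path.
 */
static unsigned int scsi_mpath_path_depth(struct mpath_path *path)
{
	struct scsi_device *sdev = mpath_to_scsi_device(path);	/* hypothetical */

	return scsi_host_busy(sdev->host);
}

The point being that NVMe can answer this in O(1) from its existing counter,
while SCSI would need to maintain an equivalent count rather than walking the
tag set on every I/O.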
This raises another question regarding the current framework. From what I can
see, all NVMe multipath I/O policies are currently supported for SCSI as well.
Going forward, if we introduce a new I/O policy for NVMe that does not make
sense for SCSI, how can we ensure that the new policy is supported only for
NVMe and not for SCSI? Conversely, we may also want to introduce a policy that
is relevant only for SCSI but not for NVMe.
With the current framework, it seems difficult to restrict a policy to a
specific transport. It appears that all policies are implicitly shared between
NVMe and SCSI.
Would it make sense to introduce some abstraction for I/O policies in the
framework so that a given policy can be implemented and exposed only for the
relevant transport (e.g., NVMe-only or SCSI-only), rather than requiring it to
be supported by both?
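For example, each policy could carry a transport mask that is checked before
the policy may be selected for a given device. A rough sketch, with all names
hypothetical:

#define MPATH_T_NVME	BIT(0)
#define MPATH_T_SCSI	BIT(1)

struct mpath_iopolicy {
	const char	*name;
	unsigned int	transports;	/* which transports support it */
	struct mpath_path *(*select)(struct mpath_dev *mdev);
};

static const struct mpath_iopolicy mpath_iopolicies[] = {
	{ .name = "numa",        .transports = MPATH_T_NVME | MPATH_T_SCSI },
	{ .name = "round-robin", .transports = MPATH_T_NVME | MPATH_T_SCSI },
	{ .name = "queue-depth", .transports = MPATH_T_NVME | MPATH_T_SCSI },
};

static int mpath_set_iopolicy(struct mpath_dev *mdev,
			      const struct mpath_iopolicy *pol)
{
	/* Reject a policy the device's transport does not support. */
	if (!(pol->transports & mdev->transport))
		return -EINVAL;

	mdev->iopolicy = pol;
	return 0;
}

An NVMe-only policy would then simply set .transports = MPATH_T_NVME and
never be offered for a SCSI device.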
I am just coming back to this now....
About the queue-depth iopolicy, why is the depth tracked per controller and
not per NS (path)? The following does not mention it:
https://lore.kernel.org/linux-nvme/[email protected]/
Is the idea that a controller may have another NS attached with traffic on it,
and we need to account for that as well?
Yes, the idea is that congestion should be evaluated at the controller level
rather than per-namespace.

In NVMe, multiple namespaces can be attached to the same controller, and all
of them share the same transport path and I/O queue resources (submission and
completion queues). As a result, any contention or congestion is fundamentally
observed at the controller, and not at an individual namespace.

If we were to track queue depth per namespace, it could give a misleading view
of the actual load on the underlying path, since multiple namespaces may be
contributing to the same set of queues. In contrast, tracking queue depth per
controller provides a more accurate representation of the total outstanding
I/O and the level of congestion on that path.

In a multipath configuration, this allows us to compare controllers directly.
For example, if one controller has a lower queue depth than another, it is
likely experiencing less contention and may offer lower latency, making it a
better candidate for forwarding I/O.
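To illustrate, the selection logic essentially picks the path whose controller
currently has the fewest outstanding commands. Here is a simplified sketch
along the lines of nvme_queue_depth_path() in drivers/nvme/host/multipath.c
(ANA-state preference and reference counting omitted):

static struct nvme_ns *queue_depth_path(struct nvme_ns_head *head)
{
	struct nvme_ns *ns, *best = NULL;
	unsigned int min_depth = UINT_MAX;

	list_for_each_entry_rcu(ns, &head->list, siblings) {
		unsigned int depth;

		if (nvme_path_is_disabled(ns))
			continue;

		/*
		 * ctrl->nr_active counts outstanding commands across
		 * every namespace attached to the controller, so load
		 * from other namespaces sharing the same queues is
		 * reflected here as well.
		 */
		depth = atomic_read(&ns->ctrl->nr_active);
		if (depth < min_depth) {
			min_depth = depth;
			best = ns;
		}
	}

	return best;
}

Because nr_active is per-controller, a path whose controller is busy serving
a sibling namespace is automatically deprioritized, which is exactly the
behavior we want here.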
Thanks,
--Nilay