On 01/10/18 04:23 PM, Sagi Grimberg wrote:
>> I did not realize the namespace would be available at this time. I guess
>> I can give this a try, but it's going to be a fairly big change from
>> what's presented here... Though, I agree it'll probably be an
>> improvement.
>
> Thanks, if it turns
+/*
+ * If allow_p2pmem is set, we will try to use P2P memory for the SGL lists for
+ * I/O commands. This requires the PCI p2p device to be compatible with the
+ * backing device for every namespace on this controller.
+ */
+static void nvmet_setup_p2pmem(struct nvmet_ctrl *ctrl, struct nvmet_r
On 2018-10-01 3:34 p.m., Sagi Grimberg wrote:
>> +
>> +list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry) {
>> +	pci_p2pdma_remove_client(&ctrl->p2p_clients, nvmet_ns_dev(ns));
>> +	nvmet_add_async_event(ctrl, NVME_AER_TYPE_NOTICE, 0, 0);
>
> Hi Logan, what is this
On 09/27/2018 09:54 AM, Logan Gunthorpe wrote:
We create a configfs attribute in each nvme-fabrics target port to
enable p2p memory use. When enabled, the port will only then use the
p2p memory if a p2p memory device can be found which is behind the
same switch hierarchy as the RDMA port and al
On 2018-09-27 11:12 AM, Keith Busch wrote:
> Reviewed-by: Keith Busch
Thanks for the reviews Keith!
Logan
_______________________________________________
Linux-nvdimm mailing list
Linux-nvdimm@lists.01.org
https://lists.01.org/mailman/listinfo/linux-nvdimm
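As a rough sketch, the per-port configfs toggle described in the changelog above might be exercised from userspace as follows. The configfs mount point is standard, but the attribute name `p2pmem` and the port number `1` are assumptions for illustration, not names confirmed by the quoted patch:

```shell
# Hedged sketch: opt an nvmet port in to p2p memory via configfs.
# Attribute name "p2pmem" and port id "1" are assumed, not confirmed.
cd /sys/kernel/config/nvmet/ports/1

# Enable p2p memory use for this port; the target will then try to
# allocate SGL buffers from a p2pmem device behind the same switch
# hierarchy as the RDMA port and the backing block devices.
echo 1 > p2pmem

# Read the setting back to verify.
cat p2pmem
```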
We create a configfs attribute in each nvme-fabrics target port to
enable p2p memory use. When enabled, the port will only then use the
p2p memory if a p2p memory device can be found which is behind the
same switch hierarchy as the RDMA port and all the block devices in
use. If the user enabled it