>http://nvmexpress.org/wp-content/uploads/NVM-Express-1.3-Ratified-TPs.zip
@Keith - my apologies.
@Christoph - thanks for the link
So my understanding of when the technical content surrounding new NVMe
Technical Proposals (TPs) could be discussed was wrong. I thought the TP
content could only be discussed
On 02/03/18 09:18 AM, Jason Gunthorpe wrote:
This allocator already seems not to be useful for the P2P target memory
on a Mellanox NIC due to the way it has a special allocation flow
(windowing) and special usage requirements.
Nor can it be useful for the doorbell memory in the NIC.
No one
On Thu, Mar 01, 2018 at 11:53:16PM +, Stephen Bates wrote:
> > There's a meaningful difference between writing to an NVMe CMB vs PMR
>
> When the PMR spec becomes public we can discuss how best to integrate it into
> the P2P framework (if at all) ;-).
> We don't want to lump these all together without knowing which region you're
> allocating from, right?
In all seriousness, I do agree with you on these, Keith, in the long term. We
would consider adding property flags for the memory as it is added to the p2p
core and then the allocator could
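As a rough sketch of what such property flags might look like (the names,
structures and helper below are purely illustrative, not the actual API from
this series):

#include <stdbool.h>

/* Illustrative only: hypothetical property flags a p2p provider could
 * pass when registering a region with the p2p core. */
enum p2pmem_prop {
	P2PMEM_READ_FAST	= 1 << 0,	/* e.g. a plain CMB region */
	P2PMEM_WRITE_FAST	= 1 << 1,
	P2PMEM_PERSISTENT	= 1 << 2,	/* e.g. a future PMR region */
};

struct p2pmem_region {
	unsigned long flags;			/* P2PMEM_* properties */
	/* base, size, owning device, ... */
};

/* The allocator would then skip regions lacking the caller's
 * required properties. */
static bool p2pmem_region_usable(const struct p2pmem_region *r,
				 unsigned long required)
{
	return (r->flags & required) == required;
}

Callers that only care about offload could pass no required flags and take
whatever memory is available.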
On 01/03/18 04:49 PM, Keith Busch wrote:
On Thu, Mar 01, 2018 at 11:00:51PM +, Stephen Bates wrote:
> P2P is about offloading the memory and PCI subsystem of the host CPU
> and this is achieved no matter which p2p_dev is used.
Even within a device, memory attributes for its various regions may not be
the same. There's a
On 01/03/18 04:20 PM, Jason Gunthorpe wrote:
> No, locality matters. If you have a bunch of NICs and a bunch of drives
> and the allocator chooses to put all P2P memory on a single drive your
> performance will suck horribly even if all the traffic is offloaded.
Sagi brought this up earlier in his comments about the _find_ function.
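To make the locality argument concrete, here is a toy model of what a
topology-aware _find_ could optimize for (the types and the distance metric
are hypothetical; the real code would have to walk the PCI tree):

#include <stddef.h>

/* Toy model: total_dist would come from walking the PCI topology and
 * summing switch hops to every client in the transaction. */
struct p2p_candidate {
	const char *name;
	int total_dist;
};

static const struct p2p_candidate *
p2p_pick_closest(const struct p2p_candidate *c, size_t n)
{
	const struct p2p_candidate *best = NULL;
	size_t i;

	for (i = 0; i < n; i++)
		if (!best || c[i].total_dist < best->total_dist)
			best = &c[i];
	return best;	/* NULL only if there are no candidates */
}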
>> We'd prefer to have a generic way to get p2pmem instead of restricting
>> ourselves to only using CMBs. We did work in the past where the P2P memory
>> was part of an IB adapter and not the NVMe card. So this won't work if it's
>> an NVMe only interface.
> It just seems like it
On 01/03/18 03:45 PM, Jason Gunthorpe wrote:
I can appreciate you might have some special use case for that, but it
absolutely should require special configuration and not just magically
happen.
Well, if a driver doesn't want someone doing p2p transfers with the memory,
it shouldn't publish it
On 01/03/18 11:42 AM, Jason Gunthorpe wrote:
On Thu, Mar 01, 2018 at 08:35:55PM +0200, Sagi Grimberg wrote:
This is also why I don't entirely understand why this series has a
generic allocator for p2p mem; it makes little sense to me.
Why wouldn't the nvme driver just claim the entire CMB
Wouldn't it all be simpler if the p2p_dev resolution would be private
to the namespace?
So you're saying all the namespaces in a subsystem must comply to
using p2p? Seems a little bit harsh if it's not absolutely needed. Would
be nice to export a subsystem between two ports (on two HCAs,
> I agree, I don't think this series should target anything other than
> using p2p memory located in one of the devices expected to participate
> in the p2p transaction for a first pass..
I disagree. There is definitely interest in using an NVMe CMB as a bounce buffer
and in deploying
On 01/03/18 04:03 AM, Sagi Grimberg wrote:
Can you describe what would be the plan to have it when these devices
do come along? I'd say that p2p_dev needs to become a nvmet_ns reference
and not from nvmet_ctrl. Then, when cmb capable devices come along, the
ns can prefer to use its own cmb
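As a sketch, the ownership change being suggested might look roughly like
this (the struct layout and the helper are illustrative, not the actual
nvmet code):

struct pci_dev;	/* opaque here */

struct nvmet_ns {
	struct pci_dev *p2p_dev;	/* per-namespace provider, may be NULL */
	/* ... */
};

struct nvmet_ctrl {
	struct pci_dev *p2p_dev;	/* port/controller-wide choice */
	/* ... */
};

/* A CMB-capable namespace could prefer its own memory and otherwise
 * fall back to whatever the controller resolved. */
static struct pci_dev *nvmet_ns_p2p_dev(const struct nvmet_ns *ns,
					const struct nvmet_ctrl *ctrl)
{
	return ns->p2p_dev ? ns->p2p_dev : ctrl->p2p_dev;
}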
> > Ideally, we'd want to use an NVME CMB buffer as p2p memory. This would
> > save an extra PCI transfer as the NVME card could just take the data
> > out of its own memory. However, at this time, cards with CMB buffers
> > don't seem to be available.
> Can you describe what would be the plan
We create a configfs attribute in each nvme-fabrics target port to
enable p2p memory use. When enabled, the port will then use p2p
memory only if a p2p memory device can be found which is behind the
same switch as the RDMA port and all the block devices in use. If
the user enabled it and no
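In pseudo-C, the selection rule described above reads roughly as follows
(all names are hypothetical, and same_switch() stands in for whatever
PCI-topology test the real series performs):

#include <stdbool.h>
#include <stddef.h>

struct dev { int switch_id; };	/* stand-in for a PCI device */

static bool same_switch(const struct dev *a, const struct dev *b)
{
	return a->switch_id == b->switch_id;
}

static const struct dev *
port_pick_p2p_dev(bool p2p_enabled, const struct dev *rdma_port,
		  const struct dev *const *blkdevs, size_t nblk,
		  const struct dev *const *providers, size_t nprov)
{
	size_t i, j;

	if (!p2p_enabled)
		return NULL;	/* user did not opt in via configfs */

	for (i = 0; i < nprov; i++) {
		bool ok = same_switch(providers[i], rdma_port);

		/* every block device in use must share the switch too */
		for (j = 0; ok && j < nblk; j++)
			ok = same_switch(providers[i], blkdevs[j]);
		if (ok)
			return providers[i];
	}
	return NULL;	/* no suitable p2p memory device found */
}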