> On Nov 11, 2015, at 11:29 AM, Sagi Grimberg wrote:
>
>
>
>> An alternate explanation is that the provider is not setting
>> device->max_sge_rd properly. rdma_read_chunk_lcl() seems to
>> be the only thing in my copy of the kernel tree that relies on
>> that value.
>>
>> I’ve reproduced the
On Wed, Nov 11, 2015 at 11:25:06AM +0200, Sagi Grimberg wrote:
> For RDMA READs, an HCA will perform the memory scatters on the RX
> path, when receiving the read responses containing the data. This means
> that the HCA needs to perform a lookup of the relevant scatter entries
> upon each read
On Wed, Nov 11, 2015 at 12:02:25AM -0800, Christoph Hellwig wrote:
> > No need for the else, !rdma_cap_needs_rdma_read_mr means pd->local_dma_lkey
> > is okay
> > to use.
>
> What would happen if someone tried to use NFS on usnic without this?
Honestly, I have no idea how usnic fits into the ker
An alternate explanation is that the provider is not setting
device->max_sge_rd properly. rdma_read_chunk_lcl() seems to
be the only thing in my copy of the kernel tree that relies on
that value.
I’ve reproduced the local length errors with CX-2 and CX-3
Pro, which both set that field to 32.
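As a minimal sketch (the build_read_wr() helper and its arguments are
invented for illustration), respecting max_sge_rd means never putting
more scatter entries into a single Read WR than the device reports; a
provider that over-reports the value can complete the Read responses
with exactly the local length errors described above:

#include <rdma/ib_verbs.h>

/* Sketch only: cap the scatter list of each RDMA Read WR at the
 * device's reported max_sge_rd; the caller posts further Read WRs
 * for whatever does not fit.
 */
static int build_read_wr(struct ib_rdma_wr *wr, struct ib_sge *sgl,
                         int nents, u32 max_sge_rd)
{
        int sge_count = min_t(int, nents, max_sge_rd);

        wr->wr.opcode  = IB_WR_RDMA_READ;
        wr->wr.sg_list = sgl;
        wr->wr.num_sge = sge_count;     /* never exceed max_sge_rd */
        return sge_count;
}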
Hi Sagi-
> On Nov 11, 2015, at 4:28 AM, Sagi Grimberg wrote:
>
>> I’d like to see our NFS server use the local DMA lkey where it
>> makes sense, to avoid the cost of registering and invalidating
>> memory.
>>
>> I have to agree with Tom that once the device’s s/g limit is
>> exceeded, the serv
I’d like to see our NFS server use the local DMA lkey where it
makes sense, to avoid the cost of registering and invalidating
memory.
I have to agree with Tom that once the device’s s/g limit is
exceeded, the server has to post an RDMA Read WR every few
pages, which appears to get expensive for
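As a minimal sketch (fill_read_sge() and its arguments are hypothetical),
using the local DMA lkey for the Read sink simply means filling each SGE
with pd->local_dma_lkey instead of registering and later invalidating an
FRMR:

#include <rdma/ib_verbs.h>

/* Sketch only: build the local (sink) SGE for an RDMA Read with the
 * PD's local DMA lkey; dma_addr/len are assumed to come from
 * ib_dma_map_page() on the receive pages.
 */
static void fill_read_sge(struct ib_pd *pd, struct ib_sge *sge,
                          u64 dma_addr, u32 len)
{
        sge->addr   = dma_addr;
        sge->length = len;
        sge->lkey   = pd->local_dma_lkey;  /* no MR registration needed */
}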
Jason,
It is always acceptable to use a lkey MR instead of the local dma
lkey, but ULPs should prefer to use the local dma lkey if possible,
for performance reasons.
I don't necessarily agree with this statement (at least with the second
part of it); the world is not always perfect.
For RDMA
On Tue, Nov 10, 2015 at 04:55:51PM -0700, Jason Gunthorpe wrote:
> IMHO, the break point for when it makes sense to switch from a RDMA
> READ chain to a MR is going to be a RDMA core tunable. The performance
> curve won't have much to do with the ULP.
core/device, a lot of it depends on when we'd
On Tue, Nov 10, 2015 at 11:25:46AM -0700, Jason Gunthorpe wrote:
> I like this, my only comment is we should have a rdma_cap for this
> behavior, rdma_cap_needs_rdma_read_mr(pd) or something?
Yes, that's better than checking the protocol.
> > + if (!(dev->device_cap_flags &
> > IB_DEVIC
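rdma_cap_needs_rdma_read_mr() is only a proposal in this thread; a sketch
of what such a helper could look like (the exact signature was still under
discussion, and the name does not exist in the kernel) is essentially a
wrapper around the transport check it would replace:

#include <rdma/ib_verbs.h>

/* Hypothetical helper, not in the kernel: iWARP needs a registered MR
 * with remote write rights for the sink of an RDMA Read, so those
 * transports cannot rely on the local DMA lkey.
 */
static inline bool rdma_cap_needs_rdma_read_mr(struct ib_device *device,
                                               u8 port_num)
{
        return rdma_protocol_iwarp(device, port_num);
}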
On Tue, Nov 10, 2015 at 06:01:01PM -0500, Chuck Lever wrote:
> The current mechanism of statically choosing either FRMR or
> local DMA depending on the device is probably not adequate.
> Maybe we could post all the Read WRs via a single chain? Or
> stick with FRMR when the amount of data to read is
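Posting all the Read WRs via a single chain, as suggested above, would
look roughly like the following sketch (post_read_chain() and the wrs[]
array are hypothetical; error handling is omitted):

#include <linux/errno.h>
#include <rdma/ib_verbs.h>

/* Sketch only: link the Read WRs through wr.next and hand the whole
 * chain to ib_post_send() in one call, so a single post covers the
 * entire chunk list.
 */
static int post_read_chain(struct ib_qp *qp, struct ib_rdma_wr *wrs,
                           int count)
{
        const struct ib_send_wr *bad_wr;
        int i;

        if (count < 1)
                return -EINVAL;

        for (i = 0; i < count - 1; i++)
                wrs[i].wr.next = &wrs[i + 1].wr;
        wrs[count - 1].wr.next = NULL;

        return ib_post_send(qp, &wrs[0].wr, &bad_wr);
}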
> On Nov 10, 2015, at 4:30 PM, Tom Talpey wrote:
>
> On 11/10/2015 4:17 PM, Jason Gunthorpe wrote:
>> On Tue, Nov 10, 2015 at 04:00:43PM -0500, Tom Talpey wrote:
>>
>>> Hmm, agreed, but it must still be acceptable to perform a registration
>>> instead of using the local_dma_lkey. As mentioned e
On 11/10/2015 4:17 PM, Jason Gunthorpe wrote:
On Tue, Nov 10, 2015 at 04:00:43PM -0500, Tom Talpey wrote:
Hmm, agreed, but it must still be acceptable to perform a registration
instead of using the local_dma_lkey. As mentioned earlier, there are
scatter limits and other implications for all-phy
On Tue, Nov 10, 2015 at 04:00:43PM -0500, Tom Talpey wrote:
> Hmm, agreed, but it must still be acceptable to perform a registration
> instead of using the local_dma_lkey. As mentioned earlier, there are
> scatter limits and other implications for all-physical addressing that
> an upper layer may
On 11/10/2015 1:25 PM, Jason Gunthorpe wrote:
On Tue, Nov 10, 2015 at 04:04:32AM -0800, Christoph Hellwig wrote:
On Tue, Nov 10, 2015 at 01:46:40PM +0200, Sagi Grimberg wrote:
On 10/11/2015 13:41, Christoph Hellwig wrote:
Oh, and while we're at it. Can someone explain why we're even
using r
On Tue, Nov 10, 2015 at 04:04:32AM -0800, Christoph Hellwig wrote:
> On Tue, Nov 10, 2015 at 01:46:40PM +0200, Sagi Grimberg wrote:
> >
> >
> > On 10/11/2015 13:41, Christoph Hellwig wrote:
> > >Oh, and while we're at it. Can someone explain why we're even
> > >using rdma_read_chunk_frmr for IB?
> On Nov 10, 2015, at 6:38 AM, Christoph Hellwig wrote:
>
> On Tue, Nov 10, 2015 at 12:44:14PM +0200, Sagi Grimberg wrote:
>> --- a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
>> +++ b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
>> @@ -238,7 +238,7 @@ int rdma_read_chunk_frmr(struct svcxprt_rdma *xprt,
On 11/10/2015 5:46 AM, Sagi Grimberg wrote:
On 10/11/2015 13:41, Christoph Hellwig wrote:
Oh, and while we're at it. Can someone explain why we're even
using rdma_read_chunk_frmr for IB? It seems to work around the
fact that iWARP only allows a single RDMA READ SGE, but it's used
whenever the
On 11/10/2015 9:34 AM, Chuck Lever wrote:
2. Is there a reason IB is not using the allphys MR for RDMA Read?
That would be much more efficient. On the NFS server that MR isn't
exposed, thus it is safe to use.
True, but only if the layout of the buffer's physical pages does not
exceed the local RD
> On Nov 10, 2015, at 7:04 AM, Christoph Hellwig wrote:
>
>> On Tue, Nov 10, 2015 at 01:46:40PM +0200, Sagi Grimberg wrote:
>>
>>
>>> On 10/11/2015 13:41, Christoph Hellwig wrote:
>>> Oh, and while we're at it. Can someone explain why we're even
>>> using rdma_read_chunk_frmr for IB? It seem
On Tue, Nov 10, 2015 at 01:46:40PM +0200, Sagi Grimberg wrote:
>
>
> On 10/11/2015 13:41, Christoph Hellwig wrote:
> >Oh, and while we're at it. Can someone explain why we're even
> >using rdma_read_chunk_frmr for IB? It seems to work around the
> >fact that iWARP only allows a single RDMA READ S
On 10/11/2015 13:41, Christoph Hellwig wrote:
Oh, and while we're at it. Can someone explain why we're even
using rdma_read_chunk_frmr for IB? It seems to work around the
fact that iWARP only allows a single RDMA READ SGE, but it's used
whenever the device has IB_DEVICE_MEM_MGT_EXTENSIONS, whic
On 10/11/2015 13:38, Christoph Hellwig wrote:
On Tue, Nov 10, 2015 at 12:44:14PM +0200, Sagi Grimberg wrote:
--- a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
@@ -238,7 +238,7 @@ int rdma_read_chunk_frmr(struct svcxprt_rdma *xprt,
read = min_t(
Oh, and while we're at it. Can someone explain why we're even
using rdma_read_chunk_frmr for IB? It seems to work around the
fact that iWARP only allows a single RDMA READ SGE, but it's used
whenever the device has IB_DEVICE_MEM_MGT_EXTENSIONS, which seems
wrong.
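For reference, the selection logic being questioned amounts to the
following paraphrased sketch (not a verbatim copy of svc_rdma_transport.c;
the select_reader() wrapper is invented, and the field names follow the
patch quoted earlier in the thread):

/* Paraphrased sketch of the reader selection: the FRMR-based reader is
 * chosen whenever the device advertises memory-management extensions,
 * even on IB where the plain lkey reader would do.
 */
static void select_reader(struct svcxprt_rdma *xprt, struct ib_device *dev)
{
        if (dev->device_cap_flags & IB_DEVICE_MEM_MGT_EXTENSIONS)
                xprt->sc_reader = rdma_read_chunk_frmr;
        else
                xprt->sc_reader = rdma_read_chunk_lcl;
}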
On Tue, Nov 10, 2015 at 12:44:14PM +0200, Sagi Grimberg wrote:
> --- a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
> +++ b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
> @@ -238,7 +238,7 @@ int rdma_read_chunk_frmr(struct svcxprt_rdma *xprt,
> read = min_t(int, nents << PAGE_SHIFT, rs_length);
>
>
Instead of hard-coding remote access (which is not a security
issue in IB).
Signed-off-by: Sagi Grimberg
---
net/sunrpc/xprtrdma/svc_rdma_recvfrom.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c