On 9/20/2015 2:04 AM, Santosh Shilimkar wrote:
This series addresses RDS connection bottlenecks on massive workloads and
improves RDMA performance by almost 3X. RDS TCP also gets a small gain of
about 12%.
RDS is being used in massive systems with high scalability where several
hundred thousand …
Hi Santosh,
Nice to see this consolidation happening. I too don't have access to
iWARP hardware for RDS testing, but I will use this series, convert our
WIP IB fastreg code, and see how it goes.
I'm very pleased to hear about this WIP. Please feel free to share
anything you have (code and questions
Since commit 96249d70dd70 ("IB/core:
Guarantee that a local_dma_lkey is available"), the PD now
has a local_dma_lkey member which completely replaces
ib_get_dma_mr; use it instead.
In FRWR memreg mode, we assumed that the device local_dma_lkey
is available.
Signed-off-by: Sagi Grimberg
Cc: linux-
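A minimal sketch of what this conversion looks like on the ULP side
(function and variable names here are hypothetical, not taken from the
patch):

#include <rdma/ib_verbs.h>

/*
 * Fill an SGE for memory already DMA-mapped against pd->device.
 * The core now guarantees pd->local_dma_lkey on every PD, so no
 * per-PD ib_get_dma_mr(pd, IB_ACCESS_LOCAL_WRITE) is needed.
 */
static void fill_dma_sge(struct ib_sge *sge, struct ib_pd *pd,
			 u64 dma_addr, u32 len)
{
	sge->addr   = dma_addr;
	sge->length = len;
	sge->lkey   = pd->local_dma_lkey;
}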
Connect-IB has some known issues with memory registration
using the local_dma_lkey (SEND, RDMA, and RECV seem to work OK).
Thus, don't expose support for it (and remove the
device->local_dma_lkey setting).
Since commit 96249d70dd70 ("IB/core: Guarantee that a local_dma_lkey
is available") addressed that by …
The Connect-IB device has a specific issue with memory registration using
the reserved lkey (the device global_dma_lkey). This caused user-space memory
registration, which usually uses cached pre-registered memory keys, to fail
due to a device access error during registration. Kernel-space memory
registration …
This module parameter forces memory registration even for
a contiguous memory region. It is true by default, as sending
an all-physical rkey with remote permissions might be insecure.
Signed-off-by: Sagi Grimberg
---
drivers/infiniband/ulp/iser/iscsi_iser.c | 5 +
drivers/infiniband/ulp/ise
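For reference, a parameter with those semantics would be declared roughly
as follows (a sketch; the variable name and description string are
assumptions, not copied from the patch):

#include <linux/module.h>

static bool always_register = true;
module_param(always_register, bool, S_IRUGO);
MODULE_PARM_DESC(always_register,
		 "Always register memory, even for contiguous regions (default: true)");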
Since the mlx5 driver cannot rely on registration using the
reserved lkey (global_dma_lkey), it used to allocate a private
physical-address lkey for each allocated PD.
Commit 96249d70dd70 ("IB/core: Guarantee that a local_dma_lkey is
available") now does this in the core layer, so we can go ahead and
use …
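The core-layer guarantee, sketched in simplified form (field and flag
names approximate the 4.3-era code; treat this as an illustration of the
idea, not the exact commit):

/*
 * Inside ib_alloc_pd(): if the HCA advertises a device-wide lkey,
 * use it; otherwise register one fallback DMA MR per PD and use
 * its lkey, so every ULP can rely on pd->local_dma_lkey.
 */
if (device_cap_flags & IB_DEVICE_LOCAL_DMA_LKEY) {
	pd->local_dma_lkey = device->local_dma_lkey;
} else {
	struct ib_mr *mr = ib_get_dma_mr(pd, IB_ACCESS_LOCAL_WRITE);

	if (IS_ERR(mr)) {
		ib_dealloc_pd(pd);
		return ERR_CAST(mr);
	}
	pd->local_mr	   = mr;
	pd->local_dma_lkey = mr->lkey;
}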
On 9/20/2015 12:52 PM, Sagi Grimberg wrote:
Since commit 96249d70dd70 ("IB/core:
Guarantee that a local_dma_lkey is available"), the PD now
has a local_dma_lkey member which completely replaces
ib_get_dma_mr; use it instead.
In FRWR memreg mode, we assumed that the device local_dma_lkey
is available …
It is possible that in a given poll_cq
call you end up getting only 1 completion; the other completion is
delayed for some reason.
If a CQE is allowed to be delayed, how does polling
again guarantee that the consumer can retrieve it?
What happens if a signal occurs, there is only one CQE,
but …
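For reference, the drain/re-arm pattern that closes this race with the
standard verbs API (handle_wc() is a hypothetical consumer callback):

static void drain_and_rearm(struct ib_cq *cq)
{
	struct ib_wc wc;

	do {
		/* Pull every CQE that is currently visible. */
		while (ib_poll_cq(cq, 1, &wc) > 0)
			handle_wc(&wc);
		/*
		 * IB_CQ_REPORT_MISSED_EVENTS makes the re-arm return > 0
		 * if a CQE may have slipped in after the last poll, in
		 * which case we must poll again instead of sleeping.
		 */
	} while (ib_req_notify_cq(cq, IB_CQ_NEXT_COMP |
				      IB_CQ_REPORT_MISSED_EVENTS) > 0);
}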
On 15/09/2015 06:45, Jason Gunthorpe wrote:
> No, I'm saying the resource pool is *well defined* and *fixed* by each
> hardware.
>
> The only question is how do we expose the N resource limits, the list
> of which is totally vendor specific.
I don't see why you say the limits are vendor specific.
On 9/17/2015 11:44 PM, Chuck Lever wrote:
The rb_send_bufs and rb_recv_bufs arrays are used to implement a
pair of stacks for keeping track of free rpcrdma_req and rpcrdma_rep
structs. Replace those arrays with free lists.
To allow more than 512 RPCs in-flight at once, each of these arrays
would
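A free list in that style might look like this sketch (structure and
function names are illustrative, not the actual xprtrdma code):

struct buf_pool {
	spinlock_t	 lock;
	struct list_head free;		/* free rpcrdma_req-like entries */
};

static struct list_head *buf_pool_get(struct buf_pool *pool)
{
	struct list_head *entry = NULL;

	spin_lock(&pool->lock);
	if (!list_empty(&pool->free)) {
		entry = pool->free.next;
		list_del(entry);
	}
	spin_unlock(&pool->lock);
	return entry;	/* NULL when empty; no fixed 512-slot ceiling */
}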
On 9/17/2015 11:46 PM, Chuck Lever wrote:
To support backward direction calls, I'm going to add an
svc_rdma_get_context() call in the client RDMA transport.
Called from ->buf_alloc(), we can't sleep waiting for memory.
So add an API that can get a server op_ctxt but won't sleep.
Signed-off-by:
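A no-sleep variant usually differs only in its allocation flags; a sketch
(the cache name and helper are assumptions):

/*
 * Like the normal op_ctxt getter, but callable where sleeping is
 * forbidden: GFP_NOWAIT fails fast instead of blocking on reclaim,
 * so the caller must handle a NULL return.
 */
static struct svc_rdma_op_ctxt *svc_rdma_get_context_nosleep(void)
{
	return kmem_cache_zalloc(ctxt_cache, GFP_NOWAIT);
}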
hfi1_rc_hdrerr() stores the result of be32_to_cpu() into opcode, which
is a local variable declared as u8. Later this variable is used in a
24-bit logical right shift, which makes clang complain (when building
an allmodconfig kernel with the LLVMLinux patches):
drivers/staging/rdma/hfi1/rc.c:239
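The pattern clang flags, reduced to a sketch (bth_opcode() is a made-up
helper):

static u8 bth_opcode(__be32 bth0)
{
	/*
	 * Wrong: storing be32_to_cpu() in a u8 first truncates the value
	 * to its low byte, so a following >> 24 always yields 0 (which
	 * is what clang is warning about).  Keep the full 32-bit value,
	 * then shift: the opcode lives in the top byte of BTH[0].
	 */
	u32 word = be32_to_cpu(bth0);

	return word >> 24;
}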
On Sat, Sep 19, 2015 at 10:49:52PM +0000, Weiny, Ira wrote:
> >
> > On Fri, Sep 18, 2015 at 11:51:09AM -0400, Doug Ledford wrote:
> > > On 09/16/2015 02:22 AM, Dan Carpenter wrote:
> > > > __get_txreq() returns an ERR_PTR() but this checks for NULL so it
> > > > would oops on failure.
> > > >
> >
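The fix follows the usual ERR_PTR idiom (a sketch; the call-site details
are assumed):

tx = __get_txreq(dev, qp);	/* returns ERR_PTR() on failure, never NULL */
if (IS_ERR(tx))
	return PTR_ERR(tx);	/* a NULL check would let the bad pointer
				 * through and oops on first dereference */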