On Sun, Nov 22, 2015 at 06:46:48PM +0100, Christoph Hellwig wrote:
> While IB supports the notion of returning separate local and remote keys
> from a memory registration, the iWarp spec doesn't and neither does any
> of our in-tree HCA drivers [1] nor consumers. Consolidate the in-kernel
> API
On Mon, Nov 23, 2015 at 11:58:29AM -0700, Jason Gunthorpe wrote:
> > +#define IB_REG_LKEY(ib_reg_scope_t)0x
> > +#define IB_REG_RKEY(ib_reg_scope_t)0x0001
>
> Wrap in () just for convention?
Ok.
> Maybe
>
> unsigned int acc = ib_scope_to_access(scope);
> if
On 11/22/2015 04:28 PM, Or Gerlitz wrote:
> On Mon, Nov 16, 2015, Matan Barak wrote:
>> On Thu, Oct 15, 2015, Matan Barak wrote:
>
>>> Hi Doug,
>>> This series adds the support for RoCE v2. In order to support RoCE v2,
>>> we add gid_type
On Mon, Nov 23, 2015 at 05:17:30PM +0200, Sagi Grimberg wrote:
>
>> I send 1-9 out separately earlier :) The other two sit on top of them
>> and they are prep patches in a sense as they remove a lot of users
>> of struct ib_mr that I don't have to modify in patches 10 and 11.
>
> Still, patches
On Mon, Nov 23, 2015 at 12:41:24PM -0700, Jason Gunthorpe wrote:
> I like this too, but, I'm a little worried this makes the API more
> confusing - ideally, we'd get rid of all the IB_ACCESS stuff from
> within the kernel completely.
That's my plan - at least for MRs. The only place still using
On Tue, Nov 17, 2015 at 11:41:39AM +0200, Sagi Grimberg wrote:
>
> >On 11/16/2015 6:37 PM, Sagi Grimberg wrote:
> >>+++ b/drivers/infiniband/ulp/iser/iser_memory.c
> >>@@ -250,7 +250,7 @@ iser_reg_dma(struct iser_device *device, struct
> >>iser_data_buf *mem,
> >> struct scatterlist *sg =
On Wed, Nov 18, 2015 at 10:27:41PM +0200, Yuval Shaia wrote:
> > You need private-data exchange to negotiate the feature.
> >
> > The feature should be a per-packet csum status header.
> >
> > When sending a skb that is already fully csumed the receiver sets
> > CHECKSUM_UNNECESSARY.
> >
> >
On Sat, Nov 14, 2015 at 08:08:49AM +0100, Christoph Hellwig wrote:
> On Fri, Nov 13, 2015 at 11:25:13AM -0700, Jason Gunthorpe wrote:
> > For instance, like this, not fully draining the cq and then doing:
> >
> > > +	completed = __ib_process_cq(cq, budget);
> > > +	if (completed < budget) {
> >
Christoph,
This series removes huge chunks of code related to old memory
registration methods that we don't use anymore, and then simplifies the
current memory registration API
Let's split out patches 10,11 from this set because these patches are
logically completely different from the rest
So maybe we should have:
void ib_drain_qp(struct ib_qp *qp)
Christoph suggested that this flushing would be taken care
of by rdma_disconnect which sounds even better I think..
--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to
That won't work for iWARP. Is this code new? I didn't see any errors that
would result from this code when I tested iSER over
cxgb4 with the old iwarp support patches.
Steve,
I think I figured out why this works with iWARP.
For iWARP, rdma_disconnect() calls iw_cm_disconnect() with
On Mon, Nov 23, 2015 at 11:03:42AM +0200, Sagi Grimberg wrote:
> Christoph,
>
>> This series removes huge chunks of code related to old memory
>> registration methods that we don't use anymore, and then simplifies the
>> current memory registration API
>
> Let's split out patches 10,11 from this
On Mon, Nov 23, 2015 at 12:35:44PM +0200, Sagi Grimberg wrote:
>
>> So maybe we should have:
>> void ib_drain_qp(struct ib_qp *qp)
>
> Christoph suggested that this flushing would be taken care
> of by rdma_disconnect which sounds even better I think..
Note that will only work once we've
> -----Original Message-----
> From: Sagi Grimberg [mailto:sa...@dev.mellanox.co.il]
> Sent: Monday, November 23, 2015 4:29 AM
> To: Steve Wise; 'Christoph Hellwig'; linux-rdma@vger.kernel.org
> Cc: bart.vanass...@sandisk.com; ax...@fb.com; linux-s...@vger.kernel.org;
>
> -----Original Message-----
> From: linux-kernel-ow...@vger.kernel.org
> [mailto:linux-kernel-ow...@vger.kernel.org] On Behalf Of Sagi Grimberg
> Sent: Monday, November 23, 2015 4:36 AM
> To: Steve Wise; 'Christoph Hellwig'; linux-rdma@vger.kernel.org
> Cc: bart.vanass...@sandisk.com;
physical's ro_unmap is synchronous already. The new ro_unmap_sync
method just has to DMA unmap all MRs associated with the RPC
request.
Signed-off-by: Chuck Lever
---
net/sunrpc/xprtrdma/physical_ops.c | 13 +
1 file changed, 13 insertions(+)
diff --git
FRWR's ro_unmap is asynchronous. The new ro_unmap_sync posts
LOCAL_INV Work Requests and waits for them to complete before
returning.
Note also, DMA unmapping is now done _after_ invalidation.
Signed-off-by: Chuck Lever
---
net/sunrpc/xprtrdma/frwr_ops.c | 137
The root of the problem was that sends (especially unsignalled
FASTREG and LOCAL_INV Work Requests) were not properly flow-
controlled, which allowed a send queue overrun.
Now that the RPC/RDMA reply handler waits for invalidation to
complete, the send queue is properly flow-controlled. Thus this
FMR's ro_unmap method is already synchronous because ib_unmap_fmr()
is a synchronous verb. However, some improvements can be made here.
1. Gather all the MRs for the RPC request onto a list, and invoke
ib_unmap_fmr() once with that list. This reduces the number of
doorbells when there is
There is a window between the time the RPC reply handler wakes the
waiting RPC task and when xprt_release() invokes ops->buf_free.
During this time, memory regions containing the data payload may
still be accessed by a broken or malicious server, but the RPC
application has already been allowed
Extra resources for handling backchannel requests have to be
pre-allocated when a transport instance is created. Set a limit.
Signed-off-by: Chuck Lever
---
include/linux/sunrpc/svc_rdma.h | 5 +
net/sunrpc/xprtrdma/svc_rdma_transport.c | 6 +-
2
For 4.5, I'd like to address the send queue accounting and
invalidation/unmap ordering issues Jason brought up a couple of
months ago. Here's a first shot at that.
Also available in the "nfs-rdma-for-4.5" topic branch of this git repo:
git://git.linux-nfs.org/projects/cel/cel-2.6.git
Or for
To support the server-side of an NFSv4.1 backchannel on RDMA
connections, add a transport class for backwards direction
operation.
Signed-off-by: Chuck Lever
---
include/linux/sunrpc/xprt.h | 1
net/sunrpc/xprt.c | 1
Minor optimization: when dealing with write chunk XDR roundup, do
not post a Write WR for the zero bytes in the pad. Simply update
the write segment in the RPC-over-RDMA header to reflect the extra
pad bytes.
The Reply chunk is also a write chunk, but the server does not use
send_write_chunks()
To support the NFSv4.1 backchannel on RDMA connections, add a
mechanism for sending a backwards-direction RPC/RDMA call on a
connection established by a client.
Signed-off-by: Chuck Lever
---
include/linux/sunrpc/svc_rdma.h | 2 +
To support backward direction calls, I'm going to add an
svc_rdma_get_context() call in the client RDMA transport.
Called from ->buf_alloc(), we can't sleep waiting for memory.
So add an API that can get a server op_ctxt but won't sleep.
Signed-off-by: Chuck Lever
---
Here are patches to support server-side bi-directional RPC/RDMA
operation (to enable NFSv4.1 on RPC/RDMA transports). These still
need testing, but they are ready for initial review.
Also available in the "nfsd-rdma-for-4.5" topic branch of this git repo:
On 11/23/2015 02:18 PM, Jason Gunthorpe wrote:
On Mon, Nov 23, 2015 at 01:54:05PM -0800, Bart Van Assche wrote:
What I don't see is how SRP handles things when the
sendq fills up, ie the case where __srp_get_tx_iu() == NULL. It looks
like the driver starts to panic and generates printks. I can't
For FRWR FASTREG and LOCAL_INV, move the ib_*_wr structure off
the stack. This allows frwr_op_map and frwr_op_unmap to chain
WRs together without limit to register or invalidate a set of MRs
with a single ib_post_send().
(This will be for chaining LOCAL_INV requests).
Signed-off-by: Chuck Lever
In the current xprtrdma implementation, some memreg strategies
implement ro_unmap synchronously (the MR is knocked down before the
method returns) and some asynchronously (the MR will be knocked down
and returned to the pool in the background).
To guarantee the MR is truly invalid before the RPC
I'm about to add code in the RPC/RDMA reply handler between the
xprt_lookup_rqst() and xprt_complete_rqst() call site that needs
to execute outside of spinlock critical sections.
Add a hook to remove an rpc_rqst from the pending list once
the transport knows it's going to invoke
Rarely, senders post a Send that is larger than the client's inline
threshold. That can be due to a bug, or the client and server may
not have communicated about their inline limits. RPC-over-RDMA
currently doesn't specify any particular limit on inline size, so
peers have to guess what it is.
It
On Mon, Nov 23, 2015 at 01:54:05PM -0800, Bart Van Assche wrote:
> Not really ... Please have a look at the SRP initiator source code. What the
> SRP initiator does is to poll the send queue before sending a new
> SCSI
I see that. What I don't see is how SRP handles things when the
sendq fills
Minor optimization: Instead of counting WRs in a chain, have callers
pass in the number of WRs they've prepared.
Signed-off-by: Chuck Lever
---
include/linux/sunrpc/svc_rdma.h | 2 +-
net/sunrpc/xprtrdma/svc_rdma_recvfrom.c | 9 ++---
To support the NFSv4.1 backchannel on RDMA connections, add a
capability for receiving an RPC/RDMA reply on a connection
established by a client.
Signed-off-by: Chuck Lever
---
net/sunrpc/xprtrdma/rpc_rdma.c | 76 +++
Clean up: The access_flags field is not used outside of
rdma_read_chunk_frmr() and is always set to the same value.
Signed-off-by: Chuck Lever
---
include/linux/sunrpc/svc_rdma.h | 1 -
net/sunrpc/xprtrdma/svc_rdma_recvfrom.c | 3 +--
2 files changed, 1
On Mon, Nov 23, 2015 at 02:33:05PM -0800, Bart Van Assche wrote:
> On 11/23/2015 02:18 PM, Jason Gunthorpe wrote:
> >On Mon, Nov 23, 2015 at 01:54:05PM -0800, Bart Van Assche wrote:
> >What I don't see is how SRP handles things when the
> >sendq fills up, ie the case where __srp_get_tx_iu() ==
On 11/23/2015 01:28 PM, Jason Gunthorpe wrote:
On Mon, Nov 23, 2015 at 01:04:25PM -0800, Bart Van Assche wrote:
Considerable time ago the send queue in the SRP initiator driver was
modified from signaled to non-signaled to reduce the number of interrupts
triggered by the SRP initiator driver.
On Mon, Nov 23, 2015 at 07:53:04PM -0500, Chuck Lever wrote:
> > Wait, the REMOTE_WRITE is there to support iWARP, but it isn't
> > needed for IB or RoCE. Shouldn't this be updated to peek at those
> > new attributes to decide, instead of remaining unconditional?
>
> That's coming in another
On Mon, Nov 23, 2015 at 05:14:14PM -0500, Chuck Lever wrote:
> In the current xprtrdma implementation, some memreg strategies
> implement ro_unmap synchronously (the MR is knocked down before the
> > method returns) and some asynchronously (the MR will be knocked down
> and returned to the pool in
On Mon, Nov 23, 2015 at 07:57:42PM -0500, Tom Talpey wrote:
> On 11/23/2015 5:14 PM, Chuck Lever wrote:
> >FMR's ro_unmap method is already synchronous because ib_unmap_fmr()
> >is a synchronous verb. However, some improvements can be made here.
>
> I thought FMR support was about to be removed
> +struct svc_rdma_op_ctxt *svc_rdma_get_context_gfp(struct svcxprt_rdma *xprt,
> +						  gfp_t flags)
> +{
> +	struct svc_rdma_op_ctxt *ctxt;
> +
> +	ctxt = kmem_cache_alloc(svc_rdma_ctxt_cachep, flags);
> +	if (!ctxt)
> +		return
On Mon, Nov 23, 2015 at 06:35:28PM -0800, Caitlin Bestler wrote:
> Is it possible for an IB HCA to transmit a response on a QP and not
> in that packet or a previous packet acknowledge something that it
> has delivered to the user?
AFAIK, the rules of ack coalescing do not interact with the send
On Mon, Nov 23, 2015 at 10:52:26PM -0800, Christoph Hellwig wrote:
>
> So at lest for 4.5 we're unlikely to be able to get rid of it alone
> due to the RDS issue. We'll then need performance numbers for mlx4,
> and figure out how much we care about mthca.
mthca is unfortunately very popular in
On 11/23/2015 5:20 PM, Chuck Lever wrote:
Extra resources for handling backchannel requests have to be
pre-allocated when a transport instance is created. Set a limit.
Signed-off-by: Chuck Lever
---
include/linux/sunrpc/svc_rdma.h | 5 +
> On Nov 23, 2015, at 7:39 PM, Tom Talpey wrote:
>
> On 11/23/2015 5:20 PM, Chuck Lever wrote:
>> Extra resources for handling backchannel requests have to be
>> pre-allocated when a transport instance is created. Set a limit.
>>
>> Signed-off-by: Chuck Lever
On 11/23/2015 8:09 PM, Chuck Lever wrote:
On Nov 23, 2015, at 7:39 PM, Tom Talpey wrote:
On 11/23/2015 5:20 PM, Chuck Lever wrote:
Extra resources for handling backchannel requests have to be
pre-allocated when a transport instance is created. Set a limit.
Signed-off-by:
> On Nov 23, 2015, at 8:19 PM, Tom Talpey wrote:
>
> On 11/23/2015 8:09 PM, Chuck Lever wrote:
>>
>>> On Nov 23, 2015, at 7:39 PM, Tom Talpey wrote:
>>>
>>> On 11/23/2015 5:20 PM, Chuck Lever wrote:
Extra resources for handling backchannel requests have
On Mon, Nov 23, 2015 at 8:19 PM, Tom Talpey wrote:
> On 11/23/2015 8:09 PM, Chuck Lever wrote:
>>
>>
>>> On Nov 23, 2015, at 7:39 PM, Tom Talpey wrote:
>>>
>>> On 11/23/2015 5:20 PM, Chuck Lever wrote:
Extra resources for handling backchannel requests
On Mon, Nov 23, 2015 at 07:34:53PM -0500, Tom Talpey wrote:
> Been there, seen that. Bluescreened on it, mysteriously.
Yes, me too :(
Jason
On 11/23/2015 5:21 PM, Chuck Lever wrote:
To support the server-side of an NFSv4.1 backchannel on RDMA
connections, add a transport class for backwards direction
operation.
So, what's special here is that it re-uses an existing forward
channel's connection? If not, it would seem unnecessary to
On 11/23/2015 5:13 PM, Chuck Lever wrote:
Rarely, senders post a Send that is larger than the client's inline
threshold. That can be due to a bug, or the client and server may
not have communicated about their inline limits. RPC-over-RDMA
currently doesn't specify any particular limit on inline
On 11/23/2015 8:16 PM, Chuck Lever wrote:
On Nov 23, 2015, at 7:55 PM, Tom Talpey wrote:
On 11/23/2015 5:13 PM, Chuck Lever wrote:
Rarely, senders post a Send that is larger than the client's inline
threshold. That can be due to a bug, or the client and server may
not have
On Mon, Nov 23, 2015 at 03:30:42PM -0800, Caitlin Bestler wrote:
>The receive completion can be safely assumed to indicate transmit
>completion over a reliable connection unless your peer has gone
>completely bonkers and is replying to a command that it did not
>receive.
Perhaps
On 11/23/2015 7:00 PM, Jason Gunthorpe wrote:
On Mon, Nov 23, 2015 at 03:30:42PM -0800, Caitlin Bestler wrote:
The receive completion can be safely assumed to indicate transmit
completion over a reliable connection unless your peer has gone
completely bonkers and is replying to a
> On Nov 23, 2015, at 7:52 PM, Tom Talpey wrote:
>
> On 11/23/2015 5:21 PM, Chuck Lever wrote:
>> Clean up: The access_flags field is not used outside of
>> rdma_read_chunk_frmr() and is always set to the same value.
>>
>> Signed-off-by: Chuck Lever
>>
On 11/23/2015 5:14 PM, Chuck Lever wrote:
There is a window between the time the RPC reply handler wakes the
waiting RPC task and when xprt_release() invokes ops->buf_free.
During this time, memory regions containing the data payload may
still be accessed by a broken or malicious server, but the
On 11/22/2015 05:37 AM, Christoph Hellwig wrote:
On Tue, Nov 10, 2015 at 12:35:05PM +0200, Sagi Grimberg wrote:
Are you planning to pick this up? Note that this patch
is stable material as well.
Doug? any plans for this patch?
We should really get this in an into -stable. Bart, can you
> On Nov 23, 2015, at 7:55 PM, Tom Talpey wrote:
>
> On 11/23/2015 5:13 PM, Chuck Lever wrote:
>> Rarely, senders post a Send that is larger than the client's inline
>> threshold. That can be due to a bug, or the client and server may
>> not have communicated about their inline
On 11/23/2015 5:20 PM, Chuck Lever wrote:
To support the NFSv4.1 backchannel on RDMA connections, add a
capability for receiving an RPC/RDMA reply on a connection
established by a client.
Signed-off-by: Chuck Lever
---
net/sunrpc/xprtrdma/rpc_rdma.c | 76
On 11/23/2015 5:21 PM, Chuck Lever wrote:
Clean up: The access_flags field is not used outside of
rdma_read_chunk_frmr() and is always set to the same value.
Signed-off-by: Chuck Lever
---
include/linux/sunrpc/svc_rdma.h | 1 -
On 11/23/2015 5:14 PM, Chuck Lever wrote:
FMR's ro_unmap method is already synchronous because ib_unmap_fmr()
is a synchronous verb. However, some improvements can be made here.
I thought FMR support was about to be removed in the core.
1. Gather all the MRs for the RPC request onto a
On 11/22/2015 07:31 AM, Christoph Hellwig wrote:
On Sun, Nov 22, 2015 at 05:26:28PM +0200, Sagi Grimberg wrote:
No. register_always=Y is already broken in 4.3, but register_always=N is
now also broken in 4.4.
OK, I'm confused so please let me understand slowly :)
Your patch "ib_srp:
> On Nov 23, 2015, at 8:22 PM, Tom Talpey wrote:
>
> On 11/23/2015 8:16 PM, Chuck Lever wrote:
>>
>>> On Nov 23, 2015, at 7:55 PM, Tom Talpey wrote:
>>>
>>> On 11/23/2015 5:13 PM, Chuck Lever wrote:
Rarely, senders post a Send that is larger than the
I'll have to think about whether I agree with that as a
protocol statement.
Chunks in a reply are there to account for the data that is
handled in the chunk of a request. So it kind of comes down
to whether RDMA is allowed (or used) on the backchannel. I
still think that is fundamentally an
> On Nov 23, 2015, at 7:44 PM, Tom Talpey wrote:
>
> On 11/23/2015 5:20 PM, Chuck Lever wrote:
>> To support the NFSv4.1 backchannel on RDMA connections, add a
>> capability for receiving an RPC/RDMA reply on a connection
>> established by a client.
>>
>> Signed-off-by: Chuck
On 11/23/2015 8:36 PM, Chuck Lever wrote:
On Nov 23, 2015, at 8:19 PM, Tom Talpey wrote:
On 11/23/2015 8:09 PM, Chuck Lever wrote:
On Nov 23, 2015, at 7:39 PM, Tom Talpey wrote:
On 11/23/2015 5:20 PM, Chuck Lever wrote:
Extra resources for handling
On 11/23/2015 4:00 PM, Jason Gunthorpe wrote:
On Mon, Nov 23, 2015 at 03:30:42PM -0800, Caitlin Bestler wrote:
The receive completion can be safely assumed to indicate transmit
completion over a reliable connection unless your peer has gone
completely bonkers and is replying to a
On Mon, Nov 23, 2015 at 10:45:56PM -0800, Christoph Hellwig wrote:
> On Mon, Nov 23, 2015 at 05:14:14PM -0500, Chuck Lever wrote:
> > In the current xprtrdma implementation, some memreg strategies
> > implement ro_unmap synchronously (the MR is knocked down before the
> > method returns) and some
I send 1-9 out separately earlier :) The other two sit on top of them
and they are prep patches in a sense as they remove a lot of users
of struct ib_mr that I don't have to modify in patches 10 and 11.
Still, patches 10,11 are not really a part of this patchset.
I think they need to stand
On Mon, Nov 23, 2015 at 10:09:05AM -0500, Chuck Lever wrote:
> Out of curiosity, why are you keeping the IB_ACCESS flags?
We'll still need them for all kinds of other use cases
(ib_get_dma_mr, userspace MRs, qp_access_flags).
> It would be more efficient for providers to convert the
> scope
On 11/22/2015 11:46 AM, Christoph Hellwig wrote:
This series removes huge chunks of code related to old memory
registration methods that we don't use anymore, and then simplifies the
current memory registration API
This expects my "IB: merge struct ib_device_attr into struct ib_device"
patch to
> On Nov 22, 2015, at 12:46 PM, Christoph Hellwig wrote:
>
> Instead of the confusing IB spec values provide a flags argument that
> describes:
>
> a) the operation we perform the memory registration for, and
> b) if we want to access it for read or write purposes.
>
> This
On Fri, Nov 20, 2015 at 11:04:12AM -0800, Bart Van Assche wrote:
> Ensure that validate_ipv4_net_dev() calls rcu_read_unlock() if
> fib_lookup() fails. Detected by sparse. Compile-tested only.
>
> Fixes: "IB/cma: Validate routing of incoming requests" (commit f887f2ac87c2).
> Cc: Haggai Eran
On Sun, Nov 22, 2015 at 06:46:49PM +0100, Christoph Hellwig wrote:
> Instead of the confusing IB spec values provide a flags argument that
> describes:
>
> a) the operation we perform the memory registration for, and
> b) if we want to access it for read or write purposes.
>
> This helps to
On Sun, Nov 15, 2015 at 07:05:53PM +0100, Christoph Hellwig wrote:
> This series removes huge chunks of code related to old memory
> registration methods that we don't use anymore.
>
> This expects my "IB: merge struct ib_device_attr into struct ib_device"
> patch to be already applied.
>
> Also
On 11/23/2015 12:37 PM, Jason Gunthorpe wrote:
On Sat, Nov 14, 2015 at 08:13:44AM +0100, Christoph Hellwig wrote:
On Fri, Nov 13, 2015 at 03:06:36PM -0700, Jason Gunthorpe wrote:
Looking at that thread and then at the patch a bit more..
+void ib_process_cq_direct(struct ib_cq *cq)
[..]
+
On Mon, Nov 23, 2015 at 01:01:36PM -0700, Jason Gunthorpe wrote:
> Okay, having now read the whole thing, I think I see the flow now. I don't
> see any holes in the above, other than it is doing a bit more work
> than it needs in some edges cases because it doesn't know if the CQ is
> actually
> +	/* Use the hint from IP Stack to select GID Type */
> +	network_gid_type = ib_network_to_gid_type(addr->dev_addr.network);
> +	if (addr->dev_addr.network != RDMA_NETWORK_IB) {
> +		route->path_rec->gid_type = network_gid_type;
> +		/* TODO: get the hoplimit
On Thu, Oct 15, 2015 at 07:07:06PM +0300, Matan Barak wrote:
> This patch set adds attributes of net device and gid type to each GID
> in the GID table. Users that use verbs directly need to specify
> the GID index. Since the same GID could have different types or
> associated net devices, users
On Thu, Oct 15, 2015 at 07:07:10PM +0300, Matan Barak wrote:
> Users would like to control the behaviour of rdma_cm.
> For example, old applications which don't set the
> required RoCE gid type could be executed on RoCE V2
> network types. In order to support this configuration,
> we implement a
On Thu, Oct 15, 2015 at 07:07:12PM +0300, Matan Barak wrote:
> diff --git a/include/rdma/ib_sa.h b/include/rdma/ib_sa.h
> index 0a40ed2..5bea0e8 100644
> +++ b/include/rdma/ib_sa.h
> @@ -206,6 +206,9 @@ struct ib_sa_mcmember_rec {
> 	u8		scope;
> 	u8		join_state;
>
On Mon, Nov 23, 2015 at 01:04:25PM -0800, Bart Van Assche wrote:
> Considerable time ago the send queue in the SRP initiator driver was
> modified from signaled to non-signaled to reduce the number of interrupts
> triggered by the SRP initiator driver. The SRP initiator driver polls the
> send