RE: [[PATCH v1] 12/37] [CIFS] SMBD: Handle send completion from CQ

2017-08-14 Thread Long Li


> -----Original Message-----
> From: Christoph Hellwig [mailto:h...@infradead.org]
> Sent: Sunday, August 13, 2017 3:20 AM
> To: Long Li 
> Cc: Steve French ; linux-c...@vger.kernel.org; samba-
> techni...@lists.samba.org; linux-kernel@vger.kernel.org; Long Li
> 
> Subject: Re: [[PATCH v1] 12/37] [CIFS] SMBD: Handle send completion from
> CQ
> 
> You seem to be doing memory allocations and frees for every packet on the
> write.  At least for other RDMA protocols that would have been a major
> performance issue.

The size of the SGE array passed to IB is not known in advance, so I don't know
how much to pre-allocate. But the size seems to be small when passed down from
CIFS. I will look at pre-allocating the buffer if this turns out to be an issue.

> 
> Do you have any performance numbers and/or profiles of the code?

I will look into profiling.


Re: [[PATCH v1] 12/37] [CIFS] SMBD: Handle send completion from CQ

2017-08-13 Thread Christoph Hellwig
You seem to be doing memory allocations and frees for every packet on
the write.  At least for other RDMA protocols that would have been
a major performance issue.  

Do you have any performance numbers and/or profiles of the code?


[[PATCH v1] 12/37] [CIFS] SMBD: Handle send completion from CQ

2017-08-02 Thread Long Li
From: Long Li 

In preparation for handling sending SMBD requests, add code to handle the send
completion. In send completion, the SMBD transport is responsible for
freeing the resources used in the send.

Signed-off-by: Long Li 
---
 fs/cifs/cifsrdma.c | 25 +
 1 file changed, 25 insertions(+)

diff --git a/fs/cifs/cifsrdma.c b/fs/cifs/cifsrdma.c
index 20237b7..ecbc832 100644
--- a/fs/cifs/cifsrdma.c
+++ b/fs/cifs/cifsrdma.c
@@ -197,6 +197,31 @@ cifs_rdma_qp_async_error_upcall(struct ib_event *event, void *context)
 	}
 }
 
+/* Called in softirq, when an RDMA send is done */
+static void send_done(struct ib_cq *cq, struct ib_wc *wc)
+{
+   int i;
+   struct cifs_rdma_request *request =
+   container_of(wc->wr_cqe, struct cifs_rdma_request, cqe);
+
+   log_rdma_send("cifs_rdma_request %p completed wc->status=%d\n",
+   request, wc->status);
+
+   if (wc->status != IB_WC_SUCCESS || wc->opcode != IB_WC_SEND) {
+   log_rdma_send("wc->status=%d wc->opcode=%d\n",
+   wc->status, wc->opcode);
+   }
+
+   for (i = 0; i < request->num_sge; i++)
+   ib_dma_unmap_single(request->info->id->device,
+   request->sge[i].addr,
+   request->sge[i].length,
+   DMA_TO_DEVICE);
+
+   kfree(request->sge);
+   mempool_free(request, request->info->request_mempool);
+}
+
 /* Called from softirq, when recv is done */
 static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
 {
-- 
2.7.4