Re: linux-next: manual merge of the block tree with the rdma tree
On 6/3/2020 2:32 AM, Jason Gunthorpe wrote:
> On Wed, Jun 03, 2020 at 01:40:51AM +0300, Max Gurtovoy wrote:
> > On 6/3/2020 12:37 AM, Jens Axboe wrote:
> > > On 6/2/20 1:09 PM, Jason Gunthorpe wrote:
> > > > On Tue, Jun 02, 2020 at 01:02:55PM -0600, Jens Axboe wrote:
> > > > > On 6/2/20 1:01 PM, Jason Gunthorpe wrote:
> > > > > > On Tue, Jun 02, 2020 at 11:37:26AM +0300, Max Gurtovoy wrote:
> > > > > > > On 6/2/2020 5:56 AM, Stephen Rothwell wrote:
> > > > > > > > Hi all,
> > > > > > >
> > > > > > > Hi,
> > > > > > >
> > > > > > > This looks good to me.
> > > > > > >
> > > > > > > Can you share a pointer to the tree so we'll test it in our labs ?
> > > > > > >
> > > > > > > need to re-test:
> > > > > > >
> > > > > > > 1. srq per core
> > > > > > >
> > > > > > > 2. srq per core + T10-PI
> > > > > > >
> > > > > > > And both will run with shared CQ.
> > > > > >
> > > > > > Max, this is too much conflict to send to Linus between your own
> > > > > > patches. I am going to drop the nvme part of this from RDMA.
> > > > > >
> > > > > > Normally I don't like applying partial series, but due to this tree
> > > > > > split, you can send the rebased nvme part through the nvme/block tree
> > > > > > at rc1 in two weeks..
> >
> > Yes, I'll send it in 2 weeks.
> >
> > Actually I hoped the iSER patches for CQ pool will be sent in this series
> > but eventually they were not.
> >
> > This way we could have taken only the iser part and the new API.
> >
> > I saw the pulled version too late since I wasn't CCed to it and it was
> > already merged before I had a chance to warn you about possible conflict.
> >
> > I think in general we should try to add new RDMA APIs first with iSER/SRP
> > and avoid conflicting trees.
>
> If you are careful we can construct a shared branch and if Jens/etc is
> willing he can pull the RDMA base code after RDMA merges the branch and
> then apply the nvme parts. This is how things work with netdev
>
> It is tricky and you have to plan for it during your submission step,
> but we should be able to manage in most cases if this comes up more
> often.

I think we can construct a branch like this for dedicated series and
delete it after the acceptance.

In case of new APIs for RDMA that involve touching NVMe stuff - we'll
create this branch and ask Jens to pull it as you suggested.

And as a general note, I suggest we won't merge NVMe/RDMA stuff to
rdma-next without cooperation with Jens.

-Max.

> Jason
Re: linux-next: manual merge of the block tree with the rdma tree
On Wed, Jun 03, 2020 at 01:40:51AM +0300, Max Gurtovoy wrote:
> 
> On 6/3/2020 12:37 AM, Jens Axboe wrote:
> > On 6/2/20 1:09 PM, Jason Gunthorpe wrote:
> > > On Tue, Jun 02, 2020 at 01:02:55PM -0600, Jens Axboe wrote:
> > > > On 6/2/20 1:01 PM, Jason Gunthorpe wrote:
> > > > > On Tue, Jun 02, 2020 at 11:37:26AM +0300, Max Gurtovoy wrote:
> > > > > > On 6/2/2020 5:56 AM, Stephen Rothwell wrote:
> > > > > > > Hi all,
> > > > > > 
> > > > > > Hi,
> > > > > > 
> > > > > > This looks good to me.
> > > > > > 
> > > > > > Can you share a pointer to the tree so we'll test it in our labs ?
> > > > > > 
> > > > > > need to re-test:
> > > > > > 
> > > > > > 1. srq per core
> > > > > > 
> > > > > > 2. srq per core + T10-PI
> > > > > > 
> > > > > > And both will run with shared CQ.
> > > > > 
> > > > > Max, this is too much conflict to send to Linus between your own
> > > > > patches. I am going to drop the nvme part of this from RDMA.
> > > > > 
> > > > > Normally I don't like applying partial series, but due to this tree
> > > > > split, you can send the rebased nvme part through the nvme/block tree
> > > > > at rc1 in two weeks..
> 
> Yes, I'll send it in 2 weeks.
> 
> Actually I hoped the iSER patches for CQ pool will be sent in this series
> but eventually they were not.
> 
> This way we could have taken only the iser part and the new API.
> 
> I saw the pulled version too late since I wasn't CCed to it and it was
> already merged before I had a chance to warn you about possible conflict.
> 
> I think in general we should try to add new RDMA APIs first with iSER/SRP
> and avoid conflicting trees.

If you are careful we can construct a shared branch and if Jens/etc is
willing he can pull the RDMA base code after RDMA merges the branch and
then apply the nvme parts. This is how things work with netdev

It is tricky and you have to plan for it during your submission step,
but we should be able to manage in most cases if this comes up more
often.

Jason
Re: linux-next: manual merge of the block tree with the rdma tree
On 6/3/2020 12:37 AM, Jens Axboe wrote:
> On 6/2/20 1:09 PM, Jason Gunthorpe wrote:
> > On Tue, Jun 02, 2020 at 01:02:55PM -0600, Jens Axboe wrote:
> > > On 6/2/20 1:01 PM, Jason Gunthorpe wrote:
> > > > On Tue, Jun 02, 2020 at 11:37:26AM +0300, Max Gurtovoy wrote:
> > > > > On 6/2/2020 5:56 AM, Stephen Rothwell wrote:
> > > > > > Hi all,
> > > > >
> > > > > Hi,
> > > > >
> > > > > This looks good to me.
> > > > >
> > > > > Can you share a pointer to the tree so we'll test it in our labs ?
> > > > >
> > > > > need to re-test:
> > > > >
> > > > > 1. srq per core
> > > > >
> > > > > 2. srq per core + T10-PI
> > > > >
> > > > > And both will run with shared CQ.
> > > >
> > > > Max, this is too much conflict to send to Linus between your own
> > > > patches. I am going to drop the nvme part of this from RDMA.
> > > >
> > > > Normally I don't like applying partial series, but due to this tree
> > > > split, you can send the rebased nvme part through the nvme/block tree
> > > > at rc1 in two weeks..

Yes, I'll send it in 2 weeks.

Actually I hoped the iSER patches for CQ pool will be sent in this series
but eventually they were not.

This way we could have taken only the iser part and the new API.

I saw the pulled version too late since I wasn't CCed to it and it was
already merged before I had a chance to warn you about possible conflict.

I think in general we should try to add new RDMA APIs first with iSER/SRP
and avoid conflicting trees.

> > > Was going to comment that this is probably how it should have been
> > > done to begin with. If we have multiple conflicts like that between
> > > two trees, someone is doing something wrong...
> >
> > Well, on the other hand having people add APIs in one tree and then
> > (promised) consumers in another tree later on has proven problematic
> > in the past. It is best to try to avoid that, but in this case I don't
> > think Max will have any delay to get the API consumer into nvme in two
> > weeks.
>
> Having conflicting trees is a problem. If there's a dependency for two
> trees for some new work, then just have a separate branch that's built
> on those two. For NVMe core work, then it should include the pending
> NVMe changes.

I guess it's hard to do so during the merge window since the block and
rdma trees are not in sync.

I think it would have been a good idea to add Jens to CC and mention that
we're posting code that is maintained by 2 different trees in the cover
letter.
Re: linux-next: manual merge of the block tree with the rdma tree
On 6/2/20 1:09 PM, Jason Gunthorpe wrote:
> On Tue, Jun 02, 2020 at 01:02:55PM -0600, Jens Axboe wrote:
>> On 6/2/20 1:01 PM, Jason Gunthorpe wrote:
>>> On Tue, Jun 02, 2020 at 11:37:26AM +0300, Max Gurtovoy wrote:
>>>> On 6/2/2020 5:56 AM, Stephen Rothwell wrote:
>>>>> Hi all,
>>>>
>>>> Hi,
>>>>
>>>> This looks good to me.
>>>>
>>>> Can you share a pointer to the tree so we'll test it in our labs ?
>>>>
>>>> need to re-test:
>>>>
>>>> 1. srq per core
>>>>
>>>> 2. srq per core + T10-PI
>>>>
>>>> And both will run with shared CQ.
>>>
>>> Max, this is too much conflict to send to Linus between your own
>>> patches. I am going to drop the nvme part of this from RDMA.
>>>
>>> Normally I don't like applying partial series, but due to this tree
>>> split, you can send the rebased nvme part through the nvme/block tree
>>> at rc1 in two weeks..
>>
>> Was going to comment that this is probably how it should have been
>> done to begin with. If we have multiple conflicts like that between
>> two trees, someone is doing something wrong...
>
> Well, on the other hand having people add APIs in one tree and then
> (promised) consumers in another tree later on has proven problematic
> in the past. It is best to try to avoid that, but in this case I don't
> think Max will have any delay to get the API consumer into nvme in two
> weeks.

Having conflicting trees is a problem. If there's a dependency for two
trees for some new work, then just have a separate branch that's built on
those two. For NVMe core work, then it should include the pending NVMe
changes.

-- 
Jens Axboe
Re: linux-next: manual merge of the block tree with the rdma tree
On Tue, Jun 02, 2020 at 01:02:55PM -0600, Jens Axboe wrote:
> On 6/2/20 1:01 PM, Jason Gunthorpe wrote:
> > On Tue, Jun 02, 2020 at 11:37:26AM +0300, Max Gurtovoy wrote:
> >>
> >> On 6/2/2020 5:56 AM, Stephen Rothwell wrote:
> >>> Hi all,
> >>
> >> Hi,
> >>
> >> This looks good to me.
> >>
> >> Can you share a pointer to the tree so we'll test it in our labs ?
> >>
> >> need to re-test:
> >>
> >> 1. srq per core
> >>
> >> 2. srq per core + T10-PI
> >>
> >> And both will run with shared CQ.
> >
> > Max, this is too much conflict to send to Linus between your own
> > patches. I am going to drop the nvme part of this from RDMA.
> >
> > Normally I don't like applying partial series, but due to this tree
> > split, you can send the rebased nvme part through the nvme/block tree
> > at rc1 in two weeks..
> 
> Was going to comment that this is probably how it should have been
> done to begin with. If we have multiple conflicts like that between
> two trees, someone is doing something wrong...

Well, on the other hand having people add APIs in one tree and then
(promised) consumers in another tree later on has proven problematic
in the past. It is best to try to avoid that, but in this case I don't
think Max will have any delay to get the API consumer into nvme in two
weeks.

Jason
Re: linux-next: manual merge of the block tree with the rdma tree
On Tue, Jun 02, 2020 at 11:37:26AM +0300, Max Gurtovoy wrote:
> 
> On 6/2/2020 5:56 AM, Stephen Rothwell wrote:
> > Hi all,
> 
> Hi,
> 
> This looks good to me.
> 
> Can you share a pointer to the tree so we'll test it in our labs ?
> 
> need to re-test:
> 
> 1. srq per core
> 
> 2. srq per core + T10-PI
> 
> And both will run with shared CQ.

Max, this is too much conflict to send to Linus between your own
patches. I am going to drop the nvme part of this from RDMA.

Normally I don't like applying partial series, but due to this tree
split, you can send the rebased nvme part through the nvme/block tree
at rc1 in two weeks..

Jason
Re: linux-next: manual merge of the block tree with the rdma tree
On 6/2/20 1:01 PM, Jason Gunthorpe wrote:
> On Tue, Jun 02, 2020 at 11:37:26AM +0300, Max Gurtovoy wrote:
>>
>> On 6/2/2020 5:56 AM, Stephen Rothwell wrote:
>>> Hi all,
>>
>> Hi,
>>
>> This looks good to me.
>>
>> Can you share a pointer to the tree so we'll test it in our labs ?
>>
>> need to re-test:
>>
>> 1. srq per core
>>
>> 2. srq per core + T10-PI
>>
>> And both will run with shared CQ.
>
> Max, this is too much conflict to send to Linus between your own
> patches. I am going to drop the nvme part of this from RDMA.
>
> Normally I don't like applying partial series, but due to this tree
> split, you can send the rebased nvme part through the nvme/block tree
> at rc1 in two weeks..

Was going to comment that this is probably how it should have been
done to begin with. If we have multiple conflicts like that between
two trees, someone is doing something wrong...

-- 
Jens Axboe
Re: linux-next: manual merge of the block tree with the rdma tree
Hi Max,

On Tue, 2 Jun 2020 11:37:26 +0300 Max Gurtovoy wrote:
>
> On 6/2/2020 5:56 AM, Stephen Rothwell wrote:
>
> This looks good to me.
>
> Can you share a pointer to the tree so we'll test it in our labs ?

git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git

you want tag next-20200602

or if you just want the trees that conflicted, then

block: git://git.kernel.dk/linux-block.git branch for-next
rdma:  git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma.git branch for-next

-- 
Cheers,
Stephen Rothwell
Re: linux-next: manual merge of the block tree with the rdma tree
On 6/2/2020 5:56 AM, Stephen Rothwell wrote:
> Hi all,

Hi,

This looks good to me.

Can you share a pointer to the tree so we'll test it in our labs ?

need to re-test:

1. srq per core

2. srq per core + T10-PI

And both will run with shared CQ.

> Today's linux-next merge of the block tree got a conflict in:
>
>   drivers/nvme/target/rdma.c
>
> between commit:
>
>   5733111dcd97 ("nvmet-rdma: use new shared CQ mechanism")
>
> from the rdma tree and commits:
>
>   b0012dd39715 ("nvmet-rdma: use SRQ per completion vector")
>   b09160c3996c ("nvmet-rdma: add metadata/T10-PI support")
>
> from the block tree.
>
> I fixed it up (see below) and can carry the fix as necessary. This
> is now fixed as far as linux-next is concerned, but any non trivial
> conflicts should be mentioned to your upstream maintainer when your tree
> is submitted for merging. You may also want to consider cooperating
> with the maintainer of the conflicting tree to minimise any particularly
> complex conflicts.
linux-next: manual merge of the block tree with the rdma tree
Hi all,

Today's linux-next merge of the block tree got a conflict in:

  drivers/nvme/target/rdma.c

between commit:

  5733111dcd97 ("nvmet-rdma: use new shared CQ mechanism")

from the rdma tree and commits:

  b0012dd39715 ("nvmet-rdma: use SRQ per completion vector")
  b09160c3996c ("nvmet-rdma: add metadata/T10-PI support")

from the block tree.

I fixed it up (see below) and can carry the fix as necessary. This
is now fixed as far as linux-next is concerned, but any non trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging. You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.

-- 
Cheers,
Stephen Rothwell

diff --cc drivers/nvme/target/rdma.c
index 2405db8bd855,d5141780592e..
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@@ -589,7 -751,8 +752,8 @@@ static void nvmet_rdma_read_data_done(s
  {
  	struct nvmet_rdma_rsp *rsp =
  		container_of(wc->wr_cqe, struct nvmet_rdma_rsp, read_cqe);
 -	struct nvmet_rdma_queue *queue = cq->cq_context;
 +	struct nvmet_rdma_queue *queue = wc->qp->qp_context;
+ 	u16 status = 0;
  
  	WARN_ON(rsp->n_rdma <= 0);
  	atomic_add(rsp->n_rdma, &queue->sq_wr_avail);
@@@ -996,8 -1257,9 +1258,8 @@@ static int nvmet_rdma_create_queue_ib(s
  	 */
  	nr_cqe = queue->recv_queue_size + 2 * queue->send_queue_size;
  
- 	queue->cq = ib_cq_pool_get(ndev->device, nr_cqe + 1, comp_vector,
 -	queue->cq = ib_alloc_cq(ndev->device, queue,
 -			nr_cqe + 1, queue->comp_vector,
 -			IB_POLL_WORKQUEUE);
++	queue->cq = ib_cq_pool_get(ndev->device, nr_cqe + 1, queue->comp_vector,
+ 			IB_POLL_WORKQUEUE);
  	if (IS_ERR(queue->cq)) {
  		ret = PTR_ERR(queue->cq);
  		pr_err("failed to create CQ cqe= %d ret= %d\n",
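[For readers who don't follow the RDMA core: this conflict exists because the rdma tree moved the driver from a private per-queue CQ (ib_alloc_cq()) to the new shared CQ pool (ib_cq_pool_get()/ib_cq_pool_put()). Below is a minimal sketch of the two styles; the example_* names and struct are hypothetical stand-ins for a ULP's own types, and everything beyond the allocation itself is elided.]

#include <linux/err.h>
#include <rdma/ib_verbs.h>

struct example_queue {			/* hypothetical ULP queue type */
	struct ib_cq *cq;
};

static int example_setup_cq(struct ib_device *dev, struct example_queue *q,
			    unsigned int nr_cqe, int comp_vector)
{
	/*
	 * Old style, one dedicated CQ per queue:
	 *
	 *	q->cq = ib_alloc_cq(dev, q, nr_cqe, comp_vector,
	 *			    IB_POLL_WORKQUEUE);
	 *
	 * New style: draw a CQ from the device-wide shared pool; the core
	 * may hand several queues the same CQ.
	 */
	q->cq = ib_cq_pool_get(dev, nr_cqe, comp_vector, IB_POLL_WORKQUEUE);
	if (IS_ERR(q->cq))
		return PTR_ERR(q->cq);

	return 0;
}

static void example_teardown_cq(struct example_queue *q, unsigned int nr_cqe)
{
	/* Return the reserved CQEs to the pool instead of destroying a CQ. */
	ib_cq_pool_put(q->cq, nr_cqe);
}

[The pool tracks CQE consumption per CQ, which is why the put side must pass back the same nr_cqe the queue reserved on get.]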
linux-next: manual merge of the block tree with the rdma tree
Hi all,

Today's linux-next merge of the block tree got a conflict in:

  drivers/nvme/host/rdma.c

between commit:

  583f69304b91 ("nvme-rdma: use new shared CQ mechanism")

from the rdma tree and commit:

  5ec5d3bddc6b ("nvme-rdma: add metadata/T10-PI support")

from the block tree.

I fixed it up (see below) and can carry the fix as necessary. This
is now fixed as far as linux-next is concerned, but any non trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging. You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.

-- 
Cheers,
Stephen Rothwell

diff --cc drivers/nvme/host/rdma.c
index 83d5f292c937,f8f856dc0c67..
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@@ -85,7 -95,7 +95,8 @@@ struct nvme_rdma_queue 
  	struct rdma_cm_id	*cm_id;
  	int			cm_error;
  	struct completion	cm_done;
 +	int			cq_size;
+ 	bool			pi_support;
  };
  
  struct nvme_rdma_ctrl {
@@@ -262,7 -272,8 +273,9 @@@ static int nvme_rdma_create_qp(struct n
  	init_attr.qp_type = IB_QPT_RC;
  	init_attr.send_cq = queue->ib_cq;
  	init_attr.recv_cq = queue->ib_cq;
 +	init_attr.qp_context = queue;
+ 	if (queue->pi_support)
+ 		init_attr.create_flags |= IB_QP_CREATE_INTEGRITY_EN;
  
  	ret = rdma_create_qp(queue->cm_id, dev->pd, &init_attr);
  
@@@ -426,43 -437,18 +447,49 @@@ static void nvme_rdma_destroy_queue_ib(
  	nvme_rdma_dev_put(dev);
  }
  
- static int nvme_rdma_get_max_fr_pages(struct ib_device *ibdev)
+ static int nvme_rdma_get_max_fr_pages(struct ib_device *ibdev, bool pi_support)
  {
- 	return min_t(u32, NVME_RDMA_MAX_SEGMENTS,
- 		     ibdev->attrs.max_fast_reg_page_list_len - 1);
+ 	u32 max_page_list_len;
+ 
+ 	if (pi_support)
+ 		max_page_list_len = ibdev->attrs.max_pi_fast_reg_page_list_len;
+ 	else
+ 		max_page_list_len = ibdev->attrs.max_fast_reg_page_list_len;
+ 
+ 	return min_t(u32, NVME_RDMA_MAX_SEGMENTS, max_page_list_len - 1);
  }
  
 +static int nvme_rdma_create_cq(struct ib_device *ibdev,
 +		struct nvme_rdma_queue *queue)
 +{
 +	int ret, comp_vector, idx = nvme_rdma_queue_idx(queue);
 +	enum ib_poll_context poll_ctx;
 +
 +	/*
 +	 * Spread I/O queues completion vectors according their queue index.
 +	 * Admin queues can always go on completion vector 0.
 +	 */
 +	comp_vector = idx == 0 ? idx : idx - 1;
 +
 +	/* Polling queues need direct cq polling context */
 +	if (nvme_rdma_poll_queue(queue)) {
 +		poll_ctx = IB_POLL_DIRECT;
 +		queue->ib_cq = ib_alloc_cq(ibdev, queue, queue->cq_size,
 +					   comp_vector, poll_ctx);
 +	} else {
 +		poll_ctx = IB_POLL_SOFTIRQ;
 +		queue->ib_cq = ib_cq_pool_get(ibdev, queue->cq_size,
 +					      comp_vector, poll_ctx);
 +	}
 +
 +	if (IS_ERR(queue->ib_cq)) {
 +		ret = PTR_ERR(queue->ib_cq);
 +		return ret;
 +	}
 +
 +	return 0;
 +}
 +
  static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue)
  {
  	struct ib_device *ibdev;
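[The hunk above only shows CQ creation. For context, the teardown side of the merged result plausibly mirrors it as sketched below; this is an assumption from the API shape, not code quoted from the email, and the real function in the merged tree may differ in name and details.]

/*
 * Assumed teardown counterpart to nvme_rdma_create_cq() above (not shown
 * in the conflict hunk): polling queues own a private CQ from
 * ib_alloc_cq(), so they free it; all other queues return theirs to the
 * shared pool together with the CQE count reserved in queue->cq_size.
 */
static void example_free_cq(struct nvme_rdma_queue *queue)
{
	if (nvme_rdma_poll_queue(queue))
		ib_free_cq(queue->ib_cq);
	else
		ib_cq_pool_put(queue->ib_cq, queue->cq_size);
}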
Re: linux-next: manual merge of the block tree with the rdma tree
On Wed, Aug 15, 2018 at 11:45:39AM +1000, Stephen Rothwell wrote:
> Hi all,
> 
> On Thu, 26 Jul 2018 13:58:04 +1000 Stephen Rothwell wrote:
> >
> > Today's linux-next merge of the block tree got a conflict in:
> >
> >   drivers/nvme/target/rdma.c
> >
> > between commits:
> >
> >   23f96d1f15a7 ("nvmet-rdma: Simplify ib_post_(send|recv|srq_recv)() calls")
> >   202093848cac ("nvmet-rdma: add an error flow for post_recv failures")
> >
> > from the rdma tree and commit:
> >
> >   2fc464e2162c ("nvmet-rdma: add unlikely check in the fast path")
> >
> > from the block tree.
> >
> > I fixed it up (see below) and can carry the fix as necessary. This
> > is now fixed as far as linux-next is concerned, but any non trivial
> > conflicts should be mentioned to your upstream maintainer when your tree
> > is submitted for merging. You may also want to consider cooperating
> > with the maintainer of the conflicting tree to minimise any particularly
> > complex conflicts.
> 
> This is now a conflict between Linus' tree and the rdma tree.

Yes, I expect this.. good thing we had linux-next, as several of these
needed non-obvious changes.

I keep track of your postings and build a conflict resolution for Linus
to refer to.

netdev is the last conflicting tree I expect, and it hasn't been sent
yet..

Thanks,
Jason
Re: linux-next: manual merge of the block tree with the rdma tree
Hi all,

On Thu, 26 Jul 2018 13:58:04 +1000 Stephen Rothwell wrote:
> 
> Today's linux-next merge of the block tree got a conflict in:
> 
>   drivers/nvme/target/rdma.c
> 
> between commits:
> 
>   23f96d1f15a7 ("nvmet-rdma: Simplify ib_post_(send|recv|srq_recv)() calls")
>   202093848cac ("nvmet-rdma: add an error flow for post_recv failures")
> 
> from the rdma tree and commit:
> 
>   2fc464e2162c ("nvmet-rdma: add unlikely check in the fast path")
> 
> from the block tree.
> 
> I fixed it up (see below) and can carry the fix as necessary. This
> is now fixed as far as linux-next is concerned, but any non trivial
> conflicts should be mentioned to your upstream maintainer when your tree
> is submitted for merging. You may also want to consider cooperating
> with the maintainer of the conflicting tree to minimise any particularly
> complex conflicts.
> 
> -- 
> Cheers,
> Stephen Rothwell
> 
> diff --cc drivers/nvme/target/rdma.c
> index 1a642e214a4c,e7f43d1e1779..
> --- a/drivers/nvme/target/rdma.c
> +++ b/drivers/nvme/target/rdma.c
> @@@ -382,13 -435,22 +435,21 @@@ static void nvmet_rdma_free_rsps(struc
>   static int nvmet_rdma_post_recv(struct nvmet_rdma_device *ndev,
>   		struct nvmet_rdma_cmd *cmd)
>   {
>  -	struct ib_recv_wr *bad_wr;
> + 	int ret;
> + 
>   	ib_dma_sync_single_for_device(ndev->device,
>   		cmd->sge[0].addr, cmd->sge[0].length,
>   		DMA_FROM_DEVICE);
>   
>   	if (ndev->srq)
> - 		return ib_post_srq_recv(ndev->srq, &cmd->wr, NULL);
> - 	return ib_post_recv(cmd->queue->cm_id->qp, &cmd->wr, NULL);
>  -		ret = ib_post_srq_recv(ndev->srq, &cmd->wr, &bad_wr);
> ++		ret = ib_post_srq_recv(ndev->srq, &cmd->wr, NULL);
> + 	else
>  -		ret = ib_post_recv(cmd->queue->cm_id->qp, &cmd->wr, &bad_wr);
> ++		ret = ib_post_recv(cmd->queue->cm_id->qp, &cmd->wr, NULL);
> + 
> + 	if (unlikely(ret))
> + 		pr_err("post_recv cmd failed\n");
> + 
> + 	return ret;
>   }
>   
>   static void nvmet_rdma_process_wr_wait_list(struct nvmet_rdma_queue *queue)
> @@@ -491,7 -553,7 +552,7 @@@ static void nvmet_rdma_queue_response(s
>   			rsp->send_sge.addr, rsp->send_sge.length,
>   			DMA_TO_DEVICE);
>   
> - 	if (ib_post_send(cm_id->qp, first_wr, NULL)) {
>  -	if (unlikely(ib_post_send(cm_id->qp, first_wr, &bad_wr))) {
> ++	if (unlikely(ib_post_send(cm_id->qp, first_wr, NULL))) {
>   		pr_err("sending cmd response failed\n");
>   		nvmet_rdma_release_rsp(rsp);
>   	}

This is now a conflict between Linus' tree and the rdma tree.

-- 
Cheers,
Stephen Rothwell
linux-next: manual merge of the block tree with the rdma tree
Hi all,

Today's linux-next merge of the block tree got a conflict in:

  drivers/nvme/target/rdma.c

between commits:

  23f96d1f15a7 ("nvmet-rdma: Simplify ib_post_(send|recv|srq_recv)() calls")
  202093848cac ("nvmet-rdma: add an error flow for post_recv failures")

from the rdma tree and commit:

  2fc464e2162c ("nvmet-rdma: add unlikely check in the fast path")

from the block tree.

I fixed it up (see below) and can carry the fix as necessary. This
is now fixed as far as linux-next is concerned, but any non trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging. You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.

-- 
Cheers,
Stephen Rothwell

diff --cc drivers/nvme/target/rdma.c
index 1a642e214a4c,e7f43d1e1779..
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@@ -382,13 -435,22 +435,21 @@@ static void nvmet_rdma_free_rsps(struc
  static int nvmet_rdma_post_recv(struct nvmet_rdma_device *ndev,
  		struct nvmet_rdma_cmd *cmd)
  {
 -	struct ib_recv_wr *bad_wr;
+ 	int ret;
+ 
  	ib_dma_sync_single_for_device(ndev->device,
  		cmd->sge[0].addr, cmd->sge[0].length,
  		DMA_FROM_DEVICE);
  
  	if (ndev->srq)
- 		return ib_post_srq_recv(ndev->srq, &cmd->wr, NULL);
- 	return ib_post_recv(cmd->queue->cm_id->qp, &cmd->wr, NULL);
 -		ret = ib_post_srq_recv(ndev->srq, &cmd->wr, &bad_wr);
++		ret = ib_post_srq_recv(ndev->srq, &cmd->wr, NULL);
+ 	else
 -		ret = ib_post_recv(cmd->queue->cm_id->qp, &cmd->wr, &bad_wr);
++		ret = ib_post_recv(cmd->queue->cm_id->qp, &cmd->wr, NULL);
+ 
+ 	if (unlikely(ret))
+ 		pr_err("post_recv cmd failed\n");
+ 
+ 	return ret;
  }
  
  static void nvmet_rdma_process_wr_wait_list(struct nvmet_rdma_queue *queue)
@@@ -491,7 -553,7 +552,7 @@@ static void nvmet_rdma_queue_response(s
  			rsp->send_sge.addr, rsp->send_sge.length,
  			DMA_TO_DEVICE);
  
- 	if (ib_post_send(cm_id->qp, first_wr, NULL)) {
 -	if (unlikely(ib_post_send(cm_id->qp, first_wr, &bad_wr))) {
++	if (unlikely(ib_post_send(cm_id->qp, first_wr, NULL))) {
  		pr_err("sending cmd response failed\n");
  		nvmet_rdma_release_rsp(rsp);
  	}
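[Some context on why NULL appears in the resolved calls (background knowledge, not part of Stephen's message): the rdma tree's verbs API rework around 23f96d1f15a7 made the posted work-request lists const and allowed callers to pass NULL for bad_wr when they don't care which WR failed, while the block tree's 2fc464e2162c added the unlikely() hints. A caller combining both styles, sketched with a hypothetical example_* name:]

#include <rdma/ib_verbs.h>

/*
 * Before the rdma-tree API change, every caller needed a bad_wr
 * out-parameter, even when it went unused:
 *
 *	struct ib_recv_wr *bad_wr;
 *	ret = ib_post_recv(qp, wr, &bad_wr);
 *
 * Afterwards the WR list is const and bad_wr may simply be NULL:
 */
static int example_post_recv(struct ib_qp *qp, const struct ib_recv_wr *wr)
{
	int ret;

	ret = ib_post_recv(qp, wr, NULL);
	if (unlikely(ret))	/* block-tree style: hint that errors are rare */
		pr_err("example: post_recv failed: %d\n", ret);

	return ret;
}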