> -----Original Message-----
> From: Ni, Ruiyu
> Sent: Tuesday, October 23, 2018 4:54 PM
> To: Wu, Hao A; edk2-devel@lists.01.org
> Cc: Yao, Jiewen; Zeng, Star
> Subject: Re: [PATCH v1 3/3] MdeModulePkg/NvmExpressDxe: Refine PassThru IO queue creation behavior
>
> On 10/18/2018 2:42 PM, Hao Wu wrote:
> > REF:https://bugzilla.tianocore.org/show_bug.cgi?id=1260
> >
> > For the PassThru() service of NVM Express Pass Through Protocol, the
> > current implementation (function NvmExpressPassThru()) will only use the
> > IO Completion/Submission queues created internally by this driver during
> > the controller initialization process. Any other IO queues created will
> > not be consumed.
> >
> > So there is little value in accepting external IO Completion/Submission
> > queue creation requests. This commit refines the behavior of function
> > NvmExpressPassThru(): it will only accept driver-internal IO queue
> > creation commands and will return "EFI_UNSUPPORTED" for external ones.
> >
> > Cc: Jiewen Yao <jiewen....@intel.com>
> > Cc: Ruiyu Ni <ruiyu...@intel.com>
> > Cc: Star Zeng <star.z...@intel.com>
> > Contributed-under: TianoCore Contribution Agreement 1.1
> > Signed-off-by: Hao Wu <hao.a...@intel.com>
> > ---
> >  MdeModulePkg/Bus/Pci/NvmExpressDxe/NvmExpressPassthru.c | 42 ++++++++++++++++----
> >  1 file changed, 34 insertions(+), 8 deletions(-)
> >
> > diff --git a/MdeModulePkg/Bus/Pci/NvmExpressDxe/NvmExpressPassthru.c b/MdeModulePkg/Bus/Pci/NvmExpressDxe/NvmExpressPassthru.c
> > index c52e960771..0c550bd52c 100644
> > --- a/MdeModulePkg/Bus/Pci/NvmExpressDxe/NvmExpressPassthru.c
> > +++ b/MdeModulePkg/Bus/Pci/NvmExpressDxe/NvmExpressPassthru.c
> > @@ -476,6 +476,7 @@ NvmExpressPassThru (
> >    UINT32                    Data;
> >    NVME_PASS_THRU_ASYNC_REQ  *AsyncRequest;
> >    EFI_TPL                   OldTpl;
> > +  UINT32                    CrIoQid;
> >
> >    //
> >    // check the data fields in Packet parameter.
> > @@ -587,14 +588,39 @@ NvmExpressPassThru (
> >    }
> >
> >    Sq->Prp[0] = (UINT64)(UINTN)Packet->TransferBuffer;
> > -  //
> > -  // If the NVMe cmd has data in or out, then mapping the user buffer to the PCI controller specific addresses.
> > -  // Note here we don't handle data buffer for CreateIOSubmitionQueue and CreateIOCompletionQueue cmds because
> > -  // these two cmds are special which requires their data buffer must support simultaneous access by both the
> > -  // processor and a PCI Bus Master. It's caller's responsbility to ensure this.
> > -  //
> > -  if (((Sq->Opc & (BIT0 | BIT1)) != 0) &&
> > -      !((Packet->QueueType == NVME_ADMIN_QUEUE) && ((Sq->Opc == NVME_ADMIN_CRIOCQ_CMD) || (Sq->Opc == NVME_ADMIN_CRIOSQ_CMD)))) {
> > +  if ((Packet->QueueType == NVME_ADMIN_QUEUE) &&
> > +      ((Sq->Opc == NVME_ADMIN_CRIOCQ_CMD) || (Sq->Opc == NVME_ADMIN_CRIOSQ_CMD))) {
> > +    //
> > +    // Command Dword 10 should be valid for CreateIOCompletionQueue and
> > +    // CreateIOSubmissionQueue commands.
> > +    //
> > +    if (!(Packet->NvmeCmd->Flags & CDW10_VALID)) {
> > +      return EFI_INVALID_PARAMETER;
> > +    }
> > +
> > +    //
> > +    // Bits 15:0 of Command Dword 10 are the Queue Identifier (QID) for the
> > +    // CreateIOCompletionQueue and CreateIOSubmissionQueue commands.
> > +    //
> > +    CrIoQid = Packet->NvmeCmd->Cdw10 & 0xFFFF;
> > +
> > +    //
> > +    // Currently, we only use the IO Completion/Submission queues created internally
> > +    // by this driver during controller initialization. Any other IO queues created
> > +    // will not be consumed here. There is little value in accepting external IO
> > +    // queue creation requests, so here we will return EFI_UNSUPPORTED for external
> > +    // IO queue creation requests.
> > +    //
> > +    if ((CrIoQid >= NVME_MAX_QUEUES) ||
> > +        ((Sq->Opc == NVME_ADMIN_CRIOCQ_CMD) && (Packet->TransferBuffer != Private->CqBufferPciAddr[CrIoQid])) ||
> > +        ((Sq->Opc == NVME_ADMIN_CRIOSQ_CMD) && (Packet->TransferBuffer != Private->SqBufferPciAddr[CrIoQid]))) {
> > +      DEBUG ((DEBUG_ERROR, "NvmExpressPassThru: Does not support external IO queues creation request.\n"));
> > +      return EFI_UNSUPPORTED;
> > +    }
>
> Is the above check there to make sure that only IO queue creation
> requests from NvmeCreateIoCompletionQueue() and
> NvmeCreateIoSubmissionQueue() are accepted?
Yes. Actually, the above code (including the "Packet->NvmeCmd->Flags" check) is there to ensure that the IO queue creation request is internal.

> The check is very complex and unnecessary IMO. If we introduce a boolean
> field like CreateQueue in the Private structure and set it to TRUE when
> calling PassThru() from the above two internal functions, the above
> check can be replaced with a one-line check:
>   if (Private->CreateQueue)
>
> It does introduce a "dirty" flag in the Private structure, but the above
> complex check can be avoided.
> What's your opinion?

Yes, I think the code is simpler this way. I will propose a V2 of the series.

Best Regards,
Hao Wu

> > +  } else if ((Sq->Opc & (BIT0 | BIT1)) != 0) {
> > +    //
> > +    // If the NVMe cmd has data in or out, then map the user buffer to the PCI controller specific addresses.
> > +    //
> >      if (((Packet->TransferLength != 0) && (Packet->TransferBuffer == NULL)) ||
> >          ((Packet->TransferLength == 0) && (Packet->TransferBuffer != NULL))) {
> >        return EFI_INVALID_PARAMETER;
>
> --
> Thanks,
> Ray