On Jun 15 10:07, John Levon wrote:
> On Wed, Jun 15, 2022 at 10:48:26AM +0200, Klaus Jensen wrote:
> 
> > > By the way, I noticed that the patch never updates the cq's ei_addr
> > > value. Is that on purpose?
> > 
> > Yeah, I also mentioned this previously[1] and I still think we need to
> > update the event index. Otherwise (and my testing confirms this), we end
> > up in a situation where the driver skips the mmio, leaving a completion
> > queue entry "in use" on the device until some other completion comes
> > along.
> 
> Hmm, can you expand on this a little bit? We don't touch the cq eventidx in
> SPDK either, on the basis that mmio exits are expensive, and we only ever need
> to look at cq_head when we're checking for room when posting a completion -
> and in that case, we can just look directly at the shadow cq_head value.
> 
> Can you clarify the exact circumstance that needs an mmio write when the
> driver updates cq_head?
> 

Ah, I see - you are correct that not updating the eventidx reduces MMIO,
and that we read the shadow cq head anyway prior to posting completions.
I guess it's a perfectly reasonable device-side optimization in this
case. I think we can safely drop that addition again.
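For context, the reason the cq eventidx would matter at all is that the
driver only performs the doorbell MMIO when its new shadow value crosses
the device-published eventidx. The check (cf. Linux's
nvme_dbbuf_need_event(), itself borrowed from virtio's
vring_need_event()) boils down to a wrap-safe comparison; a minimal
sketch, with a made-up function name:

```c
#include <stdint.h>
#include <stdbool.h>

/* Wrap-safe "did new_idx pass event_idx?" check used by the NVMe shadow
 * doorbell mechanism (cf. Linux's nvme_dbbuf_need_event(), modeled on
 * virtio's vring_need_event()). The driver skips the doorbell MMIO
 * whenever this returns false. */
static bool doorbell_need_mmio(uint16_t event_idx, uint16_t new_idx,
                               uint16_t old_idx)
{
    return (uint16_t)(new_idx - event_idx - 1) <
           (uint16_t)(new_idx - old_idx);
}
```

If the device never advances the cq eventidx, this check keeps returning
false for cq_head updates and the driver never does the MMIO - which is
exactly the behavior under discussion, and harmless as long as the
device reads the shadow cq_head directly before posting.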

> BTW I'm surprised that this patch has just this:
> 
> +static void nvme_update_sq_eventidx(const NvmeSQueue *sq)
> +{
> +    pci_dma_write(&sq->ctrl->parent_obj, sq->ei_addr, &sq->tail,
> +                  sizeof(sq->tail));
> +}
> 
> Isn't this racy against the driver? Compare
> https://github.com/spdk/spdk/blob/master/lib/nvmf/vfio_user.c#L1317
> 
> thanks
> john

QEMU has full memory barriers on dma read/write, so I believe this is
safe?
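To spell out the race being referred to: the device reads the shadow
tail, decides to go idle, and writes the eventidx; if the driver bumps
the tail in that window, it compares against the stale eventidx, skips
the MMIO, and the new entry sits unprocessed. The mitigation in the
linked SPDK code is to re-read the shadow value after publishing the
eventidx, with a full barrier in between - which is the ordering the
QEMU DMA helpers would provide. A minimal sketch, with the shared
shadow-doorbell memory modeled as plain globals and hypothetical names:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical model of the device side of the race. In the real code
 * these fields live in guest memory and are accessed via fenced DMA
 * reads/writes, not plain loads/stores. */
static uint32_t shadow_sq_tail; /* written by the driver */
static uint32_t sq_eventidx;    /* written by the device */

/* Publish a new eventidx for the tail we last saw, then re-read the
 * shadow tail. If the driver advanced it concurrently, it may have
 * compared against the old eventidx and skipped its MMIO, so the
 * device must poll again rather than go idle. */
static bool device_arm_eventidx(uint32_t seen_tail)
{
    sq_eventidx = seen_tail;
    __sync_synchronize();               /* full barrier between the two accesses */
    return shadow_sq_tail != seen_tail; /* true: lost the race, poll again */
}
```

Without the re-read (or with the barrier missing), the device could arm
the eventidx and sleep while a submission it never observed sits in the
queue.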
