On 21/12/2016 17:19, Fam Zheng wrote:
> It's clever! It'd be a bit more complicated than that, though. Things
> like queues etc. in block/nvme.c have to be preserved, and if we
> already ensure that, RAM blocks can be preserved similarly, but indeed
> bounce buffers can be handled that way. I still need to think about
> how to make sure none of the invalidated IOVA addresses are in use by
> other requests.

Hmm, that's true.  As you said, we'll probably want to split the IOVA
space in two, with a relatively small part for "volatile" addresses.
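
Concretely, the split could look something like this (just a sketch;
the names and sizes below are invented for illustration, nothing from
block/nvme.c):

    /* Hypothetical IOVA layout; all names and sizes are made up. */
    #define QEMU_VFIO_IOVA_MIN       0x10000ULL
    #define QEMU_VFIO_IOVA_MAX       (1ULL << 39)
    /* Small window at the top, reserved for short-lived "volatile"
     * mappings (bounce buffers).  Everything below it is permanent
     * (queues, RAM blocks) and is never touched by the bulk unmap. */
    #define QEMU_VFIO_VOLATILE_SIZE  (512ULL << 20)
    #define QEMU_VFIO_VOLATILE_BASE  (QEMU_VFIO_IOVA_MAX - QEMU_VFIO_VOLATILE_SIZE)

Volatile allocations can then just bump a cursor inside the window;
when the cursor reaches the end of the window, the whole window is
unmapped in one go and the cursor resets.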

You can add two counters, one per phase, that track how many requests
are using volatile space.  When it's time to do the
VFIO_IOMMU_UNMAP_DMA, you do something like this (using QEMU's CoQueue
primitives; the co_queue field, the container fd and the unmap argument
are placeholders):

    if (vfio->next_phase == vfio->current_phase) {
        vfio->next_phase = !vfio->current_phase;
        /* Drain requests still holding volatile IOVAs in this phase. */
        while (vfio->request_counter[vfio->current_phase] != 0) {
            qemu_co_queue_wait(&vfio->co_queue);
        }
        ioctl(vfio->container, VFIO_IOMMU_UNMAP_DMA, &unmap);
        vfio->current_phase = vfio->next_phase;
        /* Unblock requests that were waiting for the new phase. */
        qemu_co_queue_restart_all(&vfio->co_queue);
    } else {
        /* Another coroutine is already unmapping; wait for it. */
        while (vfio->next_phase != vfio->current_phase) {
            qemu_co_queue_wait(&vfio->co_queue);
        }
    }
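
For completeness, the per-request side could look like this (again a
sketch; the helper names are assumptions):

    /* Join the current phase before taking a volatile IOVA, and
     * remember which phase we joined. */
    static void volatile_ref(QEMUVFIOState *vfio, int *phase)
    {
        *phase = vfio->current_phase;
        vfio->request_counter[*phase]++;
    }

    /* Drop the reference when the I/O completes; the last request of
     * a draining phase lets the pending VFIO_IOMMU_UNMAP_DMA proceed. */
    static void volatile_unref(QEMUVFIOState *vfio, int phase)
    {
        if (--vfio->request_counter[phase] == 0) {
            qemu_co_queue_restart_all(&vfio->co_queue);
        }
    }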

As an optimization, incrementing/decrementing request_counter can be
delayed until you hit the first element of the QEMUIOVector that needs
a volatile IOVA.  Since guest RAM stays permanently mapped, in practice
the counter should then never be incremented during guest execution.
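
Roughly like this, with invented helper names (qemu_vfio_lookup_fixed_iova
standing in for whatever resolves a permanent mapping):

    /* Returns true if the request joined the current phase; the caller
     * must then volatile_unref() when the I/O completes. */
    static bool qemu_vfio_map_qiov(QEMUVFIOState *vfio, QEMUIOVector *qiov,
                                   uint64_t *iova, int *phase)
    {
        bool counted = false;
        int i;

        for (i = 0; i < qiov->niov; i++) {
            /* Permanent mapping (guest RAM, queues): nothing to count. */
            if (qemu_vfio_lookup_fixed_iova(vfio, qiov->iov[i].iov_base,
                                            qiov->iov[i].iov_len, &iova[i])) {
                continue;
            }
            if (!counted) {
                volatile_ref(vfio, phase);  /* increment lazily */
                counted = true;
            }
            iova[i] = qemu_vfio_map_volatile(vfio, &qiov->iov[i]);
        }
        return counted;
    }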

Paolo

> Also I wonder how expensive the huge VFIO_IOMMU_UNMAP_DMA is. In the
> worst case the "throwaway" IOVAs can be limited to a small range.
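
For what it's worth, keeping the throwaway IOVAs in a small window also
means the unmap stays a single ranged ioctl (reusing the invented
window constants from the sketch above):

    /* Unmap only the volatile window in one VFIO_IOMMU_UNMAP_DMA;
     * the permanent mappings below it are untouched. */
    struct vfio_iommu_type1_dma_unmap unmap = {
        .argsz = sizeof(unmap),
        .iova  = QEMU_VFIO_VOLATILE_BASE,
        .size  = QEMU_VFIO_VOLATILE_SIZE,
    };
    if (ioctl(vfio->container, VFIO_IOMMU_UNMAP_DMA, &unmap) < 0) {
        error_report("VFIO_IOMMU_UNMAP_DMA failed: %s", strerror(errno));
    }

Type1 removes every mapping fully contained in the range with a single
call, so the cost scales with the number of volatile mappings rather
than with the whole IOVA space.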
