On Fri, Sep 26, 2008 at 02:32:43PM +0200, Joerg Roedel wrote:
> Ok, the allocation only matters for dma_alloc_coherent. Fujita
> introduced a generic software-based dma_alloc_coherent recently
> which you can use for that. I think implementing PVDMA in its own
> dma_ops backend and multiplexing it using my patches introduces less
> overhead than an additional layer over the current dma_ops
> implementation.
I'm not sure what you have in mind, but I agree with Amit that
conceptually pvdma should be called after the guest's "native" dma_ops
have done their thing. This is not just for nommu: consider a guest
that is using an (emulated) hardware IOMMU, or one that wants to use
swiotlb. We can't replicate their functionality in the pv_dma_ops
layer; we have to let them run first and then deal with whatever we
get back.

> Another two questions to your approach: What happens if a
> dma_alloc_coherent allocation crosses page boundaries and the gpa's
> are not contiguous in host memory? How will dma masks be handled?

That's a very good question. The host will need to be aware of a
device's DMA capabilities in order to return I/O addresses (which
could be hpa's if you don't have an IOMMU) that satisfy them. That's
quite a pain.

Cheers,

Muli
--
The First Workshop on I/O Virtualization (WIOV '08)
Dec 2008, San Diego, CA, http://www.usenix.org/wiov08/
SYSTOR 2009---The Israeli Experimental Systems Conference
http://www.haifa.il.ibm.com/conferences/systor2009/