On Tue, Oct 27, 2015 at 7:21 PM, Joerg Roedel <jroe...@suse.de> wrote:
> On Tue, Oct 27, 2015 at 07:13:56PM -0700, Andy Lutomirski wrote:
>> On Tue, Oct 27, 2015 at 7:06 PM, Joerg Roedel <jroe...@suse.de> wrote:
>> > Hi Andy,
>> >
>> > On Tue, Oct 27, 2015 at 06:17:09PM -0700, Andy Lutomirski wrote:
>> >> From: Andy Lutomirski <l...@amacapital.net>
>> >>
>> >> virtio_ring currently sends the device (usually a hypervisor)
>> >> physical addresses of its I/O buffers.  This is okay when DMA
>> >> addresses and physical addresses are the same thing, but this isn't
>> >> always the case.  For example, this never works on Xen guests, and
>> >> it is likely to fail if a physical "virtio" device ever ends up
>> >> behind an IOMMU or swiotlb.
>> >
>> > The overall code looks good, but I haven't seen any dma_sync* calls.
>> > When swiotlb=force is in use this would break.
>> >
>> >> +	vq->vring.desc[head].addr = cpu_to_virtio64(_vq->vdev, vring_map_single(
>> >> +		vq,
>> >> +		desc, total_sg * sizeof(struct vring_desc),
>> >> +		DMA_TO_DEVICE));
>> >
>>
>> Are you talking about a dma_sync call on the descriptor ring itself?
>> Isn't dma_alloc_coherent supposed to make that unnecessary?  I should
>> move the allocation into the virtqueue code.
>>
>> The docs suggest that I might need to "flush the processor's write
>> buffers before telling devices to read that memory".  I'm not sure how
>> to do that.
>
> The write buffers should be flushed by the dma-api functions if
> necessary.  For dma_alloc_coherent allocations you don't need to call
> dma_sync*, but you do for the map_single/map_page/map_sg ones, as these
> might be bounce-buffered.
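Right, that makes sense.  For the streaming mappings the pattern would
be roughly the following, if I understand the DMA API docs correctly
(just a sketch -- "dev", "buf", and stream_one_buffer() are made-up
names, not actual virtio_ring code):

	static int stream_one_buffer(struct device *dev, void *buf, size_t len)
	{
		dma_addr_t addr;

		/*
		 * Map the buffer.  With swiotlb=force this allocates a
		 * bounce buffer and copies buf's contents into it, so the
		 * device must only ever be given addr, never the physical
		 * address of buf itself.
		 */
		addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
		if (dma_mapping_error(dev, addr))
			return -ENOMEM;

		/* ... hand addr to the device ... */

		/*
		 * If the CPU needs to touch buf again while it is still
		 * mapped, ownership has to bounce back and forth so the
		 * bounce buffer stays in sync with buf:
		 */
		dma_sync_single_for_cpu(dev, addr, len, DMA_TO_DEVICE);
		/* ... CPU updates buf ... */
		dma_sync_single_for_device(dev, addr, len, DMA_TO_DEVICE);

		/* ... */

		dma_unmap_single(dev, addr, len, DMA_TO_DEVICE);
		return 0;
	}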
I think that all the necessary barriers are already there.  I had a
nasty bug that swiotlb=force exposed, though, which I've fixed.

--Andy