They rather fundamentally break the entire concept of zero copy, so if
an exporter manages to hand these out, things will break all over.
Luckily there aren't too many cases that use
swiotlb_sync_single_for_device/cpu():

- The generic iommu dma-api code in drivers/iommu/dma-iommu.c. We can
  catch that with sg_dma_is_swiotlb() reliably.

- The generic direct dma code in kernel/dma/direct.c. We can mostly
  catch that by looking for a NULL dma_ops, except for some powerpc
  special cases.

- Xen, which I don't bother to catch here.

Implement these checks in dma_buf_map_attachment when
CONFIG_DMA_API_DEBUG is enabled.

Signed-off-by: Daniel Vetter <daniel.vet...@intel.com>
Cc: Sumit Semwal <sumit.sem...@linaro.org>
Cc: "Christian König" <christian.koe...@amd.com>
Cc: linux-me...@vger.kernel.org
Cc: linaro-mm-...@lists.linaro.org
Cc: Paul Cercueil <p...@crapouillou.net>
---
Entirely untested, but since I sent the mail with the idea I figured I
might as well type it up after I realized there are a lot fewer cases to
check. That is, if I haven't completely misread the dma-api and swiotlb
code.
-Sima
---
 drivers/dma-buf/dma-buf.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index d1e7f823fbdb..d6f95523f995 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -28,6 +28,12 @@
 #include <linux/mount.h>
 #include <linux/pseudo_fs.h>
 
+#ifdef CONFIG_DMA_API_DEBUG
+#include <linux/dma-direct.h>
+#include <linux/dma-map-ops.h>
+#include <linux/swiotlb.h>
+#endif
+
 #include <uapi/linux/dma-buf.h>
 #include <uapi/linux/magic.h>
 
@@ -1149,10 +1155,13 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach,
 #ifdef CONFIG_DMA_API_DEBUG
 	if (!IS_ERR(sg_table)) {
 		struct scatterlist *sg;
+		struct device *dev = attach->dev;
 		u64 addr;
 		int len;
 		int i;
 
+		bool is_direct_dma = !get_dma_ops(dev);
+
 		for_each_sgtable_dma_sg(sg_table, sg, i) {
 			addr = sg_dma_address(sg);
 			len = sg_dma_len(sg);
@@ -1160,7 +1169,15 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach,
 				pr_debug("%s: addr %llx or len %x is not page aligned!\n",
 					 __func__, addr, len);
 			}
+
+			if (is_direct_dma) {
+				phys_addr_t paddr = dma_to_phys(dev, addr);
+
+				WARN_ON_ONCE(is_swiotlb_buffer(dev, paddr));
+			}
 		}
+
+		WARN_ON_ONCE(sg_dma_is_swiotlb(sg));
 	}
 #endif /* CONFIG_DMA_API_DEBUG */
 	return sg_table;
-- 
2.43.0
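
For readers following along: below is a rough, untested sketch of the same
two checks pulled out into a standalone helper, just to make the control
flow easier to see. The helper name dma_buf_check_swiotlb() is made up here
and is not part of the patch; everything else (get_dma_ops(), dma_to_phys(),
is_swiotlb_buffer(), sg_dma_is_swiotlb(), for_each_sgtable_dma_sg()) is the
same machinery the diff above uses.

	/*
	 * Sketch only: dma_buf_check_swiotlb() is a hypothetical helper,
	 * not part of the patch above, which open-codes these checks in
	 * dma_buf_map_attachment().
	 */
	static void dma_buf_check_swiotlb(struct device *dev,
					  struct sg_table *sgt)
	{
		struct scatterlist *sg;
		int i;

		/*
		 * iommu dma-api path (drivers/iommu/dma-iommu.c): the mapping
		 * code flags the sg_table when it had to bounce through
		 * swiotlb, so just check the flag.
		 */
		WARN_ON_ONCE(sg_dma_is_swiotlb(sgt->sgl));

		/*
		 * direct dma path (kernel/dma/direct.c): no dma_ops installed,
		 * so translate each dma address back to a physical address and
		 * ask swiotlb whether it lives in the bounce buffer pool.
		 */
		if (get_dma_ops(dev))
			return;

		for_each_sgtable_dma_sg(sgt, sg, i) {
			phys_addr_t paddr = dma_to_phys(dev, sg_dma_address(sg));

			WARN_ON_ONCE(is_swiotlb_buffer(dev, paddr));
		}
	}

The direct-dma check stays behind the NULL dma_ops test because
dma_to_phys() is only meaningful on the direct mapping path, which is also
why the powerpc special cases and Xen mentioned in the commit message slip
through undetected.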