On Thu, Jul 8, 2021 at 10:38 PM Lu Baolu <baolu...@linux.intel.com> wrote:
>
> Hi David,
>
> I like this idea. Thanks for proposing this.
>
> On 2021/7/7 15:55, David Stevens wrote:
> > Add support for per-domain dynamic pools of iommu bounce buffers to the
> > dma-iommu API. This allows iommu mappings to be reused while still
> > maintaining strict iommu protection. Allocating buffers dynamically
> > instead of using swiotlb carveouts makes per-domain pools more amenable
> > on systems with large numbers of devices or where devices are unknown.
>
> Have you ever considered leveraging the per-device swiotlb memory pool
> added by the series below?
>
> https://lore.kernel.org/linux-iommu/20210625123004.GA3170@willie-the-truck/

I'm not sure that's a good fit. The swiotlb pools are allocated during
device initialization, so they require setting aside the worst-case amount
of memory up front. That's okay when the pool is only used with a small
number of devices whose approximate memory usage is known in advance, but
it doesn't work as well with a large number of devices, or with unknown
(i.e. hotplugged) devices.

> > When enabled, all non-direct streaming mappings below a configurable
> > size will go through bounce buffers. Note that this means drivers which
> > don't properly use the DMA API (e.g. i915) cannot use an iommu when this
> > feature is enabled. However, all drivers which work with swiotlb=force
> > should work.
>
> If so, why not make it more scalable by adding a callback into the vendor
> iommu drivers? The vendor iommu drivers have enough information to tell
> whether bounce buffers are feasible for a specific domain.

I'm not very familiar with the specifics of VT-d or with restrictions in
the graphics hardware, but at least on the surface this looks like a
limitation of the i915 driver's implementation, not of the hardware. The
driver passes the DMA_ATTR_SKIP_CPU_SYNC flag but never calls the dma_sync
functions, since everything is coherent on x86 hardware. Bounce buffers,
however, violate the driver's assumption that the CPU and device views
never need to be synced. Since that's a property of how the driver is
written rather than an inherent hardware limitation, I don't think it's
something the iommu driver needs to handle.

One potential way to address this would be to add explicit support to the
DMA API for long-lived streaming mappings. Drivers can get that behavior
today via DMA_ATTR_SKIP_CPU_SYNC plus explicit dma_sync calls, but the DMA
API itself doesn't have enough information to treat ephemeral and
long-lived mappings differently. With a new DMA_ATTR flag for long-lived
streaming mappings, the DMA API could skip bounce buffers for them. That
flag could also serve as a performance optimization in the various dma-buf
implementations, since they seem to mostly fall into the long-lived
streaming category (the handful I checked do call dma_sync, so there isn't
a correctness issue).
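To make that concrete, here is a rough sketch of the long-lived streaming
pattern as it can already be written today. The function names and the
DMA_ATTR_LONG_LIVED attribute mentioned in the comment are made up for
illustration; DMA_ATTR_SKIP_CPU_SYNC and the dma_sync helpers are the
existing API.

#include <linux/dma-mapping.h>

/*
 * Map the buffer once, up front, skipping the implicit CPU sync that
 * dma_map_single() would otherwise do at map time. A hypothetical new
 * attribute (say, DMA_ATTR_LONG_LIVED) could be passed here instead to
 * let dma-iommu know it may skip bounce buffers for this mapping.
 */
static int setup_long_lived_mapping(struct device *dev, void *buf,
				    size_t size, dma_addr_t *dma)
{
	*dma = dma_map_single_attrs(dev, buf, size, DMA_BIDIRECTIONAL,
				    DMA_ATTR_SKIP_CPU_SYNC);
	return dma_mapping_error(dev, *dma) ? -ENOMEM : 0;
}

/* Each individual transfer syncs explicitly instead of remapping. */
static void do_one_transfer(struct device *dev, dma_addr_t dma, size_t size)
{
	/* Hand the buffer to the device... */
	dma_sync_single_for_device(dev, dma, size, DMA_BIDIRECTIONAL);

	/* ...the device does its DMA here... */

	/* ...then reclaim the buffer for the CPU afterwards. */
	dma_sync_single_for_cpu(dev, dma, size, DMA_BIDIRECTIONAL);
}

static void teardown_long_lived_mapping(struct device *dev, dma_addr_t dma,
					size_t size)
{
	/* The final sync already happened above, so skip it on unmap too. */
	dma_unmap_single_attrs(dev, dma, size, DMA_BIDIRECTIONAL,
			       DMA_ATTR_SKIP_CPU_SYNC);
}

i915 effectively does the mapping half of this but never makes the
dma_sync calls, which is exactly the assumption a bounce buffer breaks.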
-David

> > Bounce buffers serve as an optimization in situations where interactions
> > with the iommu are very costly. For example, virtio-iommu operations in
>
> The simulated IOMMU does the same thing.
>
> It's also an optimization for bare metal in cases where the strict mode
> of cache invalidation is used. CPU moving data is faster than IOMMU
> cache invalidation if the buffer is small.
>
> Best regards,
> baolu