On 2022-03-24 10:25, Oleksandr Natalenko wrote:
Hello.

On Thursday, March 24, 2022, 6:57:32 CET Christoph Hellwig wrote:
On Wed, Mar 23, 2022 at 08:54:08PM +0000, Robin Murphy wrote:
I'll admit I still never quite grasped the reason for also adding the
override to swiotlb_sync_single_for_device() in aa6f8dcbab47, but I think
by that point we were increasingly tired and confused and starting to
second-guess ourselves (well, I was, at least). I don't think it's wrong
per se, but as I said I do think it can bite anyone who's been doing
dma_sync_*() wrong but getting away with it until now. If ddbd89deb7d3
alone turns out to work OK then I'd be inclined to try a partial revert of
just that one hunk.

Agreed.  Let's try that first.

Oleksandr, can you try the patch below:


diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 6db1c475ec827..6c350555e5a1c 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -701,13 +701,10 @@ void swiotlb_tbl_unmap_single(struct device *dev, phys_addr_t tlb_addr,
 void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr,
 		size_t size, enum dma_data_direction dir)
 {
-       /*
-        * Unconditional bounce is necessary to avoid corruption on
-        * sync_*_for_cpu or dma_ummap_* when the device didn't overwrite
-        * the whole lengt of the bounce buffer.
-        */
-       swiotlb_bounce(dev, tlb_addr, size, DMA_TO_DEVICE);
-       BUG_ON(!valid_dma_direction(dir));
+       if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
+               swiotlb_bounce(dev, tlb_addr, size, DMA_TO_DEVICE);
+       else
+               BUG_ON(dir != DMA_FROM_DEVICE);
 }
 void swiotlb_sync_single_for_cpu(struct device *dev, phys_addr_t tlb_addr,


With this patch the AP works for me.

Cool, thanks for confirming. So I think ath9k is probably doing something dodgy with dma_sync_*(), but if Linus prefers to make the above change rather than wait for that to be figured out, I believe that should be fine.

The crucial part of the "rework" patch is that we'll unconditionally initialise the SWIOTLB bounce slot as it's allocated in swiotlb_tbl_map_single(), regardless of DMA_ATTR_SKIP_CPU_SYNC. As long as that happens, we're safe in terms of leaking data from previous mappings, and any possibility for incorrect sync usage to lose newly-written DMA data is at least no worse than it has ever been. Most of the confusion was around how the proposed DMA_ATTR_OVERWRITE attribute would need to interact with DMA_ATTR_SKIP_CPU_SYNC to remain safe while still offering any useful advantage, so unless and until anyone wants to revisit that, this should remain comparatively simple to reason about.

Cheers,
Robin.
_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu
