If the swiotlb maps a multi-slab region, swiotlb_sync_single_range() can be invoked to sync a sub-region which does not include the first slab. Unfortunately, io_tlb_orig_addr[] is initialised only for the first slab, so the call to sync_single() reads a garbage orig_addr in this case.
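To illustrate the failure, here is a small standalone sketch of the index arithmetic (not kernel code; IO_TLB_SHIFT matches lib/swiotlb.c, but the array size and the offsets are made up for the example). A mapping spanning several 2KB slabs only records its original address at the first slab's index, so a sync that starts beyond the first slab looks up an entry that was never written:

	#include <stdio.h>
	#include <stddef.h>

	#define IO_TLB_SHIFT	11		/* 2KB slabs, as in lib/swiotlb.c */
	#define NSLABS		8		/* toy bounce-buffer size for this sketch */

	static char *io_tlb_orig_addr[NSLABS];	/* original address recorded per slab */

	int main(void)
	{
		char orig[4 * (1 << IO_TLB_SHIFT)];	/* pretend original buffer: 4 slabs */
		size_t map_base = 0;			/* mapping starts at slab 0 */

		/* Pre-patch behaviour: only the first slab's entry is initialised. */
		io_tlb_orig_addr[map_base] = orig;

		/* Sync a sub-range beginning 3KB into the mapping, i.e. in slab 1. */
		size_t sync_offset = 3 * 1024;
		size_t index = map_base + (sync_offset >> IO_TLB_SHIFT);

		/* NULL here; in the kernel this entry is simply garbage. */
		printf("sync looks up io_tlb_orig_addr[%zu] = %p\n",
		       index, (void *)io_tlb_orig_addr[index]);
		return 0;
	}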
This patch fixes the issue by initialising io_tlb_orig_addr[] for all mapped slabs. It also adjusts the buffer pointer in sync_single() so that it handles the case where the given dma_addr is not aligned on a slab boundary.

Signed-off-by: Keir Fraser <[EMAIL PROTECTED]>

(Please Cc me in any response: I am not subscribed to lkml)

diff -urpN linux-2.6.22/lib/swiotlb.c linux-2.6.22-new/lib/swiotlb.c
--- linux-2.6.22/lib/swiotlb.c	2007-07-10 11:38:33.000000000 +0100
+++ linux-2.6.22-new/lib/swiotlb.c	2007-07-10 11:48:44.000000000 +0100
@@ -357,7 +357,8 @@ map_single(struct device *hwdev, char *b
 	 * This is needed when we sync the memory. Then we sync the buffer if
 	 * needed.
 	 */
-	io_tlb_orig_addr[index] = buffer;
+	for (i = 0; i < nslots; i++)
+		io_tlb_orig_addr[index+i] = buffer + (i << IO_TLB_SHIFT);
 	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
 		memcpy(dma_addr, buffer, size);
 
@@ -418,6 +419,8 @@ sync_single(struct device *hwdev, char *
 	int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
 	char *buffer = io_tlb_orig_addr[index];
 
+	buffer += ((unsigned long)dma_addr & ((1 << IO_TLB_SHIFT) - 1));
+
 	switch (target) {
 	case SYNC_FOR_CPU:
 		if (likely(dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
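For context, the sequence that triggers the bug looks roughly like the following driver fragment (a hedged sketch only, not taken from the patch or any particular driver; the device pointer dev, the buffer size and the offsets are illustrative, and the mapping is assumed to be bounced through the swiotlb):

	/* Illustrative only: assumes struct device *dev, the usual DMA-mapping
	 * headers, and a mapping that goes through the swiotlb bounce buffer. */
	char *buf = kmalloc(8192, GFP_KERNEL);	/* spans several 2KB slabs */
	dma_addr_t dma = dma_map_single(dev, buf, 8192, DMA_FROM_DEVICE);

	/* ... device DMAs into the whole buffer ... */

	/*
	 * Sync only the tail of the mapping: offset 4096 lies beyond the
	 * first slab, so the pre-patch sync_single() looks up an
	 * uninitialised io_tlb_orig_addr[] entry.
	 */
	dma_sync_single_range_for_cpu(dev, dma, 4096, 4096, DMA_FROM_DEVICE);

	/* ... CPU reads buf + 4096 ... */

	dma_unmap_single(dev, dma, 8192, DMA_FROM_DEVICE);
	kfree(buf);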