The CPU may only access DMA-mapped memory once ownership has been
transferred back to the CPU using dma_sync_{single,sg}_for_cpu, and
before the device can access it again ownership must be transferred
back to the device using dma_sync_{single,sg}_for_device.
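
To make that hand-off concrete, here is a minimal sketch for a single
streaming mapping in the DMA_FROM_DEVICE direction; dev, buf and len are
hypothetical placeholders, not code from the patch under discussion, and
the same pattern applies to sg lists with the _sg variants:

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/printk.h>
#include <linux/types.h>

/* Hypothetical example only: dev, buf and len are placeholders. */
static int example_dma_rx(struct device *dev, void *buf, size_t len)
{
	dma_addr_t handle;

	/* Map the buffer; the device owns it from here on. */
	handle = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, handle))
		return -ENOMEM;

	/* ... the device DMAs into the buffer ... */

	/* Transfer ownership back to the CPU before reading the data. */
	dma_sync_single_for_cpu(dev, handle, len, DMA_FROM_DEVICE);
	pr_info("first byte: %#x\n", *(u8 *)buf);

	/* Transfer ownership back to the device before it writes again. */
	dma_sync_single_for_device(dev, handle, len, DMA_FROM_DEVICE);

	/* ... more DMA, and eventually ... */
	dma_unmap_single(dev, handle, len, DMA_FROM_DEVICE);
	return 0;
}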

> I've run some testing, and this patch does indeed fix the crash in
> dma_sync_sg_for_cpu when it tried to use the 0 dma_address from the sg
> list.
> 
> Tested-by: Ørjan Eide <orjan.e...@arm.com>
> 
> I tested this on an older kernel, v4.14, since in v4.19 the dma-mapping
> code changed to ignore the dma_address and instead use sg_phys() to get
> a valid address from the page, which is always valid in the ion sg
> lists. While this wouldn't crash on newer kernels, it's still good to
> avoid the unnecessary work when no CMO is needed.

Can you also test it with CONFIG_DMA_API_DEBUG enabled, as that should
catch all the usual mistakes in DMA API usage, including the one found?
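
For reference, the config fragment would look roughly like this (the
second option only exists on kernels newer than v4.14, so treat it as
optional if it is not available on the kernel under test):

CONFIG_DMA_API_DEBUG=y
CONFIG_DMA_API_DEBUG_SG=y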
