On Sat, Dec 27, 2025 at 11:52:45AM +1300, Barry Song wrote:
> From: Barry Song <[email protected]>
> 
> Instead of performing a flush per SG entry, issue all cache
> operations first and then flush once. This ultimately benefits
> __dma_sync_sg_for_cpu() and __dma_sync_sg_for_device().
> 
> Cc: Leon Romanovsky <[email protected]>
> Cc: Catalin Marinas <[email protected]>
> Cc: Will Deacon <[email protected]>
> Cc: Marek Szyprowski <[email protected]>
> Cc: Robin Murphy <[email protected]>
> Cc: Ada Couprie Diaz <[email protected]>
> Cc: Ard Biesheuvel <[email protected]>
> Cc: Marc Zyngier <[email protected]>
> Cc: Anshuman Khandual <[email protected]>
> Cc: Ryan Roberts <[email protected]>
> Cc: Suren Baghdasaryan <[email protected]>
> Cc: Tangquan Zheng <[email protected]>
> Signed-off-by: Barry Song <[email protected]>
> ---
>  kernel/dma/direct.c | 14 +++++++-------
>  1 file changed, 7 insertions(+), 7 deletions(-)

<...>

> -             if (!dev_is_dma_coherent(dev)) {
> +             if (!dev_is_dma_coherent(dev))
>                       arch_sync_dma_for_device(paddr, sg->length,
>                                       dir);
> -                     arch_sync_dma_flush();
> -             }
>       }
> +     if (!dev_is_dma_coherent(dev))
> +             arch_sync_dma_flush();

This patch should be squashed into the previous one. You introduced
arch_sync_dma_flush() there, and now you are placing it elsewhere.

Thanks