On Sun, Dec 28, 2025 at 9:09 AM Leon Romanovsky <[email protected]> wrote:
>
> On Sat, Dec 27, 2025 at 11:52:45AM +1300, Barry Song wrote:
> > From: Barry Song <[email protected]>
> >
> > Instead of performing a flush per SG entry, issue all cache
> > operations first and then flush once. This ultimately benefits
> > __dma_sync_sg_for_cpu() and __dma_sync_sg_for_device().
> >
> > Cc: Leon Romanovsky <[email protected]>
> > Cc: Catalin Marinas <[email protected]>
> > Cc: Will Deacon <[email protected]>
> > Cc: Marek Szyprowski <[email protected]>
> > Cc: Robin Murphy <[email protected]>
> > Cc: Ada Couprie Diaz <[email protected]>
> > Cc: Ard Biesheuvel <[email protected]>
> > Cc: Marc Zyngier <[email protected]>
> > Cc: Anshuman Khandual <[email protected]>
> > Cc: Ryan Roberts <[email protected]>
> > Cc: Suren Baghdasaryan <[email protected]>
> > Cc: Tangquan Zheng <[email protected]>
> > Signed-off-by: Barry Song <[email protected]>
> > ---
> >  kernel/dma/direct.c | 14 +++++++-------
> >  1 file changed, 7 insertions(+), 7 deletions(-)
>
> <...>
>
> > -             if (!dev_is_dma_coherent(dev)) {
> > +             if (!dev_is_dma_coherent(dev))
> >                       arch_sync_dma_for_device(paddr, sg->length,
> >                                       dir);
> > -                     arch_sync_dma_flush();
> > -             }
> >       }
> > +     if (!dev_is_dma_coherent(dev))
> > +             arch_sync_dma_flush();
>
> This patch should be squashed into the previous one. You introduced
> arch_sync_dma_flush() there, and now you are placing it elsewhere.

Hi Leon,

The previous patch pairs every arch_sync_dma_for_* call with an
arch_sync_dma_flush(), with no functional change. The subsequent
patches then implement the actual batching. I feel this split makes
each change easier to review independently; otherwise, the previous
patch would be too large.
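
To make the end state concrete, the batched path ends up looking
roughly like the sketch below (swiotlb handling omitted for brevity;
arch_sync_dma_flush() is the barrier-only helper added by the
previous patch):

/*
 * Sketch of dma_direct_sync_sg_for_device() after the series:
 * cache maintenance is queued per SG entry, and the completion
 * flush is issued once for the whole list.
 */
void dma_direct_sync_sg_for_device(struct device *dev,
		struct scatterlist *sgl, int nents,
		enum dma_data_direction dir)
{
	struct scatterlist *sg;
	int i;

	for_each_sg(sgl, sg, nents, i) {
		phys_addr_t paddr = dma_to_phys(dev, sg_dma_address(sg));

		if (!dev_is_dma_coherent(dev))
			arch_sync_dma_for_device(paddr, sg->length, dir);
	}

	/* one flush for the whole SG list instead of one per entry */
	if (!dev_is_dma_coherent(dev))
		arch_sync_dma_flush();
}

So all per-entry cache operations are issued first and a single flush
follows, which is what the commit message describes.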

Thanks
Barry
