On Mon, 2018-07-30 at 18:38 +0200, Christoph Hellwig wrote:
> The requirement to disable local irqs over kmap_atomic is long gone,
> so remove those calls.

Really? I'm trying to verify that, and getting lost in a mess of macros
from hell in the per-cpu stuff, but if you look at our implementation
of kmap_atomic_prot(), all it does is a preempt_disable(), and then
it uses kmap_atomic_idx_push():

        int idx = __this_cpu_inc_return(__kmap_atomic_idx) - 1;

Note the use of __this_cpu_inc_return(), not this_cpu_inc_return(),
i.e. this is the non-interrupt-safe version...

Ben.

> Signed-off-by: Christoph Hellwig <h...@lst.de>
> ---
>  arch/powerpc/mm/dma-noncoherent.c | 6 +-----
>  1 file changed, 1 insertion(+), 5 deletions(-)
> 
> diff --git a/arch/powerpc/mm/dma-noncoherent.c b/arch/powerpc/mm/dma-noncoherent.c
> index 382528475433..d1c16456abac 100644
> --- a/arch/powerpc/mm/dma-noncoherent.c
> +++ b/arch/powerpc/mm/dma-noncoherent.c
> @@ -357,12 +357,10 @@ static inline void __dma_sync_page_highmem(struct page *page,
>  {
>       size_t seg_size = min((size_t)(PAGE_SIZE - offset), size);
>       size_t cur_size = seg_size;
> -     unsigned long flags, start, seg_offset = offset;
> +     unsigned long start, seg_offset = offset;
>       int nr_segs = 1 + ((size - seg_size) + PAGE_SIZE - 1)/PAGE_SIZE;
>       int seg_nr = 0;
>  
> -     local_irq_save(flags);
> -
>       do {
>               start = (unsigned long)kmap_atomic(page + seg_nr) + seg_offset;
>  
> @@ -378,8 +376,6 @@ static inline void __dma_sync_page_highmem(struct page *page,
>               cur_size += seg_size;
>               seg_offset = 0;
>       } while (seg_nr < nr_segs);
> -
> -     local_irq_restore(flags);
>  }
>  #endif /* CONFIG_HIGHMEM */
>  
