On Mon, May 08, 2023 at 12:01:12PM +1000, Rohan McLure wrote:
> Prior to this patch, data races of the following forms are detectable
> by KCSAN:
> 
> [1] Asynchronous calls to mmiowb_set_pending() from an interrupt context
>     or otherwise outside of a critical section
> [2] Interrupted critical sections, where the interrupt will itself
>     acquire a lock
> 
> In case [1], the calling context does not need an mmiowb() call to be
> issued; if it did, it would issue one itself. Such racing calls to
> mmiowb_set_pending() are either idempotent or no-ops.
> 
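
To make case [1] concrete for anyone following along, here is a small
userspace model of the asm-generic logic (my own sketch, not the kernel
code itself) showing that a racing mmiowb_set_pending() either does
nothing or rewrites the same value:

#include <assert.h>

struct mmiowb_state { unsigned short nesting_count, mmiowb_pending; };
static struct mmiowb_state ms;

static void mmiowb_set_pending(void)
{
        if (ms.nesting_count)
                ms.mmiowb_pending = ms.nesting_count;
}

int main(void)
{
        mmiowb_set_pending();            /* case [1]: no lock held...   */
        assert(ms.mmiowb_pending == 0);  /* ...so nothing was written   */

        ms.nesting_count = 1;            /* inside one critical section */
        mmiowb_set_pending();
        mmiowb_set_pending();            /* duplicate call: idempotent  */
        assert(ms.mmiowb_pending == 1);
        return 0;
}
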
> In case [2], irrespective of when the interrupt occurs, the interrupt
> will acquire and release its locks before returning, so nesting_count
> remains balanced. In the worst case, the interrupted context observes
> an mmiowb to be pending during its mmiowb_spin_unlock() call and is
> then interrupted, leading to one extraneous call to mmiowb(). This
> data race is clearly innocuous.
> 
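
That worst case can be sanity-checked with a userspace sketch (again my
own model, not kernel code: mmiowb() is just a counter, and the
interrupt is inlined at the race window). One redundant barrier is
issued, and nesting_count still ends balanced:

#include <assert.h>
#include <stdio.h>

struct mmiowb_state { unsigned short nesting_count, mmiowb_pending; };
static struct mmiowb_state ms;
static int barriers;

static void mmiowb(void)    { barriers++; }
static void spin_lock(void) { ms.nesting_count++; }
static void mmio_write(void)                 /* models set_pending */
{
        if (ms.nesting_count)
                ms.mmiowb_pending = ms.nesting_count;
}
static void spin_unlock(void)
{
        unsigned short pending = ms.mmiowb_pending;

        ms.mmiowb_pending = 0;
        if (pending)
                mmiowb();
        ms.nesting_count--;
}

int main(void)
{
        spin_lock();                     /* outer critical section     */
        mmio_write();
        /* outer unlock starts: it samples and clears pending ...      */
        unsigned short pending = ms.mmiowb_pending;
        ms.mmiowb_pending = 0;

        /* ... and is interrupted; the interrupt runs a complete
         * critical section of its own, issuing its own mmiowb().      */
        spin_lock();
        mmio_write();
        spin_unlock();

        /* outer unlock resumes: one extraneous, harmless mmiowb().    */
        if (pending)
                mmiowb();
        ms.nesting_count--;

        assert(ms.nesting_count == 0);             /* still balanced   */
        printf("barriers issued: %d\n", barriers); /* prints 2         */
        return 0;
}
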
> Mark all potentially asynchronous memory accesses with READ_ONCE() or
> WRITE_ONCE(), including the increments and decrements of
> nesting_count. This removes KCSAN warnings at consumers' call sites.
> 
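
For readers less familiar with these annotations: at their core,
READ_ONCE()/WRITE_ONCE() are volatile accesses, which both prevent the
compiler from tearing or refetching the access and tell KCSAN that the
concurrent access is intentional. Roughly (simplified from
include/asm-generic/rwonce.h; the real macros add type checking):

#define READ_ONCE(x)        (*(const volatile typeof(x) *)&(x))
#define WRITE_ONCE(x, val)  do { *(volatile typeof(x) *)&(x) = (val); } while (0)
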
> Signed-off-by: Rohan McLure <rmcl...@linux.ibm.com>
> Reported-by: Michael Ellerman <m...@ellerman.id.au>
> Reported-by: Gautam Menghani <gautammenghani...@gmail.com>
> Acked-by: Arnd Bergmann <a...@arndb.de>
> ---
>  include/asm-generic/mmiowb.h | 17 +++++++++++------
>  1 file changed, 11 insertions(+), 6 deletions(-)
> 
> diff --git a/include/asm-generic/mmiowb.h b/include/asm-generic/mmiowb.h
> index 5698fca3bf56..0b8b794150db 100644
> --- a/include/asm-generic/mmiowb.h
> +++ b/include/asm-generic/mmiowb.h
> @@ -35,27 +35,32 @@ DECLARE_PER_CPU(struct mmiowb_state, __mmiowb_state);
>  static inline void mmiowb_set_pending(void)
>  {
>       struct mmiowb_state *ms = __mmiowb_state();
> +     u16 nesting_count = READ_ONCE(ms->nesting_count);
>  
> -     if (likely(ms->nesting_count))
> -             ms->mmiowb_pending = ms->nesting_count;
> +     if (likely(nesting_count))
> +             WRITE_ONCE(ms->mmiowb_pending, nesting_count);
>  }
>  
>  static inline void mmiowb_spin_lock(void)
>  {
>       struct mmiowb_state *ms = __mmiowb_state();
> -     ms->nesting_count++;
> +
> +     /* Increment need not be atomic. Nesting is balanced across interrupts. */
> +     WRITE_ONCE(ms->nesting_count, READ_ONCE(ms->nesting_count) + 1);
>  }
>  
>  static inline void mmiowb_spin_unlock(void)
>  {
>       struct mmiowb_state *ms = __mmiowb_state();
> +     u16 pending = READ_ONCE(ms->mmiowb_pending);
>  
> -     if (unlikely(ms->mmiowb_pending)) {
> -             ms->mmiowb_pending = 0;
> +     WRITE_ONCE(ms->mmiowb_pending, 0);
> +     if (unlikely(pending)) {
>               mmiowb();
>       }
>  
> -     ms->nesting_count--;
> +     /* Decrement need not be atomic. Nesting is balanced across interrupts. */
> +     WRITE_ONCE(ms->nesting_count, READ_ONCE(ms->nesting_count) - 1);
>  }
>  #else
>  #define mmiowb_set_pending()         do { } while (0)
> 
> -- 
> 2.37.2
> 
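
For context on where these hooks fire: with CONFIG_MMIOWB enabled, the
generic spinlock code calls them on every lock/unlock, roughly as below
(from include/linux/spinlock.h, quoted from memory, so double-check
against your tree):

static inline void do_raw_spin_lock(raw_spinlock_t *lock) __acquires(lock)
{
        __acquire(lock);
        arch_spin_lock(&lock->raw_lock);
        mmiowb_spin_lock();
}

static inline void do_raw_spin_unlock(raw_spinlock_t *lock) __releases(lock)
{
        mmiowb_spin_unlock();
        arch_spin_unlock(&lock->raw_lock);
        __release(lock);
}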


Successfully tested the changes to include/asm-generic/mmiowb.h.

Tested-by: Gautam Menghani <gau...@linux.ibm.com>
