The mainline implementation of read_seqbegin() orders the load of the
sequence count before the loads performed in the read-side critical
section.  Fix up the RT writer-boosting implementation to provide the
same guarantee.
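
For reference, the ordering mainline provides looks roughly like the
following (a simplified, manually inlined sketch of the non-RT path,
not a verbatim quote of the upstream code):

	static inline unsigned read_seqbegin(const seqlock_t *sl)
	{
		unsigned ret;

	repeat:
		ret = READ_ONCE(sl->seqcount.sequence);
		if (unlikely(ret & 1)) {
			cpu_relax();
			goto repeat;
		}
		/*
		 * Keep the loads in the read-side critical section from
		 * being speculated before the load of the sequence count.
		 */
		smp_rmb();
		return ret;
	}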

Also, while we're here, update the ACCESS_ONCE() usage to READ_ONCE().

Fixes: e69f15cf77c23 ("seqlock: Prevent rt starvation")
Cc: stable...@vger.kernel.org
Signed-off-by: Julia Cartwright <ju...@ni.com>
---
Found during code inspection of the RT seqlock implementation.
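
For illustration, a reader that depends on this ordering looks roughly
like the sketch below (the names mylock, data and read_data() are made
up for illustration, not taken from the patch):

	static DEFINE_SEQLOCK(mylock);
	static int data;	/* protected by mylock */

	static int read_data(void)
	{
		unsigned seq;
		int snapshot;

		do {
			seq = read_seqbegin(&mylock);
			/*
			 * Without the smp_rmb() in read_seqbegin(), this
			 * load could be speculated before the load of the
			 * sequence count; a writer could then update data
			 * in between and read_seqretry() would still pass,
			 * handing back a stale/inconsistent snapshot.
			 */
			snapshot = data;
		} while (read_seqretry(&mylock, seq));

		return snapshot;
	}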

   Julia

 include/linux/seqlock.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index a59751276b94..597ce5a9e013 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -453,7 +453,7 @@ static inline unsigned read_seqbegin(seqlock_t *sl)
        unsigned ret;
 
 repeat:
-       ret = ACCESS_ONCE(sl->seqcount.sequence);
+       ret = READ_ONCE(sl->seqcount.sequence);
        if (unlikely(ret & 1)) {
                /*
                 * Take the lock and let the writer proceed (i.e. evtl
@@ -462,6 +462,7 @@ static inline unsigned read_seqbegin(seqlock_t *sl)
                spin_unlock_wait(&sl->lock);
                goto repeat;
        }
+       smp_rmb();
        return ret;
 }
 #endif
-- 
2.16.1
