On Tue, May 24, 2016 at 10:41:43PM +0200, Manfred Spraul wrote:
> >--- a/net/netfilter/nf_conntrack_core.c
> >+++ b/net/netfilter/nf_conntrack_core.c
> >@@ -74,7 +74,18 @@ void nf_conntrack_lock(spinlock_t *lock)
> >     spin_lock(lock);
> >     while (unlikely(nf_conntrack_locks_all)) {
> >             spin_unlock(lock);
> >+            /*
> >+             * Order the nf_conntrack_locks_all load vs the spin_unlock_wait()
> >+             * loads below, to ensure locks_all is indeed held.
> >+             */
> >+            smp_rmb(); /* spin_lock(locks_all) */
> >             spin_unlock_wait(&nf_conntrack_locks_all_lock);

> I don't understand the comment, and I don't understand why the smp_rmb() is
> required: What do you want to protect against?

The smp_rmb() orders the two loads: it prevents the spin_unlock_wait()
load of nf_conntrack_locks_all_lock from being done _before_ the
nf_conntrack_locks_all load.

This reordering is possible because the spin_unlock() in between is a
RELEASE: it stops the nf_conntrack_locks_all load from moving down past
it, but it doesn't stop the &nf_conntrack_locks_all_lock load from
being done early.

Inverting these two loads could result in not waiting when we should.
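
For reference, the writer side performs the two stores the diagram
below relies on (a minimal sketch; the real nf_conntrack_all_lock()
also waits for each per-bucket lock to be released, details elided):

	static void nf_conntrack_all_lock(void)
	{
		spin_lock(&nf_conntrack_locks_all_lock);  /* [S] lock all_lock */
		nf_conntrack_locks_all = true;            /* [S] all = true    */

		/* ... wait for the per-bucket nf_conntrack_locks[] ... */
	}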


        nf_conntrack_all_lock()         nf_conntrack_lock()

                                          [L] all_locks == unlocked
          [S] spin_lock(&all_lock);
          [S] all = true;
                                          [L] all == true

And we'll not wait for the current all_lock section to finish,
resulting in an all_lock section and a regular lock section running
concurrently.
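
Putting the hunk back together, the reader loop with the barrier looks
like this (assembled from the patch above plus its surrounding context;
the trailing spin_lock(lock) is the existing loop tail, not part of the
change):

	void nf_conntrack_lock(spinlock_t *lock)
	{
		spin_lock(lock);
		while (unlikely(nf_conntrack_locks_all)) {      /* [L] all      */
			spin_unlock(lock);
			smp_rmb(); /* pairs with spin_lock(locks_all) */
			spin_unlock_wait(&nf_conntrack_locks_all_lock);
								/* [L] all_lock */
			spin_lock(lock);
		}
	}

With the barrier, the all_lock load cannot be satisfied before the all
load: if we observe all == true, we must also observe all_lock as
taken, and spin_unlock_wait() then waits for the all_lock section to
complete.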
