> Date: Mon, 11 Dec 2017 11:13:16 -0500 (EST)
> From: Mouse <mo...@rodents-montreal.org>
> 
> > If read does
> 
> >     while (sc->sc_foo < sc->sc_bar)
> >             cv_wait(&sc->sc_cv, &sc->sc_lock);
> 
> > then whoever changes sc_foo or sc_bar might test whether they changed
> > from sc->sc_foo < sc->sc_bar to !(sc->sc_foo < sc->sc_bar) before
> > they cv_broadcast.
> 
> ...then you have the same test in two places, leaving them vulnerable
> to one but not the other being changed.  (Of course, only some changes
> will break functionality, but that's a separate issue.)
> 
> Furthermore, they and LPT_RF_WAITING are testing/describing different
> things: "is anyone actually waiting?" versus "if anyone does happen to
> be waiting, could this make a difference?".

Correct.

Your responsibility as a user of condvar(9) is to make sure that if
you ever make a change that could matter to a waiter, then you wake.
That's why I advise either issuing wakeups for any change to
sc_foo/sc_bar, or, if you want to conditionalize the wakeups to avoid
unnecessary ones, conditionalizing them only on changes to
sc_foo/sc_bar that transition from a state that blocks waiters to one
that doesn't.
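
To make that concrete, here is a minimal sketch of both approaches
against the sc_foo/sc_bar example above.  Only the condvar(9) and
mutex(9) calls are the real API; the softc layout and function names
are hypothetical:

#include <sys/mutex.h>
#include <sys/condvar.h>

struct example_softc {
	kmutex_t	sc_lock;
	kcondvar_t	sc_cv;
	int		sc_foo;
	int		sc_bar;
};

/* Waiter: sleep, rechecking the condition, until it no longer blocks us. */
static void
example_wait(struct example_softc *sc)
{

	mutex_enter(&sc->sc_lock);
	while (sc->sc_foo < sc->sc_bar)
		cv_wait(&sc->sc_cv, &sc->sc_lock);
	/* ...use the state protected by sc_lock... */
	mutex_exit(&sc->sc_lock);
}

/* Simple waker: broadcast on any change that could matter to a waiter. */
static void
example_advance(struct example_softc *sc, int n)
{

	mutex_enter(&sc->sc_lock);
	sc->sc_foo += n;
	cv_broadcast(&sc->sc_cv);	/* cheap if nobody is waiting */
	mutex_exit(&sc->sc_lock);
}

/*
 * Conditional waker: wake only on the transition from a state that
 * blocks waiters to one that doesn't.  Note that this duplicates the
 * waiter's test, as discussed above.
 */
static void
example_advance_conditional(struct example_softc *sc, int n)
{
	bool blocked;

	mutex_enter(&sc->sc_lock);
	blocked = sc->sc_foo < sc->sc_bar;
	sc->sc_foo += n;
	if (blocked && !(sc->sc_foo < sc->sc_bar))
		cv_broadcast(&sc->sc_cv);
	mutex_exit(&sc->sc_lock);
}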

On the other side, condvar(9) itself takes care of skipping the
potentially expensive wakeup logic when there isn't actually anyone
waiting.  You shouldn't worry about testing for the presence of
waiters yourself unless you actually observe a performance impact.
That's why I advise _against_ using LPT_RF_WAITING.

(One exception: It's OK to KASSERT(!cv_has_waiters(cv)) in cases when
you need to be sure there are no waiters.  Aside: Maybe cv_destroy
should do this itself anyway.)

> > In any case, neither __insn_barrier nor volatile is sufficient in the
> > multiprocessor model,
> 
> Not sufficient, true, but necessary (though, depending on the
> implementations of things like C and mutexes, possibly implicitly
> provided).

mutex_enter/exit guarantees the appropriate memory barriers to make
the mutual exclusion sections for a single lock object appear globally
ordered on all CPUs, in thread context and in interrupt context.
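
For instance, assuming the same hypothetical softc as in the sketch
above, neither access below needs volatile or __insn_barrier; the lock
supplies all the ordering:

	/* CPU A (or an interrupt handler at the lock's IPL) */
	mutex_enter(&sc->sc_lock);
	sc->sc_bar++;			/* plain int, no volatile needed */
	cv_broadcast(&sc->sc_cv);
	mutex_exit(&sc->sc_lock);

	/* CPU B */
	mutex_enter(&sc->sc_lock);
	n = sc->sc_bar;			/* sees CPU A's store if A's section ran first */
	mutex_exit(&sc->sc_lock);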
