Howard Chu writes:
> Lee Revell wrote:
> > On Sat, 2005-08-20 at 11:38 -0700, Howard Chu wrote:
> > > But I also found that I needed to add a new yield(), to work around
> > > yet another unexpected issue on this system - we have a number of
> > > threads waiting on a condition variable, and the thread holding the
> > > mutex signals the var, unlocks the mutex, and then immediately
> > > relocks it. The expectation here is that upon unlocking the mutex,
> > > the calling thread would block while some waiting thread (that just
> > > got signaled) would get to run. In fact what happened is that the
> > > calling thread unlocked and relocked the mutex without allowing any
> > > of the waiting threads to run. In this case the only solution was
> > > to insert a yield() after the mutex_unlock().
> >
> > That's exactly the behavior I would expect. Why would you expect
> > unlocking a mutex to cause a reschedule, if the calling thread still
> > has timeslice left?
>
> That's beside the point. Folks are making an assertion that
> sched_yield() is meaningless; this example demonstrates that there are
> cases where sched_yield() is essential.
It is not essential, it is non-portable. The code you described is based
on non-portable "expectations" about thread scheduling. The Linux
implementation of pthreads fails to satisfy them, which is perfectly
reasonable. The code is then "fixed" by adding sched_yield() calls,
introducing still more non-portable assumptions. Again, there is no
guarantee this would work on any compliant implementation. While the
"intuitive" semantics of sched_yield() is to yield the CPU and give
other runnable threads a chance to run, this is _not_ what the standard
prescribes (for non-real-time scheduling policies).

>
> --
>   -- Howard Chu

Nikita.