On Fri, Feb 22, 2008 at 11:55:45AM -0800, Sven-Thorsten Dietrich wrote:
> 
> On Fri, 2008-02-22 at 11:43 -0800, Paul E. McKenney wrote:
> > On Fri, Feb 22, 2008 at 11:21:14AM -0800, Bill Huey (hui) wrote:
> > > On Fri, Feb 22, 2008 at 11:19 AM, Bill Huey (hui) <[EMAIL PROTECTED]> 
> > > wrote:
> > > >  Yeah, I'm not very keen on having a constant there without some
> > > >  contention instrumentation to see how long the spins actually are. It
> > > >  would be better to just let it run until either task->on_cpu is cleared
> > > >  or the "current" for the runqueue in question no longer matches the
> > > >  mutex owner. At that point, you know the thread isn't running, and
> > > >  spinning any longer is just a waste of time. It's for that reason that
> > > >  doing the spin outside of a preempt critical section isn't really
> > > >  needed
> > > 
> > > Excuse me, I meant to say "...isn't a problem".
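
A minimal sketch of that termination test might look as follows (helper and
field names are illustrative only, not the actual -rt rtmutex code):

	/* Spin only while the owner still holds the lock and is running. */
	static void adaptive_wait(struct rt_mutex *lock,
				  struct task_struct *owner)
	{
		for (;;) {
			/* Lock changed hands: stop spinning and retry. */
			if (rt_mutex_owner(lock) != owner)
				break;
			/* Owner was descheduled: further spinning is pointless. */
			if (!owner->on_cpu)
				break;
			cpu_relax();
		}
	}

Nothing in that loop requires preemption to be disabled; if the spinner is
preempted, both exit conditions are simply re-checked when it runs again.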
> > 
> > The fixed-time spins are very useful in cases where the critical section
> > is almost always very short but can sometimes be very long.  In such
> > cases, you would want to spin until either ownership changes or it is
> > apparent that the current critical-section instance will be long.
> > 
> > I believe that there are locks in the Linux kernel that have this
> > "mostly short but sometimes long" hold-time property.
> 
> In regard to this "mostly short but sometimes long" question, on very
> large SMP systems, running with some profiling enabled might allow the
> system to adapt to varying workloads and therefore to shifting lock
> contention / hold-times.
> 
> Overall utilization, despite the profiling overhead, might be lower, but
> this is TBD.
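
One lightweight way to do that profiling would be a per-lock moving average
of observed hold times, for example (all names here are hypothetical):

	#include <linux/types.h>

	struct lock_profile {
		u64	avg_hold_ns;	/* exponential moving average */
	};

	/* Called on unlock, with the measured hold time of this instance. */
	static inline void lock_profile_update(struct lock_profile *p,
					       u64 hold_ns)
	{
		/* EMA with weight 1/8: no history buffer, just a shift and add. */
		p->avg_hold_ns += (s64)(hold_ns - p->avg_hold_ns) >> 3;
	}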
> 
> In high-contention, short-hold-time situations, it may even make sense
> to have multiple CPUs with multiple waiters spinning, depending on the
> hold time versus the time needed to put a waiter to sleep and wake it
> back up.
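
The break-even test behind that decision is simple to state in code; both
inputs are assumed to come from measurement, e.g. the profiling sketched
above:

	/* Spinning only pays off while the expected remaining hold time is
	 * below the cost of blocking (putting the waiter to sleep plus the
	 * later wakeup). */
	static inline int should_spin(u64 expected_hold_ns,
				      u64 sleep_wake_cost_ns)
	{
		return expected_hold_ns < sleep_wake_cost_ns;
	}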
> 
> The wake-up side could also walk ahead on the queue and bring sleeping
> waiters back up into spinning, so that they are all ready to go when the
> lock flips green for them.
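
That walk-ahead idea could look roughly like the sketch below; it glosses
over the real rtmutex waiter-list details, assumes the caller holds the
lock's wait_lock, and uses illustrative names:

	/* Wake the next nr waiters early so that they are spinning, ready
	 * to grab the lock the moment it is released. */
	static void prewake_waiters(struct rt_mutex *lock, int nr)
	{
		struct rt_mutex_waiter *waiter;

		plist_for_each_entry(waiter, &lock->wait_list, list_entry) {
			if (nr-- <= 0)
				break;
			wake_up_process(waiter->task);
		}
	}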
> 
> But in simpler cases, there should be a simple default timeout governed
> by context-switch overhead, or defined as a derived number of cache
> misses, as you suggested.

Governing the timeout by context-switch overhead sounds even better to me.
It is really easy to calibrate, and short critical sections are of much
shorter duration than a context-switch pair.
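
A calibration along those lines might, for example, time a sleep/wakeup
round trip once at boot and use that as the spin ceiling; a rough sketch,
with the measurement itself hand-waved behind a hypothetical helper:

	static u64 spin_timeout_ns __read_mostly;

	static int __init calibrate_spin_timeout(void)
	{
		/* measure_ctx_switch_ns() stands in for whatever benchmark
		 * the real implementation would use, e.g. timing a
		 * wake/sleep ping-pong between two kernel threads. */
		spin_timeout_ns = measure_ctx_switch_ns();
		return 0;
	}
	late_initcall(calibrate_spin_timeout);

	/* Used by the spin loop: give up once we have spun for longer
	 * than a context-switch pair would have cost. */
	static int spin_expired(u64 spin_start_ns)
	{
		return sched_clock() - spin_start_ns > spin_timeout_ns;
	}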

                                                        Thanx, Paul

> Sven
> 
> >                                                     Thanx, Paul
> 