On Wed, 2009-01-07 at 11:57 +0800, Lai Jiangshan wrote:
> Peter Zijlstra wrote:
> > +void mutex_spin_or_schedule(struct mutex_waiter *waiter, long state, unsigned long *flags)
> > +{
> > +   struct mutex *lock = waiter->lock;
> > +   struct task_struct *task = waiter->task;
> > +   struct task_struct *owner = lock->owner;
> > +   struct rq *rq;
> > +
> > +   if (!owner)
> > +           goto do_schedule;
> > +
> > +   rq = task_rq(owner);
> > +
> > +   if (rq->curr != owner) {
> > +do_schedule:
> > +           __set_task_state(task, state);
> > +           spin_unlock_mutex(&lock->wait_lock, *flags);
> > +           schedule();
> > +   } else {
> > +           spin_unlock_mutex(&lock->wait_lock, *flags);
> > +           for (;;) {
> > +                   /* Stop spinning when there's a pending signal. */
> > +                   if (signal_pending_state(state, task))
> > +                           break;
> > +
> > +                   /* Owner changed, bail to revalidate state */
> > +                   if (lock->owner != owner)
> > +                           break;
> > +
> > +                   /* Owner stopped running, bail to revalidate state */
> > +                   if (rq->curr != owner)
> > +                           break;
> > +
> 
> Two questions from my immature thoughts:
> 
> 1) Do we need to keep gcc from optimizing away the re-reads of lock->owner
>    and rq->curr in the loop?

cpu_relax() is a compiler barrier iirc.

> 2) "if (rq->curr != owner)" need become smarter.
>    schedule()
>    {
>       select_next
>       rq->curr = next;
>       contex_swith
>    }
> we also spin when owner is select_next-ing in schedule().
> but select_next is not fast enough.

I'm not sure what you're saying here.