On 03/28, Nick Piggin wrote:
>
> Well with my queued spinlocks, all that lockbreak stuff can just come out
> of the spin_lock, break_lock out of the spinlock structure, and
> need_lockbreak just becomes (lock->qhead - lock->qtail > 1).
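
[Aside for readers following along: a minimal sketch of the lock layout
this implies.  The qhead/qtail fields are taken from the code quoted
further down; the struct name, the types and the helper are my guesses,
not Nick's actual patch.]

	struct qspinlock {
		unsigned short qhead;	/* next ticket to hand out */
		unsigned short qtail;	/* ticket now holding the lock */
	};

	/*
	 * "More than one waiter queued" == contention.  The cast keeps
	 * the comparison sane once qhead has wrapped past qtail.
	 */
	static inline int need_lockbreak_q(struct qspinlock *lock)
	{
		return (unsigned short)(lock->qhead - lock->qtail) > 1;
	}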
Q: queued spinlocks are not CONFIG_PREEMPT friendly:

> +	asm volatile(LOCK_PREFIX "xaddw %0, %1\n\t"
> +		     : "+r" (pos), "+m" (lock->qhead) : : "memory");
> +	while (unlikely(pos != lock->qtail))
> +		cpu_relax();

once we have incremented lock->qhead, we have no option but to spin with
preemption disabled until pos == lock->qtail, yes?

Oleg.
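
P.S.  To make the problem concrete, here is how I read the acquire path
(x86 sketch: the asm is quoted from the patch; the pos initialization,
the helper name and the preempt_disable() placement are my assumptions):

	static inline void q_spin_lock(struct qspinlock *lock)
	{
		unsigned short pos = 1;	/* xadd adds 1, returns old qhead */

		preempt_disable();
		/* take a ticket: pos = qhead++, atomically */
		asm volatile(LOCK_PREFIX "xaddw %0, %1\n\t"
			     : "+r" (pos), "+m" (lock->qhead) : : "memory");
		/*
		 * Our ticket is published: every later locker queues up
		 * behind it.  We cannot preempt_enable() here -- if we
		 * were scheduled away while waiting, the lock would be
		 * handed to our ticket with us off the CPU, and every
		 * waiter behind us would spin until we run again.
		 */
		while (unlikely(pos != lock->qtail))
			cpu_relax();
	}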