On Mon, 2009-01-12 at 18:13 +0200, Avi Kivity wrote:

> One thing that worries me here is that the spinners will spin on a 
> memory location in struct mutex, which means that the cacheline holding 
> the mutex (which is likely to be under write activity from the owner) 
> will be continuously shared by the spinners, slowing the owner down when 
> it needs to unshare it.  One way out of this is to spin on a location in 
> struct mutex_waiter, and have the mutex owner touch it when it schedules 
> out.

Yeah, that is what pure MCS locks do -- however I don't think it's a
feasible strategy for this spin/sleep hybrid.
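
To make the concern concrete, the spin side being discussed looks
roughly like the below (illustrative only, not the actual patch):

	/*
	 * Every spinner polls fields of the mutex itself, so the
	 * cacheline holding *lock* keeps bouncing between the owner
	 * (writing it) and all the spinners (reading it).
	 */
	while (lock->owner == owner) {
		if (need_resched())
			break;
		cpu_relax();
	}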

> So:
> - each task_struct has an array of currently owned mutexes, appended to 
> by mutex_lock()

That's not going to fly, I think. Lockdep does this, but it's very
expensive and has some issues. We're currently at 48 max owners, and
still some code paths manage to exceed that.

> - mutex waiters spin on mutex_waiter.wait, which they initialize to zero
> - when switching out of a task, walk the mutex list, and for each mutex, 
> bump each waiter's wait variable, and clear the owner array

Which is O(n) in the number of held mutexes, and it puts that walk
right in the context-switch path.
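
Roughly, with hypothetical names, the bookkeeping you propose comes out
as something like this -- note the nested walk on every schedule-out:

	#define MAX_HELD_MUTEXES 48	/* lockdep's current limit */

	struct owner_array {		/* would live in task_struct */
		struct mutex *owned[MAX_HELD_MUTEXES];
		int nr_owned;
	};

	/* called when switching out of a task */
	static void wake_spinners(struct owner_array *oa)
	{
		int i;

		for (i = 0; i < oa->nr_owned; i++) {
			struct mutex *m = oa->owned[i];
			struct mutex_waiter *w;

			list_for_each_entry(w, &m->wait_list, list)
				w->wait++;	/* tell spinners to stop */
		}
		oa->nr_owned = 0;
	}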

> - when unlocking a mutex, bump the nearest waiter's wait variable, and 
> remove from the owner array
> 
> Something similar might be done to spinlocks to reduce cacheline 
> contention from spinners and the owner.

Spinlocks can use 'pure' MCS locks.
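
For completeness, a minimal MCS sketch (userspace C11 atomics, and the
names are mine) -- each waiter spins on its own node, so the only
cross-CPU traffic is the single handoff store from its predecessor:

	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stddef.h>

	struct mcs_node {
		_Atomic(struct mcs_node *) next;
		atomic_bool locked;
	};

	struct mcs_lock {
		_Atomic(struct mcs_node *) tail;	/* NULL when free */
	};

	static void mcs_lock_acquire(struct mcs_lock *lock, struct mcs_node *node)
	{
		struct mcs_node *prev;

		atomic_store(&node->next, NULL);
		atomic_store(&node->locked, true);

		/* enqueue at the tail */
		prev = atomic_exchange(&lock->tail, node);
		if (!prev)
			return;			/* lock was free */

		/* publish to the predecessor, then spin on our own node */
		atomic_store(&prev->next, node);
		while (atomic_load(&node->locked))
			;			/* cpu_relax() in the kernel */
	}

	static void mcs_lock_release(struct mcs_lock *lock, struct mcs_node *node)
	{
		struct mcs_node *next = atomic_load(&node->next);

		if (!next) {
			struct mcs_node *expected = node;

			/* no successor visible: try to mark the lock free */
			if (atomic_compare_exchange_strong(&lock->tail,
							   &expected, NULL))
				return;

			/* a successor is mid-enqueue; wait for it */
			while (!(next = atomic_load(&node->next)))
				;
		}
		/* hand off by touching only the successor's node */
		atomic_store(&next->locked, false);
	}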
