On 07/08/2014 06:38 AM, Paul E. McKenney wrote:
> From: "Paul E. McKenney" <paul...@linux.vnet.ibm.com>
> 
> The current approach to RCU priority boosting uses an rt_mutex strictly
> for its priority-boosting side effects.  The rt_mutex_init_proxy_locked()
> function is used by the booster to initialize the lock as held by the
> boostee.  The booster then uses rt_mutex_lock() to acquire this rt_mutex,
> which priority-boosts the boostee.  When the boostee reaches the end
> of its outermost RCU read-side critical section, it checks a field in
> its task structure to see whether it has been boosted, and, if so, uses
> rt_mutex_unlock() to release the rt_mutex.  The booster can then go on
> to boost the next task that is blocking the current RCU grace period.
> 
> But reasonable implementations of rt_mutex_unlock() might result in the
> boostee referencing the rt_mutex's data after releasing it. 
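
For reference, the flow being described is roughly the following (a
sketch only; rt_mutex_init_proxy_locked(), rt_mutex_lock(), and
rt_mutex_unlock() are the real APIs, but the ->rcu_boost_mutex
bookkeeping here is simplified from the actual code):

	/* Booster, for each task t blocking the current grace period: */
	struct rt_mutex mtx;

	rt_mutex_init_proxy_locked(&mtx, t);	/* t is made the owner */
	t->rcu_boost_mutex = &mtx;		/* mark t as boosted */
	rt_mutex_lock(&mtx);			/* PI-boosts t; blocks until t unlocks */
	rt_mutex_unlock(&mtx);			/* booster's own release */

	/* Boostee t, at the end of its outermost RCU read-side
	 * critical section: */
	struct rt_mutex *mtx = t->rcu_boost_mutex;

	if (mtx) {
		t->rcu_boost_mutex = NULL;
		rt_mutex_unlock(mtx);		/* deboost; lets the booster proceed */
	}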

XXXX_unlock(lock_ptr) should not reference lock_ptr after it has
unlocked the lock. (*)
So I think this patch is unneeded. Although the overhead it adds is on
the slow path, it still adds review burden.

And although the original rt_mutex_unlock() violated rule (*) in its
fast cmpxchg path, that has since been fixed.
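
The kind of post-release access that rule (*) forbids would look like
this (a made-up illustration, not the actual rtmutex code; bad_unlock()
and wakeup_waiters() are invented names):

	void bad_unlock(struct rt_mutex *lock)
	{
		/*
		 * Release the lock.  From this instant a waiter (in the
		 * RCU case, the booster) can acquire it, drop it, and
		 * return, after which the rt_mutex's memory (e.g. on the
		 * booster's stack) can be reused.
		 */
		smp_store_release(&lock->owner, NULL);

		/*
		 * BUG: dereferencing *lock after releasing it; this can
		 * be a use-after-free.
		 */
		wakeup_waiters(lock);
	}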

It is the lock subsystem's responsibility to guarantee this. I would
prefer to hold off on adding the wait_for_completion() stuff until some
future point when the boostee actually needs to re-access the booster
after rt_mutex_unlock(), rather than adding it now.

Thanks,
Lai