On Wed, Aug 24, 2016 at 03:50:10PM -0400, Waiman Long wrote:
> --- a/kernel/locking/mutex.c
> +++ b/kernel/locking/mutex.c
> @@ -97,6 +97,8 @@ static void __mutex_handoff(struct mutex *lock, struct task_struct *task)
>  	for (;;) {
>  		unsigned long old, new;
>
> +		if ((o
On 08/23/2016 04:32 PM, Peter Zijlstra wrote:
> On Tue, Aug 23, 2016 at 03:47:53PM -0400, Waiman Long wrote:
> > On 08/23/2016 08:46 AM, Peter Zijlstra wrote:
> > > @@ -573,8 +600,14 @@ __mutex_lock_common(struct mutex *lock,
> > >  		schedule_preempt_disabled();
> > >  		spin_lock_mutex(&lock->wait_lock, flags);
> > >
> > > +
On Tue, Aug 23, 2016 at 02:46:20PM +0200, Peter Zijlstra wrote:
> @@ -573,8 +600,14 @@ __mutex_lock_common(struct mutex *lock,
>  		schedule_preempt_disabled();
>  		spin_lock_mutex(&lock->wait_lock, flags);
>
> +		if (__mutex_owner(lock) == current)
> +			break;
Now that we have an atomic owner field, we can do explicit lock
handoff. Use this to avoid starvation.

Signed-off-by: Peter Zijlstra (Intel)
---
 kernel/locking/mutex.c | 44 ++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 40 insertions(+), 4 deletions(-)

--- a/kernel/locking/mutex.c