On 2016-09-26 14:32:14 [+0200], Peter Zijlstra wrote:
> --- a/kernel/futex.c
> +++ b/kernel/futex.c
> @@ -1374,9 +1374,8 @@ static int wake_futex_pi(u32 __user *uad
>  	 * scheduled away before the wake up can take place.
>  	 */
>  	spin_unlock(&hb->lock);
> -	wake_up_q(&wake_q);
> -	if (deboost)
> -		rt_mutex_adjust_prio(current);
> +
> +	rt_mutex_postunlock(&wake_q, deboost);
This breaks -RT. Before that spin_unlock() you do a preempt_disable(),
which means the spinlock was taken with preemption enabled but is now
released with preemption disabled. That also breaks
migrate_disable()/migrate_enable(), because we take the fast path in the
in_atomic() case.

>  	return 0;
>  }
> --- a/kernel/locking/rtmutex.c
> +++ b/kernel/locking/rtmutex.c
> @@ -307,24 +307,6 @@ static void __rt_mutex_adjust_prio(struc
>  }
>  
>  /*
> - * Adjust task priority (undo boosting). Called from the exit path of
> - * rt_mutex_slowunlock() and rt_mutex_slowlock().
> - *
> - * (Note: We do this outside of the protection of lock->wait_lock to
> - * allow the lock to be taken while or before we readjust the priority
> - * of task. We do not use the spin_xx_mutex() variants here as we are
> - * outside of the debug path.)
> - */
> -void rt_mutex_adjust_prio(struct task_struct *task)
> -{
> -	unsigned long flags;
> -
> -	raw_spin_lock_irqsave(&task->pi_lock, flags);
> -	__rt_mutex_adjust_prio(task);
> -	raw_spin_unlock_irqrestore(&task->pi_lock, flags);
> -}

I don't see this function being re-added anywhere in this patch, yet there
is still one occurrence of it left in kernel/locking/rtmutex_common.h.

Sebastian