On 01/05/2017 11:08 AM, Peter Zijlstra wrote:
> On Thu, Jan 05, 2017 at 10:55:55AM -0500, Waiman Long wrote:
>> What I am saying is that we don't need to change spinlock to rtmutex in
>> a -RT kernel. Instead, we can use rtqspinlock for this purpose. All the
>> sleeping locks will still be converted to rtmutex.
>
> No-no-no..
>
>> Conversion to rtmutex does allow forced CPU preemption when there is a
>> need for that. What rtqspinlock can provide is voluntary preemption,
>> where the lock waiters explicitly yield the CPU while waiting for the
>> lock. I use need_resched() to detect if CPU yielding is necessary.
>> However, if the CPU was in a preempt-disabled region before the
>> spin_lock() call, we can't yield the CPU. The only way is to raise its
>> priority and try to get the lock ASAP.
>
> And here you've lost your finger because the saw-blade didn't stop in
> time.
Well, I lose my virtual fingers all the time ;-) That is one way I learn
and become stronger.

> RT very fundamentally relies on the spinlock->rtmutex conversion to
> allow preempting things when a higher-priority task comes along. A
> spinlock, of any kind, requires having preemption disabled while
> holding the lock. If the critical section is of unbounded latency, you
> have unbounded preemption latency and RT is no more.
>
> It's not about PI on contention, although that helps inversion
> scenarios. It's about allowing preemption, which fundamentally requires
> a sleeping lock to be used.
>
> Many of the spinlock sections of mainline are not well behaved in an RT
> sense and therefore must not disable preemption. Similar for the IRQ
> disable regions, and hence we have the whole threaded interrupt stuff.

I made the assumption that spinlock critical sections are well behaved
enough. Apparently, that is not a valid assumption. I sent these RFC
patches out to see if the idea was worth pursuing. If not, I can drop
them.

Anyway, thanks for the feedback.

Cheers,
Longman
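
P.S. For concreteness, the yield-while-waiting idea looks roughly like
the sketch below. This is a simplified illustration, not the actual RFC
code: the function name and the bare trylock loop are made up for the
example, and the real patches queue waiters and raise priority rather
than just spinning when preemption is disabled.

#include <linux/preempt.h>
#include <linux/sched.h>
#include <asm/processor.h>
#include <asm/qspinlock.h>

/* Hypothetical slowpath sketch, not the RFC implementation. */
static void rt_qspin_lock_slowpath(struct qspinlock *lock)
{
	while (!queued_spin_trylock(lock)) {
		/*
		 * If a reschedule is pending and this context is
		 * allowed to sleep, voluntarily yield the CPU instead
		 * of burning cycles in the wait loop.
		 */
		if (need_resched() && preemptible()) {
			schedule();
		} else {
			/*
			 * We entered with preemption disabled, so we
			 * cannot yield; all we can do is keep spinning
			 * (the RFC boosts priority here to get the
			 * lock ASAP).
			 */
			cpu_relax();
		}
	}
}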