On 01/28/2014 07:20 PM, Andi Kleen wrote:
> So the 1-2 threads case is the standard case on a small
> system, isn't it? This may well cause regressions.
Yes, it is possible that in a lightly contended case the queue spinlock
may be a bit slower because of the slowpath overhead. I observed a
slight slowdown in some of the lightly contended workloads. I will run
more tests on a smaller 2-socket system, or even a 1-socket system, to
see if there is any observable regression.
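To make the slowpath overhead concrete, here is a rough user-space
sketch (not the actual patch code; the names and the C11 atomics are
just for illustration) of a lock whose fast path is a single cmpxchg
and whose contended case punts to a queueing slow path:

#include <stdatomic.h>

struct qlock_sketch {
	atomic_uint val;		/* 0 = unlocked, 1 = locked */
};

static void qlock_slowpath_sketch(struct qlock_sketch *lock)
{
	unsigned int old = 0;

	/* Stand-in for queueing on a per-CPU node and spinning on it. */
	while (!atomic_compare_exchange_weak(&lock->val, &old, 1))
		old = 0;
}

static inline void qlock_sketch(struct qlock_sketch *lock)
{
	unsigned int old = 0;

	/* Uncontended fast path: a single cmpxchg and we are done. */
	if (atomic_compare_exchange_strong(&lock->val, &old, 1))
		return;

	/*
	 * Contended: set up a queue node and wait in the slow path.
	 * With only 1-2 threads, this extra setup is where the slight
	 * slowdown can come from.
	 */
	qlock_slowpath_sketch(lock);
}

The point of the sketch is only that the uncontended cost stays a
single atomic operation, and any queueing work is paid only when the
fast path fails.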
>> In the extremely unlikely case that all the queue node entries are
>> used up, the current code will fall back to busy spinning without
>> waiting in a queue, with a warning message.
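For reference, a rough user-space sketch of that fallback (invented
names, not the actual patch code): each CPU keeps a small fixed set of
queue nodes, and if they are all somehow in use the code warns and
busy-spins on the lock word without queueing.

#include <stdatomic.h>
#include <stdio.h>

#define NR_QNODES_SKETCH 4		/* e.g. task, softirq, hardirq, NMI */

struct qnode_sketch {
	int in_use;
};

/* Per-CPU in the real code; a single array keeps the sketch short. */
static struct qnode_sketch qnodes_sketch[NR_QNODES_SKETCH];

static struct qnode_sketch *qnode_get_sketch(void)
{
	for (int i = 0; i < NR_QNODES_SKETCH; i++) {
		if (!qnodes_sketch[i].in_use) {
			qnodes_sketch[i].in_use = 1;
			return &qnodes_sketch[i];
		}
	}
	return NULL;			/* all entries used up */
}

static void qlock_wait_sketch(atomic_uint *lock_val)
{
	struct qnode_sketch *node = qnode_get_sketch();
	unsigned int old = 0;

	if (!node) {
		/* Extremely unlikely: warn and fall back to plain spinning. */
		fprintf(stderr, "qspinlock sketch: queue nodes exhausted\n");
	}

	/*
	 * Sketch only: spin on the lock word either way.  The real slow
	 * path queues on the node and spins locally when one is available.
	 */
	while (!atomic_compare_exchange_weak(lock_val, &old, 1))
		old = 0;

	if (node)
		node->in_use = 0;	/* released once the lock is owned */
}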
> Traditionally we had some code which could take thousands of locks
> in rare cases (e.g. all the locks in a hash table, or all the locks
> of a big reader lock).
> The biggest offender was the mm for changing mmu
> notifiers, but I believe that's a mutex now.
> lglocks presumably can still do it on large enough
> systems. I wouldn't be surprised if there is
> other code which, e.g., may take all the locks in a table.
> I don't think the warning is valid, and it will
> likely trigger in some obscure cases.
> -Andi
As explained by George, the queue node is only needed while the thread
is waiting to acquire the lock. Once it gets the lock, the node can be
released and reused.
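To make that concrete, here is a tiny sketch (invented names, not the
patch code): a node is held only between the start of the wait and the
lock acquisition, so even code that goes on to hold thousands of locks
never needs more than one node per context at a time.

#include <stdatomic.h>

static int nodes_in_use_sketch;		/* per CPU in the real code */

static void queued_lock_sketch(atomic_uint *lock_val)
{
	unsigned int old = 0;

	/*
	 * A queue node is taken only for the duration of the wait, and
	 * a CPU waits on at most one spinlock per context (task,
	 * softirq, hardirq, NMI), so a handful of nodes per CPU is enough.
	 */
	nodes_in_use_sketch++;

	while (!atomic_compare_exchange_weak(lock_val, &old, 1))
		old = 0;

	/*
	 * Lock acquired: the node is given back right away, so a path
	 * that goes on to take all the locks in a hash table or an
	 * lglock still uses at most one node here at any moment.
	 */
	nodes_in_use_sketch--;
}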
-Longman