On 02/18/2014 02:31 AM, Peter Zijlstra wrote:
On Mon, Feb 17, 2014 at 02:47:03PM -0800, H. Peter Anvin wrote:
On 02/17/2014 12:41 PM, Waiman Long wrote:
v3->v4:
  - Remove debugging code and fix a configuration error
  - Simplify the qspinlock structure and streamline the code to make it
    perform a bit better
  - Add an x86 version of asm/qspinlock.h to hold x86-specific
    optimizations.
  - Add an optimized x86 code path for 2 contending tasks to improve
    low-contention performance.

v2->v3:
  - Simplify the code by using the numerous CPU mode only, without an
    unfair option.
  - Use the latest smp_load_acquire()/smp_store_release() barriers (see
    the short sketch after this change log).
  - Move the queue spinlock code to kernel/locking.
  - Make the use of queue spinlock the default for x86-64 without user
    configuration.
  - Additional performance tuning.

v1->v2:
  - Add some more comments to document what the code does.
  - Add a numerous CPU mode to support >= 16K CPUs
  - Add a configuration option to allow lock stealing which can further
    improve performance in many cases.
  - Enable wakeup of queue head CPU at unlock time for non-numerous
    CPU mode.
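
As a quick, hypothetical illustration of the acquire/release pairing
mentioned in the v2->v3 change log (generic kernel-style C, not code from
this series; the struct and function names are made up): the unlocking
side publishes a flag with smp_store_release(), and the waiting side
polls it with smp_load_acquire(), so every store made before the release
is visible to the waiter after the acquire.

/* Hypothetical hand-off sketch, not taken from the patch set. */
struct waiter {
        int locked;             /* set to 1 when this waiter may proceed */
};

/* Unlock path: hand the lock to the next waiter in line. */
static void hand_off(struct waiter *next)
{
        /* All earlier stores are visible once ->locked reads as 1. */
        smp_store_release(&next->locked, 1);
}

/* Lock slow path: wait until the previous owner hands over. */
static void wait_for_hand_off(struct waiter *self)
{
        while (!smp_load_acquire(&self->locked))
                cpu_relax();
}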

This patch set introduces a queue-based spinlock implementation that
can replace the default ticket spinlock without increasing the size
of the spinlock data structure. As a result, critical kernel data
structures that embed a spinlock won't grow in size or have their
data alignment broken.
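
For readers unfamiliar with the queueing approach, below is a very small,
hypothetical MCS-style sketch of the underlying idea (again generic
kernel-style C with made-up names, not the code from this series): each
waiting CPU spins on its own queue node rather than on the shared lock
word, so handover traffic stays local to each waiter.

struct mcs_node {
        struct mcs_node *next;
        int locked;             /* 1 when this waiter may proceed */
};

struct mcs_lock {
        struct mcs_node *tail;  /* last waiter in the queue, or NULL */
};

static void mcs_lock_acquire(struct mcs_lock *lock, struct mcs_node *node)
{
        struct mcs_node *prev;

        node->next = NULL;
        node->locked = 0;

        /* Atomically append ourselves to the end of the waiter queue. */
        prev = xchg(&lock->tail, node);
        if (!prev)
                return;         /* queue was empty: lock acquired */

        WRITE_ONCE(prev->next, node);

        /* Spin on our own node until the previous owner hands over. */
        while (!smp_load_acquire(&node->locked))
                cpu_relax();
}

static void mcs_lock_release(struct mcs_lock *lock, struct mcs_node *node)
{
        struct mcs_node *next = READ_ONCE(node->next);

        if (!next) {
                /* No visible successor: try to clear the tail pointer. */
                if (cmpxchg(&lock->tail, node, NULL) == node)
                        return;

                /* A successor is enqueueing; wait for it to link itself. */
                while (!(next = READ_ONCE(node->next)))
                        cpu_relax();
        }

        /* Pass the lock on to the next waiter in the queue. */
        smp_store_release(&next->locked, 1);
}

Note that the sketch uses a full tail pointer and a separate node purely
for brevity; the series itself keeps the lock word at its existing size,
as described above.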

This is starting to look good, so I have pulled it into
tip:x86/spinlocks to start giving it some testing mileage.
It very much needs paravirt muck before we can even consider it.

I will start looking at how to make it work with paravirt. Hopefully, it won't take too long.

-Longman