Re: [PATCH v12 00/11] qspinlock: a 4-byte queue spinlock with PV support

2014-10-27 Thread Waiman Long

On 10/24/2014 04:57 AM, Peter Zijlstra wrote:

> On Thu, Oct 16, 2014 at 02:10:29PM -0400, Waiman Long wrote:
> > v11->v12:
> >   - Based on PeterZ's version of the qspinlock patch
> >     (https://lkml.org/lkml/2014/6/15/63).
> >   - Incorporated many of the review comments from Konrad Wilk and
> >     Paolo Bonzini.
> >   - The pvqspinlock code is largely from my previous version with
> >     PeterZ's way of going from queue tail to head and his idea of
> >     using callee-saved calls to the KVM and Xen code.
>
> Thanks for taking the time to refresh this.. I would prefer you use a
> little more of the PV techniques I outlined in my latest PV patch to
> further reduce the overhead of PV enabled kernels on real hardware.
>
> This is an important use case, because distro kernels will have to
> enable PV support while the majority of their installations will be on
> physical hardware.
>
> Other than that I see no reason not to move this forward.


Thanks for reviewing the patch and agreeing to move forward. Currently,
I am thinking of separating out PV and non-PV versions of the lock
slowpath functions, as shown in my previous mail. That should minimize
the performance impact on bare metal even further than the PV
techniques used in your patch, while not penalizing PV performance.
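
For illustration, here is a minimal sketch of the kind of split I have
in mind; the function and variable names are hypothetical, and the
actual series would select the variant through paravirt hooks or
boot-time patching rather than a runtime flag:

#include <stdbool.h>

struct qspinlock;       /* opaque here; a 4-byte lock word in the series */

static void native_queue_spin_lock_slowpath(struct qspinlock *lock,
                                            unsigned int val)
{
        /* plain MCS-style queueing, no PV bookkeeping at all */
}

static void pv_queue_spin_lock_slowpath(struct qspinlock *lock,
                                        unsigned int val)
{
        /* same queueing plus halt/kick handling for preempted vCPUs */
}

static bool pv_spinlocks_enabled;       /* set once at boot when in a guest */

void queue_spin_lock_slowpath(struct qspinlock *lock, unsigned int val)
{
        if (pv_spinlocks_enabled)
                pv_queue_spin_lock_slowpath(lock, val);
        else
                native_queue_spin_lock_slowpath(lock, val);
}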


As for the unlock function, if the call-site patching code can handle
all the possible call sites of spin_unlock() without disabling function
inlining, I will be glad to use your way of handling the unlock
function. Otherwise, I will prefer my current approach, as it is
simpler and easier to understand, and similar to what has been done in
the PV ticket spinlock code.
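
To make the trade-off concrete, here is a rough sketch of the two unlock
flavours under discussion; the names, the flag-based layout and the C11
atomics are purely illustrative and not the series' actual API:

#include <stdatomic.h>

struct qlock {
        atomic_uchar locked;            /* 0 = free, 1 = held */
        atomic_uchar waiter_halted;     /* set by a waiter before it halts */
};

/* Native case: a single byte store, cheap and trivially inlined. */
static inline void native_queue_spin_unlock(struct qlock *l)
{
        atomic_store_explicit(&l->locked, 0, memory_order_release);
}

void pv_kick_waiter(struct qlock *l);   /* hypercall wrapper, declaration only */

/* PV case: an out-of-line helper that also wakes a halted waiter,
 * in the spirit of the PV ticket spinlock's slowpath unlock. */
void pv_queue_spin_unlock(struct qlock *l)
{
        atomic_store_explicit(&l->locked, 0, memory_order_release);
        if (atomic_load_explicit(&l->waiter_halted, memory_order_acquire))
                pv_kick_waiter(l);
}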


-Longman


Re: [PATCH v12 00/11] qspinlock: a 4-byte queue spinlock with PV support

2014-10-24 Thread Peter Zijlstra
On Thu, Oct 16, 2014 at 02:10:29PM -0400, Waiman Long wrote:
> v11->v12:
>  - Based on PeterZ's version of the qspinlock patch
>(https://lkml.org/lkml/2014/6/15/63).
>  - Incorporated many of the review comments from Konrad Wilk and
>Paolo Bonzini.
>  - The pvqspinlock code is largely from my previous version with
>PeterZ's way of going from queue tail to head and his idea of
>using callee-saved calls to the KVM and Xen code.

Thanks for taking the time to refresh this.. I would prefer you use a
little more of the PV techniques I outlined in my latest PV patch to
further reduce the overhead of PV enabled kernels on real hardware.

This is an important use case, because distro kernels will have to
enable PV support while the majority of their installations will be on
physical hardware.

Other than that I see no reason not to move this forward.


[PATCH v12 00/11] qspinlock: a 4-byte queue spinlock with PV support

2014-10-16 Thread Waiman Long
v11->v12:
 - Based on PeterZ's version of the qspinlock patch
   (https://lkml.org/lkml/2014/6/15/63).
 - Incorporated many of the review comments from Konrad Wilk and
   Paolo Bonzini.
 - The pvqspinlock code is largely from my previous version with
   PeterZ's way of going from queue tail to head and his idea of
    using callee-saved calls to the KVM and Xen code.

v10->v11:
  - Use a simple test-and-set unfair lock to simplify the code (see
the sketch after this list), but performance may suffer a bit for
large guests with many CPUs.
  - Take out Raghavendra KT's test results, as the unfair lock changes
may render some of his results invalid.
  - Add PV support without increasing the size of the core queue node
structure.
  - Other minor changes to address some of the feedback comments.

v9->v10:
  - Make some minor changes to qspinlock.c to accommodate review feedback.
  - Change author to PeterZ for 2 of the patches.
  - Include Raghavendra KT's test results in patch 18.

v8->v9:
  - Integrate PeterZ's version of the queue spinlock patch with some
modification:
http://lkml.kernel.org/r/20140310154236.038181...@infradead.org
  - Break the more complex patches into smaller ones to ease review effort.
  - Fix a race condition in the PV qspinlock code.

v7->v8:
  - Remove one unneeded atomic operation from the slowpath, thus
improving performance.
  - Simplify some of the code and add more comments.
  - Test for the X86_FEATURE_HYPERVISOR CPU feature bit to enable/disable
the unfair lock (see the sketch after this list).
  - Reduce the unfair lock slowpath's lock-stealing frequency depending
on a waiter's distance from the queue head.
  - Add performance data for IvyBridge-EX CPU.
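
A kernel-style sketch of the X86_FEATURE_HYPERVISOR gate mentioned
above; only X86_FEATURE_HYPERVISOR and static_cpu_has() are real kernel
identifiers, the helper name is made up for illustration:

#include <linux/types.h>        /* bool */
#include <asm/cpufeature.h>     /* X86_FEATURE_HYPERVISOR, static_cpu_has() */

/* The unfair path is only ever taken when the CPUID hypervisor bit is
 * set, so bare-metal kernels always keep the fair queue spinlock. */
static inline bool unfair_lock_allowed(void)
{
        return static_cpu_has(X86_FEATURE_HYPERVISOR);
}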

v6->v7:
  - Remove an atomic operation from the 2-task contending code
  - Shorten the names of some macros
  - Make the queue waiter attempt to steal the lock when the unfair
lock is enabled.
  - Remove lock holder kick from the PV code and fix a race condition
  - Run the unfair lock & PV code on overcommitted KVM guests to collect
performance data.

v5->v6:
 - Change the optimized 2-task contending code to make it fairer at the
   expense of a bit of performance.
 - Add a patch to support unfair queue spinlock for Xen.
 - Modify the PV qspinlock code to follow what was done in the PV
   ticketlock.
 - Add performance data for the unfair lock as well as the PV
   support code.

v4->v5:
 - Move the optimized 2-task contending code to the generic file to
   enable more architectures to use it without code duplication.
 - Address some of the style-related comments by PeterZ.
 - Allow the use of the unfair queue spinlock in a real para-virtualized
   execution environment.
 - Add para-virtualization support to the qspinlock code by ensuring
   that the lock holder and queue head stay alive as much as possible.

v3->v4:
 - Remove debugging code and fix a configuration error
 - Simplify the qspinlock structure and streamline the code to make it
   perform a bit better
 - Add an x86 version of asm/qspinlock.h to hold x86-specific
   optimizations.
 - Add an optimized x86 code path for 2 contending tasks to improve
   low contention performance.

v2->v3:
 - Simplify the code by using the numerous CPU mode only, without an
   unfair option.
 - Use the latest smp_load_acquire()/smp_store_release() barriers (see
   the sketch after this list).
 - Move the queue spinlock code to kernel/locking.
 - Make the use of queue spinlock the default for x86-64 without user
   configuration.
 - Additional performance tuning.
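
As an aside, a kernel-style sketch of how the smp_load_acquire()/
smp_store_release() barriers mentioned above typically pair up in an
MCS-style queue handoff (structure and function names are illustrative,
not the patch's actual code):

#include <asm/barrier.h>        /* smp_load_acquire(), smp_store_release() */
#include <asm/processor.h>      /* cpu_relax() */

struct mcs_node {
        struct mcs_node *next;
        int              locked;
};

/* Waiter side: the acquire pairs with the release below, so everything
 * the previous holder wrote before the handoff is visible once we see
 * locked == 1. */
static void mcs_wait_for_handoff(struct mcs_node *node)
{
        while (!smp_load_acquire(&node->locked))
                cpu_relax();
}

/* Holder side: publish all prior critical-section stores, then hand off. */
static void mcs_hand_off(struct mcs_node *next)
{
        smp_store_release(&next->locked, 1);
}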

v1->v2:
 - Add some more comments to document what the code does.
 - Add a numerous CPU mode to support >= 16K CPUs.
 - Add a configuration option to allow lock stealing which can further
   improve performance in many cases.
 - Enable wakeup of the queue head CPU at unlock time in the
   non-numerous CPU mode.

This patch set has 3 different sections:
 1) Patches 1-6: Introduces a queue-based spinlock implementation that
    can replace the default ticket spinlock without increasing the
    size of the spinlock data structure. As a result, critical kernel
    data structures that embed a spinlock won't increase in size or
    break data alignment.
 2) Patch 7: Enables the use of an unfair lock in a virtual guest. This
    can resolve some of the locking-related performance issues caused
    by the fact that the next CPU in line for the lock may have been
    scheduled out for a period of time.
 3) Patches 8-11: Enables qspinlock para-virtualization support by
    halting the waiting CPUs after they have spun for a certain amount
    of time. The unlock code will detect a sleeping waiter and wake it
    up, using essentially the same logic as the PV ticketlock code (see
    the sketch below).
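
A simplified, self-contained model of the waiter side described in 3);
pv_wait() stands in for the KVM/Xen halt hypercall wrapper, and the
names, spin threshold and flag-based layout are illustrative rather
than the patch's actual code:

#include <stdatomic.h>

#define SPIN_THRESHOLD  (1 << 15)       /* arbitrary bound for this sketch */

struct pv_lock {
        atomic_int locked;              /* 0 = free, 1 = held */
        atomic_int waiter_halted;       /* tells the unlocker someone sleeps */
};

void pv_wait(struct pv_lock *l);        /* halt this vCPU until it is kicked */

/* Spin for a bounded time; if the lock is still held, advertise that we
 * are going to sleep, re-check, and halt.  The unlock side (not shown)
 * clears the lock and issues a kick hypercall when it sees
 * waiter_halted set. */
void pv_wait_for_unlock(struct pv_lock *l)
{
        while (atomic_load_explicit(&l->locked, memory_order_acquire)) {
                int loop;

                for (loop = 0; loop < SPIN_THRESHOLD; loop++)
                        if (!atomic_load_explicit(&l->locked,
                                                  memory_order_acquire))
                                return;

                atomic_store_explicit(&l->waiter_halted, 1,
                                      memory_order_release);
                if (atomic_load_explicit(&l->locked, memory_order_acquire))
                        pv_wait(l);     /* sleep until kicked by the unlocker */
                atomic_store_explicit(&l->waiter_halted, 0,
                                      memory_order_relaxed);
        }
}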

The queue spinlock has slightly better performance than the ticket
spinlock in the uncontended case, and its performance can be much
better under moderate to heavy contention. This patch set therefore
has the potential to improve the performance of all workloads that
have moderate to heavy spinlock contention.

The queue spinlock is especially suitable for NUMA machines with
at least 2 sockets. Thoug