Re: [PATCH v9 00/19] qspinlock: a 4-byte queue spinlock with PV support

2014-05-07 Thread Waiman Long

On 04/27/2014 02:09 PM, Raghavendra K T wrote:


For kvm part feel free to add:
Tested-by: Raghavendra K T 

V9 testing has shown no hangs.
I was able to do some performance testing. Here are the results:

Overall, we are seeing good improvement for the pv-unfair version.

System: 32-cpu Sandy Bridge with HT on (4-node machine with 32 GB each).
Guest: 8 GB with 16 vCPUs per VM.
Averages were taken over 8-10 data points.

Base = 3.15-rc2 with PARAVIRT_SPINLOCK = y
A = 3.15-rc2 + qspinlock v9 patch with QUEUE_SPINLOCK = y,
    PARAVIRT_SPINLOCK = y, PARAVIRT_UNFAIR_LOCKS = y (unfair lock)
B = 3.15-rc2 + qspinlock v9 patch with QUEUE_SPINLOCK = y,
    PARAVIRT_SPINLOCK = n, PARAVIRT_UNFAIR_LOCKS = n (queue spinlock
    without paravirt)
C = 3.15-rc2 + qspinlock v9 patch with QUEUE_SPINLOCK = y,
    PARAVIRT_SPINLOCK = y, PARAVIRT_UNFAIR_LOCKS = n (queue spinlock
    with paravirt)



Ebizzy % improvements

overcommit        A          B          C
0.5x           4.4265     2.0611     1.5824
1.0x           0.9015    -7.7828     4.5443
1.5x          46.1162    -2.9845    -3.5046
2.0x          99.8150    -2.7116     4.7461

Dbench % improvements

overcommit        A          B          C
0.5x           3.2617     3.5436     2.5676
1.0x           0.6302     2.2342     5.2201
1.5x           5.0027     4.8275     3.8375
2.0x          23.8242     4.5782    12.6067

Absolute values of base results: (overcommit, value, stdev)
Ebizzy (records/sec with 120 sec run)
0.5x 20941.8750 (2%)
1.0x 17623.8750 (5%)
1.5x  5874.7778 (15%)
2.0x  3581.8750 (7%)

Dbench (throughput in MB/sec)
0.5x 10009.6610 (5%)
1.0x  6583.0538 (1%)
1.5x  3991.9622 (4%)
2.0x  2527.0613 (2.5%)



Thanks for the testing. I will include your Tested-by tag in the next version.

-Longman


Re: [PATCH v9 00/19] qspinlock: a 4-byte queue spinlock with PV support

2014-04-27 Thread Raghavendra K T

On 04/17/2014 08:33 PM, Waiman Long wrote:

v8->v9:
   - Integrate PeterZ's version of the queue spinlock patch with some
 modification:
 http://lkml.kernel.org/r/20140310154236.038181...@infradead.org
   - Break the more complex patches into smaller ones to ease review effort.
   - Fix a race condition in the PV qspinlock code.

v7->v8:
   - Remove one unneeded atomic operation from the slowpath, thus
 improving performance.
   - Simplify some of the code and add more comments.
   - Test for X86_FEATURE_HYPERVISOR CPU feature bit to enable/disable
 unfair lock.
   - Reduce unfair lock slowpath lock stealing frequency depending
 on its distance from the queue head.
   - Add performance data for IvyBridge-EX CPU.

v6->v7:
   - Remove an atomic operation from the 2-task contending code
   - Shorten the names of some macros
   - Make the queue waiter attempt to steal the lock when unfair lock is
 enabled.
   - Remove lock holder kick from the PV code and fix a race condition
   - Run the unfair lock & PV code on overcommitted KVM guests to collect
 performance data.

v5->v6:
  - Change the optimized 2-task contending code to make it fairer at the
expense of a bit of performance.
  - Add a patch to support unfair queue spinlock for Xen.
  - Modify the PV qspinlock code to follow what was done in the PV
ticketlock.
  - Add performance data for the unfair lock as well as the PV
support code.

v4->v5:
  - Move the optimized 2-task contending code to the generic file to
enable more architectures to use it without code duplication.
  - Address some of the style-related comments by PeterZ.
  - Allow the use of unfair queue spinlock in a real para-virtualized
execution environment.
  - Add para-virtualization support to the qspinlock code by ensuring
that the lock holder and queue head stay alive as much as possible.

v3->v4:
  - Remove debugging code and fix a configuration error
  - Simplify the qspinlock structure and streamline the code to make it
perform a bit better
  - Add an x86 version of asm/qspinlock.h for holding x86 specific
optimization.
  - Add an optimized x86 code path for 2 contending tasks to improve
low contention performance.

v2->v3:
  - Simplify the code by using numerous mode only without an unfair option.
  - Use the latest smp_load_acquire()/smp_store_release() barriers.
  - Move the queue spinlock code to kernel/locking.
  - Make the use of queue spinlock the default for x86-64 without user
configuration.
  - Additional performance tuning.

v1->v2:
  - Add some more comments to document what the code does.
  - Add a numerous CPU mode to support >= 16K CPUs
  - Add a configuration option to allow lock stealing which can further
improve performance in many cases.
  - Enable wakeup of queue head CPU at unlock time for non-numerous
CPU mode.

This patch set has 3 different sections:
  1) Patches 1-7: Introduce a queue-based spinlock implementation that
 can replace the default ticket spinlock without increasing the
 size of the spinlock data structure. As a result, critical kernel
 data structures that embed spinlock won't increase in size and
 break data alignments.
  2) Patches 8-13: Enable the use of unfair queue spinlock in a
 virtual guest. This can resolve some of the locking-related
 performance issues due to the fact that the next CPU to get the
 lock may have been scheduled out for a period of time.
  3) Patches 14-19: Enable qspinlock para-virtualization support
 by halting the waiting CPUs after spinning for a certain amount of
 time. The unlock code will detect a sleeping waiter and wake it
 up. This is essentially the same logic as the PV ticketlock code.

The queue spinlock has slightly better performance than the ticket
spinlock in the uncontended case. Its performance can be much better
with moderate to heavy contention.  This patch set has the potential to
improve the performance of all workloads that have moderate to
heavy spinlock contention.

The queue spinlock is especially suitable for NUMA machines with at
least 2 sockets, though a noticeable performance benefit probably won't
show up on machines with fewer than 4 sockets.

The purpose of this patch set is not to solve any particular spinlock
contention problem. Those need to be solved by refactoring the code
to make more efficient use of the lock or by using finer-grained locks.
The main purpose is to make lock contention problems more tolerable
until someone can spend the time and effort to fix them.


For kvm part feel free to add:
Tested-by: Raghavendra K T 

V9 testing has shown no hangs.
I was able to do some performance testing. Here are the results:

Overall, we are seeing good improvement for the pv-unfair version.

System: 32-cpu Sandy Bridge with HT on (4-node machine with 32 GB each).
Guest: 8 GB with 16 vCPUs per VM.
Averages were taken over 8-10 data points.

Base = 3.15-rc2 with PARAVIRT_SPINLOCK = y
A =

Re: [PATCH v9 00/19] qspinlock: a 4-byte queue spinlock with PV support

2014-04-18 Thread Konrad Rzeszutek Wilk
On Thu, Apr 17, 2014 at 09:48:36PM -0400, Waiman Long wrote:
> On 04/17/2014 01:23 PM, Konrad Rzeszutek Wilk wrote:
> >On Thu, Apr 17, 2014 at 11:03:52AM -0400, Waiman Long wrote:
> >>v8->v9:
> >>   - Integrate PeterZ's version of the queue spinlock patch with some
> >> modification:
> >> http://lkml.kernel.org/r/20140310154236.038181...@infradead.org
> >>   - Break the more complex patches into smaller ones to ease review effort.
> >>   - Fix a race condition in the PV qspinlock code.
> >I am not seeing anything mentioning that the overcommit scenario
> >for KVM and Xen has been fixed. Or was the 'race condition' that
> >issue?
> >
> >Thanks.
> 
> The hanging is caused by a race condition, which should be fixed in
> the v9 patch. Please let me know if you are still seeing it.

OK, is there a git tree with these patches to easily slurp them up?


Thanks!
> 
> -Longman


Re: [PATCH v9 00/19] qspinlock: a 4-byte queue spinlock with PV support

2014-04-17 Thread Waiman Long

On 04/17/2014 01:40 PM, Raghavendra K T wrote:

On 04/17/2014 10:53 PM, Konrad Rzeszutek Wilk wrote:

On Thu, Apr 17, 2014 at 11:03:52AM -0400, Waiman Long wrote:

v8->v9:
   - Integrate PeterZ's version of the queue spinlock patch with some
 modification:
 http://lkml.kernel.org/r/20140310154236.038181...@infradead.org
   - Break the more complex patches into smaller ones to ease review effort.
   - Fix a race condition in the PV qspinlock code.


I am not seeing anything mentioning that the overcommit scenario
for KVM and Xen has been fixed. Or was the 'race condition' that
issue?


Saw changes in patch 18 that fix this for KVM (patch 19 for Xen). I'll
test the series and confirm.



The main fix is replacing some barrier() calls with smp_mb(). The
additional changes in the KVM and Xen code are not the main fix.
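
For context, barrier() is only a compiler barrier while smp_mb() also
orders the accesses at the CPU level. Below is a minimal userspace C11
sketch of the kind of store->load handshake where that difference
matters; it is illustrative only, the variable and function names are
made up, and it is not the patch code:

#include <stdatomic.h>

/*
 * barrier() ~ compiler-only: no fence instruction is emitted, so the
 *             CPU may still hoist a later load above an earlier store.
 * smp_mb()  ~ full memory barrier; C11's seq_cst fence is the closest
 *             portable analogue.
 */
#define full_mb()  atomic_thread_fence(memory_order_seq_cst)

atomic_int lock_held     = 1;
atomic_int waiter_asleep = 0;

/* Waiter: announce the intent to sleep, then re-check the lock. */
int waiter_may_halt(void)
{
	atomic_store_explicit(&waiter_asleep, 1, memory_order_relaxed);
	full_mb();	/* a plain compiler barrier is not enough here */
	return atomic_load_explicit(&lock_held, memory_order_relaxed) != 0;
}

/* Unlocker: release the lock, then re-check for a sleeper to wake. */
int unlocker_must_kick(void)
{
	atomic_store_explicit(&lock_held, 0, memory_order_relaxed);
	full_mb();	/* without this, both sides can miss each other */
	return atomic_load_explicit(&waiter_asleep, memory_order_relaxed) != 0;
}

If both fences were mere compiler barriers, the CPU could reorder each
load ahead of its store, letting the waiter go to sleep just as the
unlocker concludes nobody needs a kick -- the kind of lost wakeup that
shows up as a hang.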


-Longman



Re: [PATCH v9 00/19] qspinlock: a 4-byte queue spinlock with PV support

2014-04-17 Thread Waiman Long

On 04/17/2014 01:23 PM, Konrad Rzeszutek Wilk wrote:

On Thu, Apr 17, 2014 at 11:03:52AM -0400, Waiman Long wrote:

v8->v9:
   - Integrate PeterZ's version of the queue spinlock patch with some
 modification:
 http://lkml.kernel.org/r/20140310154236.038181...@infradead.org
   - Break the more complex patches into smaller ones to ease review effort.
   - Fix a race condition in the PV qspinlock code.

I am not seeing anything mentioning that the overcommit scenario
for KVM and Xen has been fixed. Or was the 'race condition' that
issue?

Thanks.


The hanging is caused by a race condition, which should be fixed in the
v9 patch. Please let me know if you are still seeing it.


-Longman


Re: [PATCH v9 00/19] qspinlock: a 4-byte queue spinlock with PV support

2014-04-17 Thread Raghavendra K T

On 04/17/2014 10:53 PM, Konrad Rzeszutek Wilk wrote:

On Thu, Apr 17, 2014 at 11:03:52AM -0400, Waiman Long wrote:

v8->v9:
   - Integrate PeterZ's version of the queue spinlock patch with some
 modification:
 http://lkml.kernel.org/r/20140310154236.038181...@infradead.org
   - Break the more complex patches into smaller ones to ease review effort.
   - Fix a race condition in the PV qspinlock code.


I am not seeing anything mentioning that the overcommit scenario
for KVM and Xen has been fixed. Or was the 'race condition' that
issue?


Saw changes in patch 18 that fix this for KVM (patch 19 for Xen). I'll
test the series and confirm.



Re: [PATCH v9 00/19] qspinlock: a 4-byte queue spinlock with PV support

2014-04-17 Thread Konrad Rzeszutek Wilk
On Thu, Apr 17, 2014 at 11:03:52AM -0400, Waiman Long wrote:
> v8->v9:
>   - Integrate PeterZ's version of the queue spinlock patch with some
> modification:
> http://lkml.kernel.org/r/20140310154236.038181...@infradead.org
>   - Break the more complex patches into smaller ones to ease review effort.
>   - Fix a race condition in the PV qspinlock code.

I am not seeing anything mentioning that the overcommit scenario
for KVM and Xen has been fixed. Or was the 'race condition' that
issue?

Thanks.


[PATCH v9 00/19] qspinlock: a 4-byte queue spinlock with PV support

2014-04-17 Thread Waiman Long
v8->v9:
  - Integrate PeterZ's version of the queue spinlock patch with some
modification:
http://lkml.kernel.org/r/20140310154236.038181...@infradead.org
  - Break the more complex patches into smaller ones to ease review effort.
  - Fix a race condition in the PV qspinlock code.

v7->v8:
  - Remove one unneeded atomic operation from the slowpath, thus
improving performance.
  - Simplify some of the code and add more comments.
  - Test for X86_FEATURE_HYPERVISOR CPU feature bit to enable/disable
unfair lock.
  - Reduce unfair lock slowpath lock stealing frequency depending
on its distance from the queue head.
  - Add performance data for IvyBridge-EX CPU.

v6->v7:
  - Remove an atomic operation from the 2-task contending code
  - Shorten the names of some macros
  - Make the queue waiter attempt to steal the lock when unfair lock is
    enabled.
  - Remove lock holder kick from the PV code and fix a race condition
  - Run the unfair lock & PV code on overcommitted KVM guests to collect
performance data.

v5->v6:
 - Change the optimized 2-task contending code to make it fairer at the
   expense of a bit of performance.
 - Add a patch to support unfair queue spinlock for Xen.
 - Modify the PV qspinlock code to follow what was done in the PV
   ticketlock.
 - Add performance data for the unfair lock as well as the PV
   support code.

v4->v5:
 - Move the optimized 2-task contending code to the generic file to
   enable more architectures to use it without code duplication.
 - Address some of the style-related comments by PeterZ.
 - Allow the use of unfair queue spinlock in a real para-virtualized
   execution environment.
 - Add para-virtualization support to the qspinlock code by ensuring
   that the lock holder and queue head stay alive as much as possible.

v3->v4:
 - Remove debugging code and fix a configuration error
 - Simplify the qspinlock structure and streamline the code to make it
   perform a bit better
 - Add an x86 version of asm/qspinlock.h for holding x86 specific
   optimization.
 - Add an optimized x86 code path for 2 contending tasks to improve
   low contention performance.

v2->v3:
 - Simplify the code by using numerous mode only without an unfair option.
 - Use the latest smp_load_acquire()/smp_store_release() barriers.
 - Move the queue spinlock code to kernel/locking.
 - Make the use of queue spinlock the default for x86-64 without user
   configuration.
 - Additional performance tuning.

v1->v2:
 - Add some more comments to document what the code does.
 - Add a numerous CPU mode to support >= 16K CPUs
 - Add a configuration option to allow lock stealing which can further
   improve performance in many cases.
 - Enable wakeup of queue head CPU at unlock time for non-numerous
   CPU mode.

This patch set has 3 different sections:
 1) Patches 1-7: Introduce a queue-based spinlock implementation that
    can replace the default ticket spinlock without increasing the
    size of the spinlock data structure. As a result, critical kernel
    data structures that embed a spinlock won't increase in size or
    break data alignment (a rough sketch of the 4-byte lock word
    follows this list).
 2) Patches 8-13: Enable the use of unfair queue spinlock in a
    virtual guest. This can resolve some of the locking-related
    performance issues due to the fact that the next CPU to get the
    lock may have been scheduled out for a period of time.
 3) Patches 14-19: Enable qspinlock para-virtualization support
    by halting the waiting CPUs after spinning for a certain amount of
    time. The unlock code will detect a sleeping waiter and wake it
    up. This is essentially the same logic as the PV ticketlock code
    (a simplified sketch appears at the end of this cover letter).
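
Since section 1 hinges on the lock fitting in 4 bytes, here is a rough
sketch of how lock state and the wait-queue tail can share one 32-bit
word. The bit positions below are modeled on the layout the qspinlock
code eventually settled on; the exact encoding in v9 may differ:

#include <stdint.h>
#include <stdio.h>

/*
 * One 32-bit lock word:
 *   bits  0-7  : locked byte (1 = lock held)
 *   bit   8    : pending bit (one spinning waiter, no queue yet)
 *   bits 16-17 : tail index  (which per-CPU queue node is in use)
 *   bits 18-31 : tail CPU    (CPU number + 1 of the last queued waiter)
 */
#define _Q_LOCKED_VAL    (1u << 0)
#define _Q_PENDING_VAL   (1u << 8)
#define _Q_TAIL_IDX_OFF  16
#define _Q_TAIL_CPU_OFF  18

static uint32_t encode_tail(int cpu, int idx)
{
	return ((uint32_t)(cpu + 1) << _Q_TAIL_CPU_OFF) |
	       ((uint32_t)idx << _Q_TAIL_IDX_OFF);
}

int main(void)
{
	uint32_t val = _Q_LOCKED_VAL | encode_tail(5, 2);

	printf("lock word = 0x%08x (locked; tail: cpu 5, node 2)\n", val);
	return 0;
}

Because the whole state fits in one 32-bit word, a single atomic
operation can hand over the lock or queue a new waiter, which is what
keeps spinlock-embedding structures from growing.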

The queue spinlock has slightly better performance than the ticket
spinlock in the uncontended case. Its performance can be much better
with moderate to heavy contention.  This patch set has the potential to
improve the performance of all workloads that have moderate to
heavy spinlock contention.

The queue spinlock is especially suitable for NUMA machines with at
least 2 sockets, though a noticeable performance benefit probably won't
show up on machines with fewer than 4 sockets.

The purpose of this patch set is not to solve any particular spinlock
contention problem. Those need to be solved by refactoring the code
to make more efficient use of the lock or by using finer-grained locks.
The main purpose is to make lock contention problems more tolerable
until someone can spend the time and effort to fix them.
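
To make the section 3 scheme concrete, here is a heavily simplified
userspace sketch of spin-then-halt. Every name in it is invented for
illustration; in the kernel the wait/kick would go through the PV
hooks, and the real slowpath must also manage the queue nodes:

#include <stdatomic.h>
#include <time.h>

#define SPIN_THRESHOLD	(1 << 15)	/* hypothetical spin budget */

/* Stand-in for the PV halt hook: a real guest would halt the vCPU
 * until the lock holder kicks it; here we just sleep briefly. */
static void pv_wait_stub(void)
{
	struct timespec ts = { 0, 1000000 };	/* 1 ms */
	nanosleep(&ts, NULL);
}

/* Waiter: spin for a bounded number of iterations, then go to sleep. */
static void pv_spin_then_wait(atomic_int *locked, atomic_int *halted)
{
	while (atomic_load(locked)) {
		for (int i = 0; i < SPIN_THRESHOLD; i++)
			if (!atomic_load(locked))
				return;		/* lock became free */
		atomic_store(halted, 1);
		atomic_thread_fence(memory_order_seq_cst); /* the smp_mb() fix */
		if (atomic_load(locked))
			pv_wait_stub();		/* kernel: halt until kicked */
		atomic_store(halted, 0);
	}
}

/* Unlocker: release the lock, then kick any sleeping waiter. */
static void pv_unlock(atomic_int *locked, atomic_int *halted)
{
	atomic_store(locked, 0);
	atomic_thread_fence(memory_order_seq_cst);
	if (atomic_load(halted)) {
		/* kernel: kick (wake) the halted vCPU; in this sketch
		 * the waiter's timed sleep stands in for that. */
	}
}

The point of the bounded spin is that halting and kicking a vCPU are
expensive operations, so they should only be paid for when the lock
holder has likely been preempted.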

Waiman Long (19):
  qspinlock: A simple generic 4-byte queue spinlock
  qspinlock, x86: Enable x86-64 to use queue spinlock
  qspinlock: Add pending bit
  qspinlock: Extract out the exchange of tail code word
  qspinlock: Optimize for smaller NR_CPUS
  qspinlock: prolong the stay in the pending bit path
  qspinlock: Use a simple write to grab the lock, if applicable
  qspinlock: Make a new qnode structure to support virtualization
  qspinlock: Prepare for unfair lock support
  qspinlock, x86: Allow unfair spinlock