RE: [v3 00/26] Add VT-d Posted-Interrupts support
> -----Original Message-----
> From: Wu, Feng
> Sent: Friday, December 12, 2014 11:15 PM
> To: t...@linutronix.de; mi...@redhat.com; h...@zytor.com; x...@kernel.org;
> g...@kernel.org; pbonz...@redhat.com; dw...@infradead.org;
> j...@8bytes.org; alex.william...@redhat.com; jiang@linux.intel.com
> Cc: eric.au...@linaro.org; linux-ker...@vger.kernel.org;
> io...@lists.linux-foundation.org; kvm@vger.kernel.org; Wu, Feng
> Subject: [v3 00/26] Add VT-d Posted-Interrupts support
>
> VT-d Posted-Interrupts is an enhancement to CPU-side Posted-Interrupts.
> With VT-d Posted-Interrupts enabled, external interrupts from
> direct-assigned devices can be delivered to guests without VMM
> intervention while the guest is running in non-root mode.
>
> You can find the VT-d Posted-Interrupts spec at the following URL:
> http://www.intel.com/content/www/us/en/intelligent-systems/intel-technology/vt-directed-io-spec.html
>
> v1->v2:
> * Use the VFIO framework to enable this feature; the VFIO part of this
>   series is based on Eric's patch "[PATCH v3 0/8] KVM-VFIO IRQ forward
>   control".
> * Rebase this patchset on
>   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git,
>   then revise some irq logic based on the new hierarchy irqdomain
>   patches provided by Jiang Liu.
>
> v2->v3:
> * Adjust the Posted-Interrupts Descriptor updating logic when a vCPU is
>   preempted or blocked.
> * KVM_DEV_VFIO_DEVICE_POSTING_IRQ --> KVM_DEV_VFIO_DEVICE_POST_IRQ
> * __KVM_HAVE_ARCH_KVM_VFIO_POSTING --> __KVM_HAVE_ARCH_KVM_VFIO_POST
> * Add a KVM_DEV_VFIO_DEVICE_UNPOST_IRQ attribute for VFIO irqs, which
>   can be used to change back to remapping mode.
> * Fix typos.
>
> This patch series is made up of the following groups:
> 1-6: Preparation changes in the iommu and irq components, based on the
>   new hierarchy irqdomain logic.
> 7-9, 26: IOMMU changes for VT-d Posted-Interrupts, such as feature
>   detection and a command line parameter.
> 10-17, 22-25: Changes related to KVM itself.
> 18-20: Changes in the VFIO component; this part was previously sent out
>   as "[RFC PATCH v2 0/2] kvm-vfio: implement the vfio skeleton for VT-d
>   Posted-Interrupts".
> 21: x86 irq related changes.
>
> Feng Wu (26):
>   genirq: Introduce irq_set_vcpu_affinity() to target an interrupt to a VCPU
>   iommu: Add new member capability to struct irq_remap_ops
>   iommu, x86: Define new irte structure for VT-d Posted-Interrupts
>   iommu, x86: Implement irq_set_vcpu_affinity for intel_ir_chip
>   x86, irq: Implement irq_set_vcpu_affinity for pci_msi_ir_controller
>   iommu, x86: No need to migrating irq for VT-d Posted-Interrupts
>   iommu, x86: Add cap_pi_support() to detect VT-d PI capability
>   iommu, x86: Add intel_irq_remapping_capability() for Intel
>   iommu, x86: define irq_remapping_cap()
>   KVM: change struct pi_desc for VT-d Posted-Interrupts
>   KVM: Add some helper functions for Posted-Interrupts
>   KVM: Initialize VT-d Posted-Interrupts Descriptor
>   KVM: Define a new interface kvm_find_dest_vcpu() for VT-d PI
>   KVM: Get Posted-Interrupts descriptor address from struct kvm_vcpu
>   KVM: add interfaces to control PI outside vmx
>   KVM: Make struct kvm_irq_routing_table accessible
>   KVM: make kvm_set_msi_irq() public
>   KVM: kvm-vfio: User API for VT-d Posted-Interrupts
>   KVM: kvm-vfio: implement the VFIO skeleton for VT-d Posted-Interrupts
>   KVM: x86: kvm-vfio: VT-d posted-interrupts setup
>   x86, irq: Define a global vector for VT-d Posted-Interrupts
>   KVM: Define a wakeup worker thread for vCPU
>   KVM: Update Posted-Interrupts Descriptor when vCPU is preempted
>   KVM: Update Posted-Interrupts Descriptor when vCPU is blocked
>   KVM: Suppress posted-interrupt when 'SN' is set
>   iommu/vt-d: Add a command line parameter for VT-d posted-interrupts
>
>  Documentation/kernel-parameters.txt        |   1 +
>  Documentation/virtual/kvm/devices/vfio.txt |   9 ++
>  arch/x86/include/asm/entry_arch.h          |   2 +
>  arch/x86/include/asm/hardirq.h             |   1 +
>  arch/x86/include/asm/hw_irq.h              |   2 +
>  arch/x86/include/asm/irq_remapping.h       |  11 ++
>  arch/x86/include/asm/irq_vectors.h         |   1 +
>  arch/x86/include/asm/kvm_host.h            |  12 ++
>  arch/x86/kernel/apic/msi.c                 |   1 +
>  arch/x86/kernel/entry_64.S                 |   2 +
>  arch/x86/kernel/irq.c                      |  27
>  arch/x86/kernel/irqinit.c                  |   2 +
>  arch/x86/kvm/Makefile                      |   2 +-
>  arch/x86/kvm/kvm_vfio_x86.c                |  77 +
>  arch/x86/kvm/vmx.c                         | 244 -
>  arch/x86/kvm/x86.c                         |  22 ++-
>  drivers/iommu/intel_irq_remapping.c        |  68 +++-
>  drivers/iommu/irq_remapping.c              |  24 ++-
>  drivers/iommu/irq_remapping.h              |   8 +
>  include/linux/dmar.h                       |  32
>  include/linux/intel-iommu.h
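As background for the KVM patches in this series, the central data structure is the 64-byte posted-interrupt descriptor defined by the VT-d spec linked above. A rough sketch of its layout follows; the field names are illustrative and not necessarily the ones the patches end up using:

/* Sketch of the VT-d posted-interrupt descriptor (64 bytes, per the
 * VT-d spec; field names here are illustrative, not the patch's). */
struct pi_desc {
	u32 pir[8];		/* posted interrupt requests, one bit per vector */
	union {
		struct {
			u16	on	: 1,	/* bit 256: outstanding notification */
				sn	: 1,	/* bit 257: suppress notification */
				rsvd_1	: 14;
			u8	nv;		/* notification vector */
			u8	rsvd_2;
			u32	ndst;		/* notification destination (APIC ID) */
		};
		u64 control;
	};
	u32 rsvd[6];
} __aligned(64);

The "Update Posted-Interrupts Descriptor when vCPU is preempted/blocked" patches revolve around rewriting NV/NDST when a vCPU moves, and patch 25 uses the SN bit to suppress notifications.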
Re: [v3 13/26] KVM: Define a new interface kvm_find_dest_vcpu() for VT-d PI
Radim Krčmář wrote:
> 2015-01-14 01:27+, Wu, Feng:
>>> the new hardware even doesn't consider the TPR for lowest priority
>>> interrupts delivery.
>>>
>>> A bold move ... what hardware was the first to do so?
>>
>> I think it was starting with Nehalem.
>
> Thanks, (Could be that QPI can't inform about TPR changes anymore ...)
>
> I played with Linux's TPR on Haswell and found that it has no effect.

Sorry for jumping into the discussion, but doesn't it depend on
IA32_MISC_ENABLE[23]? This bit disables xTPR messages. On my machine it
is set (probably by the BIOS), but since IA32_MISC_ENABLE is not locked
for changes, the OS can control it.

Nadav
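A quick way to check this bit on a given machine (an illustrative snippet, not from the thread; assumes the msr kernel module is loaded and root privileges):

/* Read IA32_MISC_ENABLE (MSR 0x1a0) via /dev/cpu/0/msr and test
 * bit 23, the "xTPR Message Disable" bit. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	uint64_t val;
	int fd = open("/dev/cpu/0/msr", O_RDONLY);

	/* the msr device uses the MSR index as the file offset */
	if (fd < 0 || pread(fd, &val, sizeof(val), 0x1a0) != sizeof(val)) {
		perror("rdmsr");
		return 1;
	}
	printf("IA32_MISC_ENABLE = %#llx, xTPR messages %s\n",
	       (unsigned long long)val,
	       (val & (1ULL << 23)) ? "disabled" : "enabled");
	close(fd);
	return 0;
}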
[PATCH v14 00/11] qspinlock: a 4-byte queue spinlock with PV support
v13->v14:
 - Patches 1 & 2: Add queue_spin_unlock_wait() to accommodate commit
   78bff1c86 from Oleg Nesterov.
 - Fix the system hang problem when using PV qspinlock in an
   over-committed guest due to a race condition in the
   pv_set_head_in_tail() function.
 - Increase the MAYHALT_THRESHOLD from 10 to 1024.
 - Change kick_cpu into a regular function pointer instead of a
   callee-saved function.
 - Change the lock statistics code to use separate bits for different
   statistics.

v12->v13:
 - Change patch 9 to generate separate versions of the
   queue_spin_lock_slowpath functions for bare metal and PV guests.
   This reduces the performance impact of the PV code on bare metal
   systems.

v11->v12:
 - Based on PeterZ's version of the qspinlock patch
   (https://lkml.org/lkml/2014/6/15/63).
 - Incorporated many of the review comments from Konrad Wilk and
   Paolo Bonzini.
 - The pvqspinlock code is largely from my previous version, with
   PeterZ's way of going from queue tail to head and his idea of using
   callee-saved calls into the KVM and Xen code.

v10->v11:
 - Use a simple test-and-set unfair lock to simplify the code, but
   performance may suffer a bit for large guests with many CPUs.
 - Take out Raghavendra KT's test results, as the unfair lock changes
   may render some of his results invalid.
 - Add PV support without increasing the size of the core queue node
   structure.
 - Other minor changes to address some of the feedback comments.

v9->v10:
 - Make some minor changes to qspinlock.c to accommodate review
   feedback.
 - Change the author to PeterZ for 2 of the patches.
 - Include Raghavendra KT's test results in patch 18.

v8->v9:
 - Integrate PeterZ's version of the queue spinlock patch with some
   modifications:
   http://lkml.kernel.org/r/20140310154236.038181...@infradead.org
 - Break the more complex patches into smaller ones to ease the review
   effort.
 - Fix a race condition in the PV qspinlock code.

v7->v8:
 - Remove one unneeded atomic operation from the slowpath, thus
   improving performance.
 - Simplify some of the code and add more comments.
 - Test for the X86_FEATURE_HYPERVISOR CPU feature bit to enable/disable
   the unfair lock.
 - Reduce the unfair lock slowpath lock stealing frequency depending on
   its distance from the queue head.
 - Add performance data for the IvyBridge-EX CPU.

v6->v7:
 - Remove an atomic operation from the 2-task contending code.
 - Shorten the names of some macros.
 - Make the queue waiter attempt to steal the lock when the unfair lock
   is enabled.
 - Remove the lock holder kick from the PV code and fix a race
   condition.
 - Run the unfair lock & PV code on overcommitted KVM guests to collect
   performance data.

v5->v6:
 - Change the optimized 2-task contending code to make it fairer at the
   expense of a bit of performance.
 - Add a patch to support unfair queue spinlock for Xen.
 - Modify the PV qspinlock code to follow what was done in the PV
   ticketlock.
 - Add performance data for the unfair lock as well as the PV support
   code.

v4->v5:
 - Move the optimized 2-task contending code to the generic file to
   enable more architectures to use it without code duplication.
 - Address some of the style-related comments by PeterZ.
 - Allow the use of unfair queue spinlock in a real para-virtualized
   execution environment.
 - Add para-virtualization support to the qspinlock code by ensuring
   that the lock holder and queue head stay alive as much as possible.

v3->v4:
 - Remove debugging code and fix a configuration error.
 - Simplify the qspinlock structure and streamline the code to make it
   perform a bit better.
 - Add an x86 version of asm/qspinlock.h for holding x86 specific
   optimizations.
 - Add an optimized x86 code path for 2 contending tasks to improve low
   contention performance.

v2->v3:
 - Simplify the code by using numerous mode only, without an unfair
   option.
 - Use the latest smp_load_acquire()/smp_store_release() barriers.
 - Move the queue spinlock code to kernel/locking.
 - Make the use of queue spinlock the default for x86-64 without user
   configuration.
 - Additional performance tuning.

v1->v2:
 - Add some more comments to document what the code does.
 - Add a numerous CPU mode to support >= 16K CPUs.
 - Add a configuration option to allow lock stealing, which can further
   improve performance in many cases.
 - Enable wakeup of the queue head CPU at unlock time for the
   non-numerous CPU mode.

This patch set has 3 different sections:

 1) Patches 1-6: Introduces a queue-based spinlock implementation that
    can replace the default ticket spinlock without increasing the size
    of the spinlock data structure. As a result, critical kernel data
    structures that embed spinlock won't increase in size and break data
    alignments.
 2) Patch 7: Enables the use of unfair lock in a virtual guest. This can
    resolve some of the locking-related performance issues due to the
    halting of waiting CPUs after spinning for a certain amount of time.
 3) Patches 8-11: Add para-virtualization support to the queue spinlock,
    with enablement for KVM and Xen guests.
[PATCH v14 02/11] qspinlock, x86: Enable x86-64 to use queue spinlock
This patch makes the necessary changes at the x86 architecture-specific
layer to enable the use of queue spinlock for x86-64. As x86-32 machines
are typically not multi-socket, the benefit of queue spinlock may not be
apparent there, so queue spinlock is not enabled for x86-32.

Currently, there are some incompatibilities between the para-virtualized
spinlock code (which hard-codes the use of ticket spinlock) and the
queue spinlock. Therefore, the use of queue spinlock is disabled when
the para-virtualized spinlock is enabled.

The arch/x86/include/asm/qspinlock.h header file includes some x86
specific optimizations which will make the queue spinlock code perform
better than the generic implementation.

Signed-off-by: Waiman Long
Signed-off-by: Peter Zijlstra
---
 arch/x86/Kconfig                      |    1 +
 arch/x86/include/asm/qspinlock.h      |   25 +
 arch/x86/include/asm/spinlock.h       |    5 +
 arch/x86/include/asm/spinlock_types.h |    4
 4 files changed, 35 insertions(+), 0 deletions(-)
 create mode 100644 arch/x86/include/asm/qspinlock.h

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index ba397bd..420f057 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -123,6 +123,7 @@ config X86
 	select MODULES_USE_ELF_RELA if X86_64
 	select CLONE_BACKWARDS if X86_32
 	select ARCH_USE_BUILTIN_BSWAP
+	select ARCH_USE_QUEUE_SPINLOCK
 	select ARCH_USE_QUEUE_RWLOCK
 	select OLD_SIGSUSPEND3 if X86_32 || IA32_EMULATION
 	select OLD_SIGACTION if X86_32
diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
new file mode 100644
index 000..a6a8762
--- /dev/null
+++ b/arch/x86/include/asm/qspinlock.h
@@ -0,0 +1,25 @@
+#ifndef _ASM_X86_QSPINLOCK_H
+#define _ASM_X86_QSPINLOCK_H
+
+#include <asm-generic/qspinlock_types.h>
+
+#ifndef CONFIG_X86_PPRO_FENCE
+
+#define	queue_spin_unlock queue_spin_unlock
+/**
+ * queue_spin_unlock - release a queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ *
+ * An effective smp_store_release() on the least-significant byte.
+ */
+static inline void queue_spin_unlock(struct qspinlock *lock)
+{
+	barrier();
+	ACCESS_ONCE(*(u8 *)lock) = 0;
+}
+
+#endif /* !CONFIG_X86_PPRO_FENCE */
+
+#include <asm-generic/qspinlock.h>
+
+#endif /* _ASM_X86_QSPINLOCK_H */
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 625660f..5a5b90c 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -42,6 +42,10 @@ extern struct static_key paravirt_ticketlocks_enabled;
 static __always_inline bool static_key_false(struct static_key *key);
 
+#ifdef CONFIG_QUEUE_SPINLOCK
+#include <asm/qspinlock.h>
+#else
+
 #ifdef CONFIG_PARAVIRT_SPINLOCKS
 
 static inline void __ticket_enter_slowpath(arch_spinlock_t *lock)
@@ -198,6 +202,7 @@ static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
 		cpu_relax();
 	}
 }
+#endif /* CONFIG_QUEUE_SPINLOCK */
 
 /*
  * Read-write spinlocks, allowing multiple readers
diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
index 5f9d757..5d654a1 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -23,6 +23,9 @@ typedef u32 __ticketpair_t;
 
 #define TICKET_SHIFT	(sizeof(__ticket_t) * 8)
 
+#ifdef CONFIG_QUEUE_SPINLOCK
+#include <asm-generic/qspinlock_types.h>
+#else
 typedef struct arch_spinlock {
 	union {
 		__ticketpair_t head_tail;
@@ -33,6 +36,7 @@ typedef struct arch_spinlock {
 } arch_spinlock_t;
 
 #define __ARCH_SPIN_LOCK_UNLOCKED	{ { 0 } }
+#endif /* CONFIG_QUEUE_SPINLOCK */
 
 #include <asm/rwlock.h>
 
--
1.7.1
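For comparison, the generic unlock that this x86 override replaces has to clear the locked byte with a real atomic on the whole 32-bit word; roughly like the following sketch (based on the series' generic header, not verbatim):

/* Generic fallback (sketch): clear the locked byte with an atomic
 * subtract on the whole lock word instead of a plain byte store. */
static inline void queue_spin_unlock_generic(struct qspinlock *lock)
{
	smp_mb__before_atomic();
	atomic_sub(_Q_LOCKED_VAL, &lock->val);
}

On x86 the TSO memory model makes the compiler-barrier-plus-byte-store version equivalent and cheaper, which is why the override exists.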
[PATCH v14 03/11] qspinlock: Add pending bit
From: Peter Zijlstra

Because the qspinlock needs to touch a second cacheline (the per-cpu
mcs_nodes[]), add a pending bit and allow a single in-word spinner
before we punt to the second cacheline.

It is possible to observe the pending bit without the locked bit when
the last owner has just released but the pending owner has not yet
taken ownership. In this case we would normally queue -- because the
pending bit is already taken. However, in this case the pending bit is
guaranteed to be released 'soon', therefore wait for it and avoid
queueing.

Signed-off-by: Peter Zijlstra
Signed-off-by: Waiman Long
---
 include/asm-generic/qspinlock_types.h |  12 +++-
 kernel/locking/qspinlock.c            | 119 +++--
 2 files changed, 107 insertions(+), 24 deletions(-)

diff --git a/include/asm-generic/qspinlock_types.h b/include/asm-generic/qspinlock_types.h
index 67a2110..4196694 100644
--- a/include/asm-generic/qspinlock_types.h
+++ b/include/asm-generic/qspinlock_types.h
@@ -36,8 +36,9 @@ typedef struct qspinlock {
  * Bitfields in the atomic value:
  *
  *  0- 7: locked byte
- *  8- 9: tail index
- * 10-31: tail cpu (+1)
+ *     8: pending
+ *  9-10: tail index
+ * 11-31: tail cpu (+1)
  */
 #define	_Q_SET_MASK(type)	(((1U << _Q_ ## type ## _BITS) - 1)\
 				      << _Q_ ## type ## _OFFSET)
@@ -45,7 +46,11 @@ typedef struct qspinlock {
 #define _Q_LOCKED_BITS		8
 #define _Q_LOCKED_MASK		_Q_SET_MASK(LOCKED)
 
-#define _Q_TAIL_IDX_OFFSET	(_Q_LOCKED_OFFSET + _Q_LOCKED_BITS)
+#define _Q_PENDING_OFFSET	(_Q_LOCKED_OFFSET + _Q_LOCKED_BITS)
+#define _Q_PENDING_BITS		1
+#define _Q_PENDING_MASK		_Q_SET_MASK(PENDING)
+
+#define _Q_TAIL_IDX_OFFSET	(_Q_PENDING_OFFSET + _Q_PENDING_BITS)
 #define _Q_TAIL_IDX_BITS	2
 #define _Q_TAIL_IDX_MASK	_Q_SET_MASK(TAIL_IDX)
 
@@ -54,5 +59,6 @@ typedef struct qspinlock {
 #define _Q_TAIL_CPU_MASK	_Q_SET_MASK(TAIL_CPU)
 
 #define _Q_LOCKED_VAL		(1U << _Q_LOCKED_OFFSET)
+#define _Q_PENDING_VAL		(1U << _Q_PENDING_OFFSET)
 
 #endif /* __ASM_GENERIC_QSPINLOCK_TYPES_H */
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index c114076..226b11d 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -92,24 +92,28 @@ static inline struct mcs_spinlock *decode_tail(u32 tail)
 	return per_cpu_ptr(&mcs_nodes[idx], cpu);
 }
 
+#define _Q_LOCKED_PENDING_MASK (_Q_LOCKED_MASK | _Q_PENDING_MASK)
+
 /**
  * queue_spin_lock_slowpath - acquire the queue spinlock
  * @lock: Pointer to queue spinlock structure
  * @val: Current value of the queue spinlock 32-bit word
  *
- * (queue tail, lock value)
- *
- *              fast      :    slow                                  :    unlock
- *                        :                                          :
- * uncontended  (0,0)   --:--> (0,1) --------------------------------:--> (*,0)
- *                        :       | ^--------.                    /  :
- *                        :       v           \                   |  :
- * uncontended            :    (n,x) --+--> (n,0)                 |  :
- *   queue                :       | ^--'                          |  :
- *                        :       v                               |  :
- * contended              :    (*,x) --+--> (*,0) -----> (*,1) ---'  :
- *   queue                :         ^--'                             :
+ * (queue tail, pending bit, lock value)
+ *
+ *              fast     :    slow                                  :    unlock
+ *                       :                                          :
+ * uncontended  (0,0,0) -:--> (0,0,1) ------------------------------:--> (*,*,0)
+ *                       :       | ^--------.------.             /  :
+ *                       :       v           \      \            |  :
+ * pending               :    (0,1,1) +--> (0,1,0)   \           |  :
+ *                       :       | ^--'              |           |  :
+ *                       :       v                   |           |  :
+ * uncontended           :    (n,x,y) +--> (n,0,0) --'           |  :
+ *   queue               :       | ^--'                          |  :
+ *                       :       v                               |  :
+ * contended             :    (*,x,y) +--> (*,0,0) ---> (*,0,1) -'  :
+ *   queue               :         ^--'                             :
  */
 void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 {
@@ -119,6 +123,75 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 
 	BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
 
+	/*
+	 * wait for in-progress pending->locked hand-overs
+	 *
+	 * 0,1,0 -> 0,0,1
+	 */
+	if (val == _Q_PENDING_VAL) {
+		while ((val = atomic_read(&lock->val)) == _Q_PENDING_VAL)
+			cpu_relax();
+	}
+
+	/*
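To make the new bit layout concrete, the following standalone snippet (added here for illustration; not part of the patch) evaluates the masks the _Q_* definitions above produce:

/* Standalone check of the lock-word layout: 8-bit locked byte,
 * 1-bit pending, 2-bit tail index, 21-bit tail cpu (+1). */
#include <stdio.h>

#define SET_MASK(bits, off)	(((1U << (bits)) - 1) << (off))

int main(void)
{
	printf("locked  mask %#010x\n", SET_MASK(8, 0));   /* 0x000000ff */
	printf("pending mask %#010x\n", SET_MASK(1, 8));   /* 0x00000100 */
	printf("tailidx mask %#010x\n", SET_MASK(2, 9));   /* 0x00000600 */
	printf("tailcpu mask %#010x\n", SET_MASK(21, 11)); /* 0xfffff800 */
	return 0;
}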
[PATCH v14 01/11] qspinlock: A simple generic 4-byte queue spinlock
This patch introduces a new generic queue spinlock implementation that
can serve as an alternative to the default ticket spinlock. Compared
with the ticket spinlock, this queue spinlock should be almost as fair.
It has about the same speed in the single-thread case, and it can be
much faster in high contention situations, especially when the spinlock
is embedded within the data structure to be protected. Only in light to
moderate contention, where the average queue depth is around 1-3, will
this queue spinlock potentially be a bit slower due to the higher
slowpath overhead.

This queue spinlock is especially suited to NUMA machines with a large
number of cores, as the chance of spinlock contention is much higher in
those machines. The cost of contention is also higher because of slower
inter-node memory traffic.

Due to the fact that spinlocks are acquired with preemption disabled,
the process will not be migrated to another CPU while it is trying to
get a spinlock. Ignoring interrupt handling, a CPU can only be
contending on one spinlock at any one time. Counting soft IRQ, hard IRQ
and NMI, a CPU can only have a maximum of 4 concurrent lock waiting
activities. By allocating a set of per-cpu queue nodes and using them
to form a waiting queue, we can encode the queue node address into a
much smaller 24-bit size (including CPU number and queue node index),
leaving one byte for the lock.

Please note that the queue node is only needed when waiting for the
lock. Once the lock is acquired, the queue node can be released to be
used later.

Signed-off-by: Waiman Long
Signed-off-by: Peter Zijlstra
---
 include/asm-generic/qspinlock.h       | 132 +
 include/asm-generic/qspinlock_types.h |  58 +
 kernel/Kconfig.locks                  |   7 +
 kernel/locking/Makefile               |   1 +
 kernel/locking/mcs_spinlock.h         |   1 +
 kernel/locking/qspinlock.c            | 207 +
 6 files changed, 406 insertions(+), 0 deletions(-)
 create mode 100644 include/asm-generic/qspinlock.h
 create mode 100644 include/asm-generic/qspinlock_types.h
 create mode 100644 kernel/locking/qspinlock.c

diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
new file mode 100644
index 000..aafb7b4
--- /dev/null
+++ b/include/asm-generic/qspinlock.h
@@ -0,0 +1,132 @@
+/*
+ * Queue spinlock
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * (C) Copyright 2013-2014 Hewlett-Packard Development Company, L.P.
+ *
+ * Authors: Waiman Long
+ */
+#ifndef __ASM_GENERIC_QSPINLOCK_H
+#define __ASM_GENERIC_QSPINLOCK_H
+
+#include <asm-generic/qspinlock_types.h>
+
+/**
+ * queue_spin_is_locked - is the spinlock locked?
+ * @lock: Pointer to queue spinlock structure
+ * Return: 1 if it is locked, 0 otherwise
+ */
+static __always_inline int queue_spin_is_locked(struct qspinlock *lock)
+{
+	return atomic_read(&lock->val);
+}
+
+/**
+ * queue_spin_value_unlocked - is the spinlock structure unlocked?
+ * @lock: queue spinlock structure
+ * Return: 1 if it is unlocked, 0 otherwise
+ *
+ * N.B. Whenever there are tasks waiting for the lock, it is considered
+ *      locked wrt the lockref code to avoid lock stealing by the lockref
+ *      code and change things underneath the lock. This also allows some
+ *      optimizations to be applied without conflict with lockref.
+ */
+static __always_inline int queue_spin_value_unlocked(struct qspinlock lock)
+{
+	return !atomic_read(&lock.val);
+}
+
+/**
+ * queue_spin_is_contended - check if the lock is contended
+ * @lock : Pointer to queue spinlock structure
+ * Return: 1 if lock contended, 0 otherwise
+ */
+static __always_inline int queue_spin_is_contended(struct qspinlock *lock)
+{
+	return atomic_read(&lock->val) & ~_Q_LOCKED_MASK;
+}
+
+/**
+ * queue_spin_trylock - try to acquire the queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ * Return: 1 if lock acquired, 0 if failed
+ */
+static __always_inline int queue_spin_trylock(struct qspinlock *lock)
+{
+	if (!atomic_read(&lock->val) &&
+	    (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) == 0))
+		return 1;
+	return 0;
+}
+
+extern void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val);
+
+/**
+ * queue_spin_lock - acquire a queue spinlock
+ * @lock: Pointer to queue spinlock structure
+ */
+static __always_inline void queue_spin_lock(struct qspinlock *lock)
+{
+	u32 val;
+
+	val = atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL);
+	if (likely(val == 0))
+		return;
+	queue_spin_lock_slowpath(lock, val);
+}
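The 24-bit tail encoding described in the changelog comes down to a helper like this (a sketch matching the description; the real encode_tail()/decode_tail() live in kernel/locking/qspinlock.c):

/*
 * Sketch of the tail encoding: the CPU number (+1, so that 0 can mean
 * "no tail") and the per-CPU node index (< 4) are packed into the upper
 * 24 bits of the lock word, leaving the low byte for the locked state.
 */
static inline u32 encode_tail(int cpu, int idx)
{
	u32 tail;

	tail  = (cpu + 1) << _Q_TAIL_CPU_OFFSET;
	tail |= idx << _Q_TAIL_IDX_OFFSET;

	return tail;
}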
[PATCH v14 05/11] qspinlock: Optimize for smaller NR_CPUS
From: Peter Zijlstra

When we allow for a max NR_CPUS < 2^14 we can optimize the pending
wait-acquire and the xchg_tail() operations.

By growing the pending bit to a byte, we reduce the tail to 16 bits.
This means we can use xchg16 for the tail part and do away with all the
repeated cmpxchg() operations.

This in turn allows us to unconditionally acquire: the locked state, as
observed by the wait loops, cannot change. And because both locked and
pending are now a full byte, we can use simple stores for the state
transition, obviating one atomic operation entirely.

This optimization is needed to make the qspinlock achieve performance
parity with the ticket spinlock at light load.

All this is horribly broken on Alpha pre EV56 (and any other arch that
cannot do single-copy atomic byte stores).

Signed-off-by: Peter Zijlstra
Signed-off-by: Waiman Long
---
 include/asm-generic/qspinlock_types.h |  13 ++
 kernel/locking/qspinlock.c            |  71 -
 2 files changed, 83 insertions(+), 1 deletions(-)

diff --git a/include/asm-generic/qspinlock_types.h b/include/asm-generic/qspinlock_types.h
index 88d647c..01b46df 100644
--- a/include/asm-generic/qspinlock_types.h
+++ b/include/asm-generic/qspinlock_types.h
@@ -35,6 +35,14 @@ typedef struct qspinlock {
 /*
  * Bitfields in the atomic value:
  *
+ * When NR_CPUS < 16K
+ *  0- 7: locked byte
+ *     8: pending
+ *  9-15: not used
+ * 16-17: tail index
+ * 18-31: tail cpu (+1)
+ *
+ * When NR_CPUS >= 16K
  *  0- 7: locked byte
  *     8: pending
  *  9-10: tail index
@@ -47,7 +55,11 @@ typedef struct qspinlock {
 #define _Q_LOCKED_MASK		_Q_SET_MASK(LOCKED)
 
 #define _Q_PENDING_OFFSET	(_Q_LOCKED_OFFSET + _Q_LOCKED_BITS)
+#if CONFIG_NR_CPUS < (1U << 14)
+#define _Q_PENDING_BITS		8
+#else
 #define _Q_PENDING_BITS		1
+#endif
 #define _Q_PENDING_MASK		_Q_SET_MASK(PENDING)
 
 #define _Q_TAIL_IDX_OFFSET	(_Q_PENDING_OFFSET + _Q_PENDING_BITS)
@@ -58,6 +70,7 @@ typedef struct qspinlock {
 #define _Q_TAIL_CPU_BITS	(32 - _Q_TAIL_CPU_OFFSET)
 #define _Q_TAIL_CPU_MASK	_Q_SET_MASK(TAIL_CPU)
 
+#define _Q_TAIL_OFFSET		_Q_TAIL_IDX_OFFSET
 #define _Q_TAIL_MASK		(_Q_TAIL_IDX_MASK | _Q_TAIL_CPU_MASK)
 
 #define _Q_LOCKED_VAL		(1U << _Q_LOCKED_OFFSET)
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 48bd2ad..7c127b4 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -22,6 +22,7 @@
 #include
 #include
 #include
+#include <asm/byteorder.h>
 #include
 
 /*
@@ -54,6 +55,10 @@
  * node; whereby avoiding the need to carry a node from lock to unlock, and
  * preserving existing lock API. This also makes the unlock code simpler and
  * faster.
+ *
+ * N.B. The current implementation only supports architectures that allow
+ * atomic operations on smaller 8-bit and 16-bit data types.
+ *
  */
 
 #include "mcs_spinlock.h"
@@ -94,6 +99,64 @@ static inline struct mcs_spinlock *decode_tail(u32 tail)
 
 #define _Q_LOCKED_PENDING_MASK (_Q_LOCKED_MASK | _Q_PENDING_MASK)
 
+/*
+ * By using the whole 2nd least significant byte for the pending bit, we
+ * can allow better optimization of the lock acquisition for the pending
+ * bit holder.
+ */
+#if _Q_PENDING_BITS == 8
+
+struct __qspinlock {
+	union {
+		atomic_t val;
+		struct {
+#ifdef __LITTLE_ENDIAN
+			u16	locked_pending;
+			u16	tail;
+#else
+			u16	tail;
+			u16	locked_pending;
+#endif
+		};
+	};
+};
+
+/**
+ * clear_pending_set_locked - take ownership and clear the pending bit.
+ * @lock: Pointer to queue spinlock structure
+ * @val : Current value of the queue spinlock 32-bit word
+ *
+ * *,1,0 -> *,0,1
+ *
+ * Lock stealing is not allowed if this function is used.
+ */
+static __always_inline void
+clear_pending_set_locked(struct qspinlock *lock, u32 val)
+{
+	struct __qspinlock *l = (void *)lock;
+
+	ACCESS_ONCE(l->locked_pending) = _Q_LOCKED_VAL;
+}
+
+/*
+ * xchg_tail - Put in the new queue tail code word & retrieve previous one
+ * @lock : Pointer to queue spinlock structure
+ * @tail : The new queue tail code word
+ * Return: The previous queue tail code word
+ *
+ * xchg(lock, tail)
+ *
+ * p,*,* -> n,*,* ; prev = xchg(lock, node)
+ */
+static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
+{
+	struct __qspinlock *l = (void *)lock;
+
+	return (u32)xchg(&l->tail, tail >> _Q_TAIL_OFFSET) << _Q_TAIL_OFFSET;
+}
+
+#else /* _Q_PENDING_BITS == 8 */
+
 /**
  * clear_pending_set_locked - take ownership and clear the pending bit.
  * @lock: Pointer to queue spinlock structure
@@ -141,6 +204,7 @@ static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
 	}
 	return old;
 }
+#endif /* _Q_PENDING_BITS == 8 */
 
 /**
  * queue_spin_lock_slowpath - acquire the queue spinlock
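The 16-bit overlay trick above can be sanity-checked in isolation. This little-endian-only snippet (illustrative, not from the patch) mirrors struct __qspinlock and shows that writing tail and locked_pending composes the expected 32-bit lock word:

/* Layout check for the little-endian case: locked_pending occupies
 * bits 0-15 and tail bits 16-31 of the same 32-bit word. */
#include <assert.h>
#include <stdint.h>

struct qs {
	union {
		uint32_t val;
		struct {
			uint16_t locked_pending;	/* bits 0-15  */
			uint16_t tail;			/* bits 16-31 */
		};
	};
};

int main(void)
{
	struct qs q = { .val = 0 };

	q.tail = 0x1234;		/* what xchg16 would target */
	q.locked_pending = 0x0001;	/* _Q_LOCKED_VAL */
	assert(q.val == 0x12340001);	/* holds on little-endian */
	return 0;
}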
[PATCH v14 04/11] qspinlock: Extract out code snippets for the next patch
This is a preparatory patch that extracts out the following 2 code
snippets to prepare for the next performance optimization patch:

 1) the logic for the exchange of new and previous tail code words
    into a new xchg_tail() function.
 2) the logic for clearing the pending bit and setting the locked bit
    into a new clear_pending_set_locked() function.

This patch also simplifies the trylock operation before queuing by
calling queue_spin_trylock() directly.

Signed-off-by: Waiman Long
Signed-off-by: Peter Zijlstra
---
 include/asm-generic/qspinlock_types.h |   2 +
 kernel/locking/qspinlock.c            |  91 +---
 2 files changed, 62 insertions(+), 31 deletions(-)

diff --git a/include/asm-generic/qspinlock_types.h b/include/asm-generic/qspinlock_types.h
index 4196694..88d647c 100644
--- a/include/asm-generic/qspinlock_types.h
+++ b/include/asm-generic/qspinlock_types.h
@@ -58,6 +58,8 @@ typedef struct qspinlock {
 #define _Q_TAIL_CPU_BITS	(32 - _Q_TAIL_CPU_OFFSET)
 #define _Q_TAIL_CPU_MASK	_Q_SET_MASK(TAIL_CPU)
 
+#define _Q_TAIL_MASK		(_Q_TAIL_IDX_MASK | _Q_TAIL_CPU_MASK)
+
 #define _Q_LOCKED_VAL		(1U << _Q_LOCKED_OFFSET)
 #define _Q_PENDING_VAL		(1U << _Q_PENDING_OFFSET)
 
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 226b11d..48bd2ad 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -95,6 +95,54 @@ static inline struct mcs_spinlock *decode_tail(u32 tail)
 #define _Q_LOCKED_PENDING_MASK (_Q_LOCKED_MASK | _Q_PENDING_MASK)
 
 /**
+ * clear_pending_set_locked - take ownership and clear the pending bit.
+ * @lock: Pointer to queue spinlock structure
+ * @val : Current value of the queue spinlock 32-bit word
+ *
+ * *,1,0 -> *,0,1
+ */
+static __always_inline void
+clear_pending_set_locked(struct qspinlock *lock, u32 val)
+{
+	u32 new, old;
+
+	for (;;) {
+		new = (val & ~_Q_PENDING_MASK) | _Q_LOCKED_VAL;
+
+		old = atomic_cmpxchg(&lock->val, val, new);
+		if (old == val)
+			break;
+
+		val = old;
+	}
+}
+
+/**
+ * xchg_tail - Put in the new queue tail code word & retrieve previous one
+ * @lock : Pointer to queue spinlock structure
+ * @tail : The new queue tail code word
+ * Return: The previous queue tail code word
+ *
+ * xchg(lock, tail)
+ *
+ * p,*,* -> n,*,* ; prev = xchg(lock, node)
+ */
+static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
+{
+	u32 old, new, val = atomic_read(&lock->val);
+
+	for (;;) {
+		new = (val & _Q_LOCKED_PENDING_MASK) | tail;
+		old = atomic_cmpxchg(&lock->val, val, new);
+		if (old == val)
+			break;
+
+		val = old;
+	}
+	return old;
+}
+
+/**
  * queue_spin_lock_slowpath - acquire the queue spinlock
  * @lock: Pointer to queue spinlock structure
  * @val: Current value of the queue spinlock 32-bit word
@@ -176,15 +224,7 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	 *
 	 * *,1,0 -> *,0,1
 	 */
-	for (;;) {
-		new = (val & ~_Q_PENDING_MASK) | _Q_LOCKED_VAL;
-
-		old = atomic_cmpxchg(&lock->val, val, new);
-		if (old == val)
-			break;
-
-		val = old;
-	}
+	clear_pending_set_locked(lock, val);
 	return;
 
 	/*
@@ -201,37 +241,26 @@ queue:
 	node->next = NULL;
 
 	/*
-	 * We have already touched the queueing cacheline; don't bother with
-	 * pending stuff.
-	 *
-	 * trylock || xchg(lock, node)
-	 *
-	 * 0,0,0 -> 0,0,1 ; no tail, not locked -> no tail, locked.
-	 * p,y,x -> n,y,x ; tail was p -> tail is n; preserving locked.
+	 * We touched a (possibly) cold cacheline in the per-cpu queue node;
+	 * attempt the trylock once more in the hope someone let go while we
+	 * weren't watching.
 	 */
-	for (;;) {
-		new = _Q_LOCKED_VAL;
-		if (val)
-			new = tail | (val & _Q_LOCKED_PENDING_MASK);
-
-		old = atomic_cmpxchg(&lock->val, val, new);
-		if (old == val)
-			break;
-
-		val = old;
-	}
+	if (queue_spin_trylock(lock))
+		goto release;
 
 	/*
-	 * we won the trylock; forget about queueing.
+	 * We have already touched the queueing cacheline; don't bother with
+	 * pending stuff.
+	 *
+	 * p,*,* -> n,*,*
 	 */
-	if (new == _Q_LOCKED_VAL)
-		goto release;
+	old = xchg_tail(lock, tail);
 
 	/*
 	 * if there was a previous node; link it and wait until reaching the
 	 * head of the waitqueue.
 	 */
-	if (old & ~_Q_LOCKED_PENDING_MASK) {
+	if (old & _Q_TAIL_MASK) {
 		prev = decode_tail(old);
[PATCH v14 06/11] qspinlock: Use a simple write to grab the lock
Currently, atomic_cmpxchg() is used to get the lock. However, this is
not really necessary if there is more than one task in the queue and
the queue head doesn't need to reset the tail code. For that case, a
simple write to set the lock bit is enough, as the queue head will be
the only one eligible to get the lock as long as it checks that both
the lock and pending bits are not set. The current pending bit waiting
code will ensure that the bit will not be set once the tail code in
the lock is set.

With that change, there is some slight improvement in the performance
of the queue spinlock in the 5M loop micro-benchmark run on a 4-socket
Westmere-EX machine, as shown in the tables below.

		[Standalone/Embedded - same node]
  # of tasks	Before patch	After patch	%Change
  ----------	------------	-----------	-------
       3	 2324/2321	 2248/2265	 -3%/-2%
       4	 2890/2896	 2819/2831	 -2%/-2%
       5	 3611/3595	 3522/3512	 -2%/-2%
       6	 4281/4276	 4173/4160	 -3%/-3%
       7	 5018/5001	 4875/4861	 -3%/-3%
       8	 5759/5750	 5563/5568	 -3%/-3%

		[Standalone/Embedded - different nodes]
  # of tasks	Before patch	After patch	%Change
  ----------	------------	-----------	-------
       3	12242/12237	12087/12093	 -1%/-1%
       4	10688/10696	10507/10521	 -2%/-2%

It was also found that this change produced a much bigger performance
improvement in the newer IvyBridge-EX chip and essentially closed the
performance gap between the ticket spinlock and queue spinlock.

The disk workload of the AIM7 benchmark was run on a 4-socket
Westmere-EX machine with both ext4 and xfs RAM disks at 3000 users on
a 3.14 based kernel. The results of the test runs were:

		AIM7 XFS Disk Test
  kernel	JPM	Real Time	Sys Time	Usr Time
  ------	---	---------	--------	--------
  ticketlock	5678233	  3.17		 96.61		 5.81
  qspinlock	5750799	  3.13		 94.83		 5.97

		AIM7 EXT4 Disk Test
  kernel	JPM	Real Time	Sys Time	Usr Time
  ------	---	---------	--------	--------
  ticketlock	1114551	 16.15		509.72		 7.11
  qspinlock	2184466	  8.24		232.99		 6.01

The ext4 filesystem run had a much higher spinlock contention than the
xfs filesystem run.

The "ebizzy -m" test was also run with the following results:

  kernel	records/s	Real Time	Sys Time	Usr Time
  ------	---------	---------	--------	--------
  ticketlock	  2075		 10.00		216.35		 3.49
  qspinlock	  3023		 10.00		198.20		 4.80

Signed-off-by: Waiman Long
Signed-off-by: Peter Zijlstra
---
 kernel/locking/qspinlock.c |  59
 1 files changed, 43 insertions(+), 16 deletions(-)

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 7c127b4..fb0e988 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -103,24 +103,33 @@ static inline struct mcs_spinlock *decode_tail(u32 tail)
  * By using the whole 2nd least significant byte for the pending bit, we
  * can allow better optimization of the lock acquisition for the pending
  * bit holder.
+ *
+ * This internal structure is also used by the set_locked function which
+ * is not restricted to _Q_PENDING_BITS == 8.
  */
-#if _Q_PENDING_BITS == 8
-
 struct __qspinlock {
 	union {
 		atomic_t val;
-		struct {
 #ifdef __LITTLE_ENDIAN
+		u8	 locked;
+		struct {
 			u16	locked_pending;
 			u16	tail;
+		};
 #else
+		struct {
 			u16	tail;
 			u16	locked_pending;
-#endif
 		};
+		struct {
+			u8	reserved[3];
+			u8	locked;
+		};
+#endif
 	};
 };
 
+#if _Q_PENDING_BITS == 8
 /**
  * clear_pending_set_locked - take ownership and clear the pending bit.
  * @lock: Pointer to queue spinlock structure
@@ -207,6 +216,19 @@ static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
 #endif /* _Q_PENDING_BITS == 8 */
 
 /**
+ * set_locked - Set the lock bit and own the lock
+ * @lock: Pointer to queue spinlock structure
+ *
+ * *,*,0 -> *,0,1
+ */
+static __always_inline void set_locked(struct qspinlock *lock)
+{
+	struct __qspinlock *l = (void *)lock;
+
+	ACCESS_ONCE(l->locked) = _Q_LOCKED_VAL;
+}
+
+/**
 * queue_spin_lock_slowpath - acquire the queue spinlock
 * @lock: Pointer to queue spinlock structure
 * @val: Current value of the queue spinlock 32-bit word
@@ -339,10 +361,13 @
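Putting the pieces together, the queue head's acquisition step that this patch enables looks roughly like the following (a paraphrase of the slowpath logic described above, not the literal patch code):

/*
 * Queue-head acquire sketch: spin until both the locked byte and the
 * pending bit are clear, then take the lock with a plain byte store.
 * This is safe because only the queue head can reach this point, and
 * a set tail code prevents new pending-bit owners from appearing.
 */
while ((val = smp_load_acquire(&lock->val.counter)) & _Q_LOCKED_PENDING_MASK)
	cpu_relax();
set_locked(lock);

The atomic_cmpxchg() is still needed in the case where the queue head is also the last waiter and must clear the tail code at the same time.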
[PATCH v14 07/11] qspinlock: Revert to test-and-set on hypervisors
From: Peter Zijlstra

When we detect a hypervisor (!paravirt, see the qspinlock paravirt
support patches), revert to a simple test-and-set lock to avoid the
horrors of queue preemption.

Signed-off-by: Peter Zijlstra
Signed-off-by: Waiman Long
---
 arch/x86/include/asm/qspinlock.h |  14 ++
 include/asm-generic/qspinlock.h  |   7 +++
 kernel/locking/qspinlock.c       |   3 +++
 3 files changed, 24 insertions(+), 0 deletions(-)

diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index a6a8762..05a77fe 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -1,6 +1,7 @@
 #ifndef _ASM_X86_QSPINLOCK_H
 #define _ASM_X86_QSPINLOCK_H
 
+#include <asm/cpufeature.h>
 #include <asm-generic/qspinlock_types.h>
 
 #ifndef CONFIG_X86_PPRO_FENCE
@@ -20,6 +21,19 @@ static inline void queue_spin_unlock(struct qspinlock *lock)
 
 #endif /* !CONFIG_X86_PPRO_FENCE */
 
+#define virt_queue_spin_lock virt_queue_spin_lock
+
+static inline bool virt_queue_spin_lock(struct qspinlock *lock)
+{
+	if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
+		return false;
+
+	while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0)
+		cpu_relax();
+
+	return true;
+}
+
 #include <asm-generic/qspinlock.h>
 
 #endif /* _ASM_X86_QSPINLOCK_H */
diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
index aafb7b4..82054a1 100644
--- a/include/asm-generic/qspinlock.h
+++ b/include/asm-generic/qspinlock.h
@@ -111,6 +111,13 @@ static inline void queue_spin_unlock_wait(struct qspinlock *lock)
 		cpu_relax();
 }
 
+#ifndef virt_queue_spin_lock
+static __always_inline bool virt_queue_spin_lock(struct qspinlock *lock)
+{
+	return false;
+}
+#endif
+
 /*
  * Initializier
  */
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index fb0e988..1c1926a 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -257,6 +257,9 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 
 	BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
 
+	if (virt_queue_spin_lock(lock))
+		return;
+
 	/*
 	 * wait for in-progress pending->locked hand-overs
 	 *
--
1.7.1
[PATCH v14 09/11] pvqspinlock, x86: Add para-virtualization support
This patch adds para-virtualization support to the queue spinlock code
base with minimal impact to the native case. There are some minor code
changes in the generic qspinlock.c file which should be usable in other
architectures. The other code changes are specific to x86 processors
and so are all put under the arch/x86 directory.

On the lock side, the slowpath code is split into 2 separate functions
generated from the same code - one for bare metal and one for PV
guests. The switching is done in the _raw_spin_lock* functions. This
makes sure that the performance impact to the bare metal case is
minimal, just a few NOPs in the _raw_spin_lock* functions. In the PV
slowpath code, there are 2 paravirt callee-saved calls that minimize
register pressure.

On the unlock side, however, the disabling of unlock function inlining
does have some slight impact on bare metal performance.

The actual paravirt code comes in 5 parts:

 - init_node; this initializes the extra data members required for PV
   state. PV state data is kept 1 cacheline ahead of the regular data.

 - link_and_wait_node; this replaces the regular MCS queuing code. CPU
   halting can happen if the wait is too long.

 - wait_head; this waits until the lock is available and the CPU will
   be halted if the wait is too long.

 - wait_check; this is called after acquiring the lock to see if the
   next queue head CPU is halted. If this is the case, the lock bit is
   changed to indicate the queue head will have to be kicked on unlock.

 - queue_unlock; this routine has a jump label to check if paravirt is
   enabled. If yes, it has to do an atomic cmpxchg to clear the lock bit
   or call the slowpath function to kick the queue head CPU.

Tracking the head is done in two parts: first, pv_wait_head will store
its CPU number in whichever node is pointed to by the tail part of the
lock word. Second, pv_link_and_wait_node() will propagate the existing
head from the old to the new tail node.

Signed-off-by: Waiman Long
---
 arch/x86/include/asm/paravirt.h       |  22 ++
 arch/x86/include/asm/paravirt_types.h |  27 ++
 arch/x86/include/asm/pvqspinlock.h    | 426 +
 arch/x86/include/asm/qspinlock.h      |  71 ++-
 arch/x86/kernel/paravirt-spinlocks.c  |   6 +
 include/asm-generic/qspinlock.h       |   2 +
 kernel/locking/qspinlock.c            |  70 +-
 7 files changed, 617 insertions(+), 7 deletions(-)
 create mode 100644 arch/x86/include/asm/pvqspinlock.h

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 32444ae..628a79f 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -712,6 +712,27 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
 
 #if defined(CONFIG_SMP) && defined(CONFIG_PARAVIRT_SPINLOCKS)
 
+#ifdef CONFIG_QUEUE_SPINLOCK
+
+static __always_inline void pv_kick_cpu(int cpu)
+{
+	PVOP_VCALL1(pv_lock_ops.kick_cpu, cpu);
+}
+
+/*
+ * Return 0 if CPU has been halted or -1 if aborted.
+ */
+static __always_inline int pv_lockwait(u8 *byte, u8 val)
+{
+	return PVOP_CALLEE2(int, pv_lock_ops.lockwait, byte, val);
+}
+
+static __always_inline void pv_lockstat(int stat_types)
+{
+	PVOP_VCALLEE1(pv_lock_ops.lockstat, stat_types);
+}
+
+#else
 static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock,
 							__ticket_t ticket)
 {
@@ -723,6 +744,7 @@ static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock,
 {
 	PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
 }
+#endif
 
 #endif
 
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 7549b8b..b4b4065 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -326,6 +326,9 @@ struct pv_mmu_ops {
 			   phys_addr_t phys, pgprot_t flags);
 };
 
+struct mcs_spinlock;
+struct qspinlock;
+
 struct arch_spinlock;
 #ifdef CONFIG_SMP
 #include
@@ -333,9 +336,33 @@ struct arch_spinlock;
 typedef u16 __ticket_t;
 #endif
 
+#ifdef CONFIG_QUEUE_SPINLOCK
+enum pv_lock_stats {
+	_PV_HALT_QHEAD,		/* Queue head halting	    */
+	_PV_HALT_QNODE,		/* Other queue node halting */
+	_PV_HALT_ABORT,		/* Halting aborted	    */
+	_PV_WAKE_KICKED,	/* Wakeup by kicking	    */
+	_PV_WAKE_SPURIOUS,	/* Spurious wakeup	    */
+	_PV_KICK_NOHALT		/* Kick but CPU not halted  */
+};
+
+#define PV_LOCKSTAT_HALT_QHEAD		(1 << _PV_HALT_QHEAD)
+#define PV_LOCKSTAT_HALT_QNODE		(1 << _PV_HALT_QNODE)
+#define PV_LOCKSTAT_HALT_ABORT		(1 << _PV_HALT_ABORT)
+#define PV_LOCKSTAT_WAKE_KICKED		(1 << _PV_WAKE_KICKED)
+#define PV_LOCKSTAT_WAKE_SPURIOUS	(1 << _PV_WAKE_SPURIOUS)
+#define PV_LOCKSTAT_KICK_NOHALT		(1 << _PV_KICK_NOHALT)
+#endif
+
 struct pv_l
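Based on the description above, the PV-aware unlock can be pictured as follows. This is a sketch reconstructed from the commit message, not the patch code; queue_spin_unlock_slowpath() is an assumed name for "the slowpath function" the message mentions, and paravirt_spinlocks_enabled is the static key introduced in patch 08:

/* Sketch of a PV-aware unlock: fast byte store on bare metal; with
 * paravirt enabled, atomically clear the lock byte and fall into the
 * slowpath if the byte was changed to flag a halted queue head. */
static inline void queue_spin_unlock(struct qspinlock *lock)
{
	barrier();
	if (static_key_false(&paravirt_spinlocks_enabled)) {
		if (cmpxchg((u8 *)lock, _Q_LOCKED_VAL, 0) != _Q_LOCKED_VAL)
			queue_spin_unlock_slowpath(lock);	/* kick queue head */
	} else {
		ACCESS_ONCE(*(u8 *)lock) = 0;
	}
}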
[PATCH v14 08/11] qspinlock, x86: Rename paravirt_ticketlocks_enabled
This patch renames the paravirt_ticketlocks_enabled static key to a
more generic paravirt_spinlocks_enabled name.

Signed-off-by: Waiman Long
Signed-off-by: Peter Zijlstra
---
 arch/x86/include/asm/spinlock.h      |   4 ++--
 arch/x86/kernel/kvm.c                |   2 +-
 arch/x86/kernel/paravirt-spinlocks.c |   4 ++--
 arch/x86/xen/spinlock.c              |   2 +-
 4 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 5a5b90c..de9ecbe 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -39,7 +39,7 @@
 /* How long a lock should spin before we consider blocking */
 #define SPIN_THRESHOLD	(1 << 15)
 
-extern struct static_key paravirt_ticketlocks_enabled;
+extern struct static_key paravirt_spinlocks_enabled;
 static __always_inline bool static_key_false(struct static_key *key);
 
 #ifdef CONFIG_QUEUE_SPINLOCK
@@ -150,7 +150,7 @@ static inline void __ticket_unlock_slowpath(arch_spinlock_t *lock,
 static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
 	if (TICKET_SLOWPATH_FLAG &&
-	    static_key_false(&paravirt_ticketlocks_enabled)) {
+	    static_key_false(&paravirt_spinlocks_enabled)) {
 		arch_spinlock_t prev;
 
 		prev = *lock;
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 94f6434..2f1bcc9 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -834,7 +834,7 @@ static __init int kvm_spinlock_init_jump(void)
 	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
 		return 0;
 
-	static_key_slow_inc(&paravirt_ticketlocks_enabled);
+	static_key_slow_inc(&paravirt_spinlocks_enabled);
 	printk(KERN_INFO "KVM setup paravirtual spinlock\n");
 
 	return 0;
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index bbb6c73..e434f24 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -16,5 +16,5 @@ struct pv_lock_ops pv_lock_ops = {
 };
 EXPORT_SYMBOL(pv_lock_ops);
 
-struct static_key paravirt_ticketlocks_enabled = STATIC_KEY_INIT_FALSE;
-EXPORT_SYMBOL(paravirt_ticketlocks_enabled);
+struct static_key paravirt_spinlocks_enabled = STATIC_KEY_INIT_FALSE;
+EXPORT_SYMBOL(paravirt_spinlocks_enabled);
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 23b45eb..d332ae0 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -293,7 +293,7 @@ static __init int xen_init_spinlocks_jump(void)
 	if (!xen_domain())
 		return 0;
 
-	static_key_slow_inc(&paravirt_ticketlocks_enabled);
+	static_key_slow_inc(&paravirt_spinlocks_enabled);
 	return 0;
 }
 early_initcall(xen_init_spinlocks_jump);
--
1.7.1
[PATCH v14 11/11] pvqspinlock, x86: Enable PV qspinlock for XEN
This patch adds the necessary XEN specific code to allow XEN to support
the CPU halting and kicking operations needed by the queue spinlock PV
code.

Signed-off-by: Waiman Long
---
 arch/x86/xen/spinlock.c | 149 +--
 kernel/Kconfig.locks    |   2 +-
 2 files changed, 145 insertions(+), 6 deletions(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index d332ae0..60e444c 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -17,6 +17,12 @@
 #include "xen-ops.h"
 #include "debugfs.h"
 
+static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
+static DEFINE_PER_CPU(char *, irq_name);
+static bool xen_pvspin = true;
+
+#ifndef CONFIG_QUEUE_SPINLOCK
+
 enum xen_contention_stat {
 	TAKEN_SLOW,
 	TAKEN_SLOW_PICKUP,
@@ -100,12 +106,9 @@ struct xen_lock_waiting {
 	__ticket_t want;
 };
 
-static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
-static DEFINE_PER_CPU(char *, irq_name);
 static DEFINE_PER_CPU(struct xen_lock_waiting, lock_waiting);
 static cpumask_t waiting_cpus;
 
-static bool xen_pvspin = true;
 __visible void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
 {
 	int irq = __this_cpu_read(lock_kicker_irq);
@@ -213,6 +216,118 @@ static void xen_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
 	}
 }
 
+#else /* CONFIG_QUEUE_SPINLOCK */
+
+#ifdef CONFIG_XEN_DEBUG_FS
+static u32 kick_nohlt_stats;		/* Kick but not halt count	*/
+static u32 halt_qhead_stats;		/* Queue head halting count	*/
+static u32 halt_qnode_stats;		/* Queue node halting count	*/
+static u32 halt_abort_stats;		/* Halting abort count		*/
+static u32 wake_kick_stats;		/* Wakeup by kicking count	*/
+static u32 wake_spur_stats;		/* Spurious wakeup count	*/
+static u64 time_blocked;		/* Total blocking time		*/
+
+void xen_lock_stats(int stat_types)
+{
+	if (stat_types & PV_LOCKSTAT_WAKE_KICKED)
+		add_smp(&wake_kick_stats, 1);
+	if (stat_types & PV_LOCKSTAT_WAKE_SPURIOUS)
+		add_smp(&wake_spur_stats, 1);
+	if (stat_types & PV_LOCKSTAT_KICK_NOHALT)
+		add_smp(&kick_nohlt_stats, 1);
+	if (stat_types & PV_LOCKSTAT_HALT_QHEAD)
+		add_smp(&halt_qhead_stats, 1);
+	if (stat_types & PV_LOCKSTAT_HALT_QNODE)
+		add_smp(&halt_qnode_stats, 1);
+	if (stat_types & PV_LOCKSTAT_HALT_ABORT)
+		add_smp(&halt_abort_stats, 1);
+}
+PV_CALLEE_SAVE_REGS_THUNK(xen_lock_stats);
+
+static inline u64 spin_time_start(void)
+{
+	return sched_clock();
+}
+
+static inline void spin_time_accum_blocked(u64 start)
+{
+	u64 delta;
+
+	delta = sched_clock() - start;
+	add_smp(&time_blocked, delta);
+}
+#else /* CONFIG_XEN_DEBUG_FS */
+static inline void xen_lock_stats(int stat_types)
+{
+}
+
+static inline u64 spin_time_start(void)
+{
+	return 0;
+}
+
+static inline void spin_time_accum_blocked(u64 start)
+{
+}
+#endif /* CONFIG_XEN_DEBUG_FS */
+
+void xen_kick_cpu(int cpu)
+{
+	xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
+}
+PV_CALLEE_SAVE_REGS_THUNK(xen_kick_cpu);
+
+/*
+ * Halt the current CPU & release it back to the host
+ * Return 0 if halted, -1 otherwise.
+ */
+int xen_halt_cpu(u8 *byte, u8 val)
+{
+	int irq = __this_cpu_read(lock_kicker_irq);
+	unsigned long flags;
+	u64 start;
+
+	/* If kicker interrupts not initialized yet, just spin */
+	if (irq == -1)
+		return -1;
+
+	/*
+	 * Make sure an interrupt handler can't upset things in a
+	 * partially setup state.
+	 */
+	local_irq_save(flags);
+	start = spin_time_start();
+
+	/* clear pending */
+	xen_clear_irq_pending(irq);
+
+	/* Allow interrupts while blocked */
+	local_irq_restore(flags);
+
+	/*
+	 * Don't halt if the content of the given byte address differs from
+	 * the expected value. A read memory barrier is added to make sure that
+	 * the latest value of the byte address is fetched.
+	 */
+	smp_rmb();
+	if (*byte != val) {
+		xen_lock_stats(PV_LOCKSTAT_HALT_ABORT);
+		return -1;
+	}
+	/*
+	 * If an interrupt happens here, it will leave the wakeup irq
+	 * pending, which will cause xen_poll_irq() to return
+	 * immediately.
+	 */
+
+	/* Block until irq becomes pending (or perhaps a spurious wakeup) */
+	xen_poll_irq(irq);
+	spin_time_accum_blocked(start);
+	return 0;
+}
+PV_CALLEE_SAVE_REGS_THUNK(xen_halt_cpu);
+
+#endif /* CONFIG_QUEUE_SPINLOCK */
+
 static irqreturn_t dummy_handler(int irq, void *dev_id)
 {
 	BUG();
@@ -258,7 +373,6 @@ void xen_uninit_lock_cpu(int cpu)
 	per_cpu(irq_name, cpu) = NULL;
 }
 
-
 /*
  * Our init of PV spinlocks is split in two init functions due to us
 * using
[PATCH v14 10/11] pvqspinlock, x86: Enable PV qspinlock for KVM
This patch adds the necessary KVM specific code to allow KVM to support
the CPU halting and kicking operations needed by the queue spinlock PV
code.

Two KVM guests of 20 CPU cores (2 nodes) were created for performance
testing in one of the following two configurations:
 1) Only 1 VM is active
 2) Both VMs are active and they share the same 20 physical CPUs
    (200% overcommit)

The tests run included the disk workload of the AIM7 benchmark on both
ext4 and xfs RAM disks at 3000 users on a 3.17 based kernel. The
"ebizzy -m" test and futextest were also run and their performance data
recorded. With two VMs running, the "idle=poll" kernel option was added
to simulate a busy guest. If PV qspinlock is not enabled, the unfair
lock will be used automatically in a guest.

		AIM7 XFS Disk Test (no overcommit)
  kernel		JPM	Real Time	Sys Time	Usr Time
  ------		---	---------	--------	--------
  PV ticketlock		2542373	  7.08		 98.95		 5.44
  PV qspinlock		2549575	  7.06		 98.63		 5.40
  unfairlock		2616279	  6.91		 97.05		 5.42

		AIM7 XFS Disk Test (200% overcommit)
  kernel		JPM	Real Time	Sys Time	Usr Time
  ------		---	---------	--------	--------
  PV ticketlock		644468	 27.93		415.22		 6.33
  PV qspinlock		645624	 27.88		419.84		 0.39
  unfairlock		695518	 25.88		377.40		 4.09

		AIM7 EXT4 Disk Test (no overcommit)
  kernel		JPM	Real Time	Sys Time	Usr Time
  ------		---	---------	--------	--------
  PV ticketlock		1995565	  9.02		103.67		 5.76
  PV qspinlock		2011173	  8.95		102.15		 5.40
  unfairlock		2066590	  8.71		 98.13		 5.46

		AIM7 EXT4 Disk Test (200% overcommit)
  kernel		JPM	Real Time	Sys Time	Usr Time
  ------		---	---------	--------	--------
  PV ticketlock		478341	 37.63		495.81		30.78
  PV qspinlock		474058	 37.97		475.74		30.95
  unfairlock		560224	 32.13		398.43		26.27

For the AIM7 disk workload, both PV ticketlock and qspinlock have
about the same performance. The unfairlock performs slightly better
than the PV lock.

		EBIZZY-m Test (no overcommit)
  kernel		Rec/s	Real Time	Sys Time	Usr Time
  ------		-----	---------	--------	--------
  PV ticketlock		3255	 10.00		 60.65		 3.62
  PV qspinlock		3318	 10.00		 54.27		 3.60
  unfairlock		2833	 10.00		 26.66		 3.09

		EBIZZY-m Test (200% overcommit)
  kernel		Rec/s	Real Time	Sys Time	Usr Time
  ------		-----	---------	--------	--------
  PV ticketlock		 841	 10.00		 71.03		 2.37
  PV qspinlock		 834	 10.00		 68.27		 2.39
  unfairlock		 865	 10.00		 27.08		 1.51

		futextest (no overcommit)
  kernel		kops/s
  ------		------
  PV ticketlock		11523
  PV qspinlock		12328
  unfairlock		 9478

		futextest (200% overcommit)
  kernel		kops/s
  ------		------
  PV ticketlock		 7276
  PV qspinlock		 7095
  unfairlock		 5614

The ebizzy and futextest have much higher spinlock contention than the
AIM7 disk workloads.

Additional kernel build tests had been done on an 8-socket 32-core
Westmere-EX system (HT off) with an overcommitted KVM PV guest of 60
vCPUs with no NUMA awareness. The kernel is 3.18-2 based. The build
times of a 3.17 kernel with "make -j 60" are shown in the table below:

  kernel		Elapsed time	User time	Sys time
  ------		------------	---------	--------
  PV ticketlock		16m57s		219m17s		189m30s
  PV qspinlock		14m47s		188m50s		177m14s

There is a 13% reduction in build time. This is probably caused by the
fact that contended qspinlock produces much less cacheline contention
than contended ticket spinlock, and the test system is an 8-socket
server.

Signed-off-by: Waiman Long
---
 arch/x86/kernel/kvm.c | 143 -
 kernel/Kconfig.locks  |   2 +-
 2 files changed, 143 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 2f1bcc9..29a312d 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -575,7 +575,7 @@ arch_initcall(activate_jump_labels);
 #ifdef CONFIG_PARAVIRT_SPINLOCKS
 
 /* Kick a cpu by its apicid. Used to wake up a halted vcpu */
-static void kvm_kick_cpu(int cpu)
+void kvm_kick_cpu(int cpu)
 {
 	int apicid;
 	unsigned long flags = 0;
@@ -583,7 +583,9 @@ static void kvm_kick_cpu(int cpu)
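For orientation, the halt operation that pairs with kvm_kick_cpu() conventionally follows the KVM PV-unhalt pattern. The sketch below is my reconstruction under that assumption (the function name kvm_halt_cpu and its exact checks are assumptions, not the literal patch code):

/* Sketch: halt until kicked, aborting if the watched byte changed.
 * Relies on KVM_FEATURE_PV_UNHALT so the kick IPI wakes a halted vCPU. */
static int kvm_halt_cpu(u8 *byte, u8 val)
{
	unsigned long flags;
	int ret = -1;

	if (in_nmi())		/* don't halt in NMI context */
		return -1;

	local_irq_save(flags);
	/* re-check the wait condition with interrupts disabled */
	if (ACCESS_ONCE(*byte) == val) {
		/* the PV-unhalt kick wakes us even with interrupts off */
		if (arch_irqs_disabled_flags(flags))
			halt();
		else
			safe_halt();
		ret = 0;	/* halted (kicked or spurious wakeup) */
	}
	local_irq_restore(flags);
	return ret;
}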
Re: KVM: x86: workaround SuSE's 2.6.16 pvclock vs masterclock issue
On 20/01/2015 18:54, Marcelo Tosatti wrote:
> 
> SuSE's 2.6.16 kernel fails to boot if the delta between tsc_timestamp
> and rdtsc is larger than a given threshold:
> 
>  * If we get more than the below threshold into the future, we rerequest
>  * the real time from the host again which has only little offset then
>  * that we need to adjust using the TSC.
>  *
>  * For now that threshold is 1/5th of a jiffie. That should be good
>  * enough accuracy for completely broken systems, but also give us swing
>  * to not call out to the host all the time.
>  */
> #define PVCLOCK_DELTA_MAX ((10ULL / HZ) / 5)
> 
> Disable masterclock support (which increases said delta) in case the
> boot vcpu does not use MSR_KVM_SYSTEM_TIME_NEW.

Makes sense. Applied to queue, thanks.

Paolo

> Upstream kernels which support pvclock vsyscalls (and therefore make
> use of PVCLOCK_STABLE_BIT) use MSR_KVM_SYSTEM_TIME_NEW.
> 
> Signed-off-by: Marcelo Tosatti
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 7c492ed..9a099f6 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -613,6 +613,8 @@ struct kvm_arch {
>  #ifdef CONFIG_KVM_MMU_AUDIT
>  	int audit_point;
>  #endif
> +
> +	bool boot_vcpu_runs_old_kvmclock;
>  };
>  
>  struct kvm_vm_stat {
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 8f1e22d..1d8a4f6 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -1494,7 +1494,8 @@ static void pvclock_update_vm_gtod_copy(struct kvm *kvm)
>  					&ka->master_cycle_now);
>  
>  	ka->use_master_clock = host_tsc_clocksource && vcpus_matched
> -				&& !backwards_tsc_observed;
> +				&& !backwards_tsc_observed
> +				&& !ka->boot_vcpu_runs_old_kvmclock;
>  
>  	if (ka->use_master_clock)
>  		atomic_set(&kvm_guest_has_master_clock, 1);
> @@ -2106,8 +2107,20 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>  	case MSR_KVM_SYSTEM_TIME_NEW:
>  	case MSR_KVM_SYSTEM_TIME: {
>  		u64 gpa_offset;
> +		struct kvm_arch *ka = &vcpu->kvm->arch;
> +
>  		kvmclock_reset(vcpu);
>  
> +		if (vcpu->vcpu_id == 0 && !msr_info->host_initiated) {
> +			bool tmp = (msr == MSR_KVM_SYSTEM_TIME);
> +
> +			if (ka->boot_vcpu_runs_old_kvmclock != tmp)
> +				set_bit(KVM_REQ_MASTERCLOCK_UPDATE,
> +					&vcpu->requests);
> +
> +			ka->boot_vcpu_runs_old_kvmclock = tmp;
> +		}
> +
>  		vcpu->arch.time = data;
>  		kvm_make_request(KVM_REQ_GLOBAL_CLOCK_UPDATE, vcpu);
> 
Re: [patch -rt 1/2] KVM: use simple waitqueue for vcpu->wq
On Tue, 20 Jan 2015 16:46:53 +1100
Paul Mackerras wrote:

> On Mon, Jan 19, 2015 at 12:41:00PM -0200, Marcelo Tosatti wrote:
> > On Fri, Jan 16, 2015 at 11:48:46AM -0500, Steven Rostedt wrote:
> > > >  static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc)
> > > >  {
> > > > -	DEFINE_WAIT(wait);
> > > > +	DEFINE_SWAITER(wait);
> > > >  
> > > > -	prepare_to_wait(&vc->wq, &wait, TASK_INTERRUPTIBLE);
> > > > +	swait_prepare(&vc->wq, &wait, TASK_INTERRUPTIBLE);
> > > >  	vc->vcore_state = VCORE_SLEEPING;
> > > >  	spin_unlock(&vc->lock);
> > > >  	schedule();
> > > > -	finish_wait(&vc->wq, &wait);
> > > > +	swait_finish(&vc->wq, &wait);
> > > >  	spin_lock(&vc->lock);
> > > >  	vc->vcore_state = VCORE_INACTIVE;
> > > >  }
> > > > @@ -1613,7 +1613,7 @@
> > > >  		kvmppc_create_dtl_entry(vcpu, vc);
> > > >  		kvmppc_start_thread(vcpu);
> > > >  	} else if (vc->vcore_state == VCORE_SLEEPING) {
> > > > -		wake_up(&vc->wq);
> > > > +		swait_wake(&vc->wq);
> > > 
> > > I notice everywhere you have a swait_wake_interruptible() but here.
> > > Is there a reason why?
> > > 
> > > IIRC, Peter wants to make swait wakeup usage homogenous. That is,
> > > you either sleep in an interruptible state, or you don't. You can't
> > > mix and match it.
> > 
> > IIUC there is only one waiter on this waitqueue at any given time.
> > 
> > Paul, is that correct?
> 
> Yes, that's right. It's only the task that has taken the
> responsibility for running the virtual core that would be waiting on
> that wait queue.

Thanks Paul, but it still makes me nervous. I'm actually wondering if
we should just nuke the _interruptible() version of swait. As it should
only be all interruptible or all not interruptible, swait_wake() should
just do the wake up regardless. In which case, swait_wake() is good
enough. No need to have different versions where people may think they
do something special.

Peter?

-- Steve
KVM: x86: workaround SuSE's 2.6.16 pvclock vs masterclock issue
SuSE's 2.6.16 kernel fails to boot if the delta between tsc_timestamp
and rdtsc is larger than a given threshold:

 * If we get more than the below threshold into the future, we rerequest
 * the real time from the host again which has only little offset then
 * that we need to adjust using the TSC.
 *
 * For now that threshold is 1/5th of a jiffie. That should be good
 * enough accuracy for completely broken systems, but also give us swing
 * to not call out to the host all the time.
 */
#define PVCLOCK_DELTA_MAX ((1000000000ULL / HZ) / 5)

Disable masterclock support (which increases said delta) in case the
boot vcpu does not use MSR_KVM_SYSTEM_TIME_NEW.

Upstream kernels which support pvclock vsyscalls (and therefore make
use of PVCLOCK_STABLE_BIT) use MSR_KVM_SYSTEM_TIME_NEW.

Signed-off-by: Marcelo Tosatti

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 7c492ed..9a099f6 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -613,6 +613,8 @@ struct kvm_arch {
 #ifdef CONFIG_KVM_MMU_AUDIT
 	int audit_point;
 #endif
+
+	bool boot_vcpu_runs_old_kvmclock;
 };

 struct kvm_vm_stat {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 8f1e22d..1d8a4f6 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1494,7 +1494,8 @@ static void pvclock_update_vm_gtod_copy(struct kvm *kvm)
 					&ka->master_cycle_now);

 	ka->use_master_clock = host_tsc_clocksource && vcpus_matched
-				&& !backwards_tsc_observed;
+				&& !backwards_tsc_observed
+				&& !ka->boot_vcpu_runs_old_kvmclock;

 	if (ka->use_master_clock)
 		atomic_set(&kvm_guest_has_master_clock, 1);
@@ -2106,8 +2107,20 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_KVM_SYSTEM_TIME_NEW:
 	case MSR_KVM_SYSTEM_TIME: {
 		u64 gpa_offset;
+		struct kvm_arch *ka = &vcpu->kvm->arch;
+
 		kvmclock_reset(vcpu);

+		if (vcpu->vcpu_id == 0 && !msr_info->host_initiated) {
+			bool tmp = (msr == MSR_KVM_SYSTEM_TIME);
+
+			if (ka->boot_vcpu_runs_old_kvmclock != tmp)
+				set_bit(KVM_REQ_MASTERCLOCK_UPDATE,
+					&vcpu->requests);
+
+			ka->boot_vcpu_runs_old_kvmclock = tmp;
+		}
+
 		vcpu->arch.time = data;
 		kvm_make_request(KVM_REQ_GLOBAL_CLOCK_UPDATE, vcpu);
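The failure mode is easier to see with the guest-side arithmetic written out: a kvmclock guest computes time as system_time plus the scaled TSC delta since tsc_timestamp, so a host that rarely refreshes the record produces an ever-growing delta. A simplified sketch of the old guest's check, assuming the pvclock ABI field names, the kernel's pvclock_scale_delta() helper, and a hypothetical guest_rerequest_system_time() function; this illustrates the described behavior and is not the SuSE 2.6.16 source:

/* HZ and pvclock_scale_delta() as in the kernel headers. */
struct pvclock_vcpu_time_info {
	u32 version;
	u32 pad0;
	u64 tsc_timestamp;	/* host TSC value at the last update */
	u64 system_time;	/* nanoseconds at the last update */
	u32 tsc_to_system_mul;
	s8  tsc_shift;
	u8  flags;
	u8  pad[2];
};

#define NSEC_PER_SEC	1000000000ULL
#define PVCLOCK_DELTA_MAX	((NSEC_PER_SEC / HZ) / 5)	/* 1/5 jiffie */

static u64 guest_clock_read(struct pvclock_vcpu_time_info *ti, u64 tsc)
{
	/* Nanoseconds elapsed since the host last refreshed the record. */
	u64 ns = pvclock_scale_delta(tsc - ti->tsc_timestamp,
				     ti->tsc_to_system_mul, ti->tsc_shift);

	/*
	 * The old guest assumes the host refreshes system_time frequently,
	 * so a large delta is treated as stale data and the guest asks the
	 * host for a fresh copy.  The masterclock makes refreshes rare by
	 * design, the delta then routinely exceeds the threshold, and the
	 * 2.6.16 guest gets stuck re-requesting time -- hence the
	 * workaround of disabling the masterclock for such guests.
	 */
	if (ns > PVCLOCK_DELTA_MAX)
		return guest_rerequest_system_time(ti);	/* hypothetical */

	return ti->system_time + ns;
}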
Re: [PATCH kvm-unit-tests 0/2] x86: do not use MMIO interface under x2APIC
On 20/01/2015 17:35, Radim Krčmář wrote:
> We failed smptest [1/2], but didn't notice [2/2].
>
> APICv returns 0 on APIC MMIO reads under x2APIC,
> KVM returns real APIC ID instead.
>
> Do we want to test this weirdness in another unit test?
> (I'll post a patch for KVM after checking that there aren't quirks.)
>
> Radim Krčmář (2):
>   lib/x86: fix apic_id() under x2APIC
>   x86: smptest: use report
>
>  lib/x86/apic.c | 2 +-
>  x86/smptest.c  | 8 +++-
>  2 files changed, 8 insertions(+), 2 deletions(-)

Thanks, applying.

Paolo
[PATCH kvm-unit-tests 1/2] lib/x86: fix apic_id() under x2APIC
We used MMIO (xAPIC) for x2APIC. This is the same as using xAPIC in
globally disabled mode.

Signed-off-by: Radim Krčmář
---
 lib/x86/apic.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/x86/apic.c b/lib/x86/apic.c
index 6876d85fac95..9c42c4d0a4fc 100644
--- a/lib/x86/apic.c
+++ b/lib/x86/apic.c
@@ -70,7 +70,7 @@ static void x2apic_icr_write(u32 val, u32 dest)

 static uint32_t x2apic_id(void)
 {
-    return xapic_read(APIC_ID);
+    return x2apic_read(APIC_ID);
 }

 static const struct apic_ops x2apic_ops = {
--
2.2.1
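The one-character fix is easier to follow with the access mechanics spelled out: in x2APIC mode the APIC registers are exposed through MSRs at 0x800 + (MMIO offset >> 4), and the legacy 0xfee00000 MMIO window is no longer a valid way to reach them, which is why the MMIO read misbehaves (0 under APICv, as the cover letter notes). A minimal sketch of the two access paths; the helper bodies are reconstructed for illustration, not quoted from lib/x86/apic.c:

#include "libcflat.h"

#define APIC_BASE_MSR		0x800
#define APIC_ID			0x20		/* xAPIC MMIO offset of the ID register */
#define APIC_DEFAULT_PHYS_BASE	0xfee00000UL

/* xAPIC path: a plain MMIO load; not valid once x2APIC mode is enabled. */
static u32 xapic_read(u32 reg)
{
	return *(volatile u32 *)(APIC_DEFAULT_PHYS_BASE + reg);
}

/* x2APIC path: the same register, reached through its MSR alias. */
static u32 x2apic_read(u32 reg)
{
	u32 lo, hi;

	/* APIC_ID (0x20) maps to MSR 0x802; the low word holds the
	 * full 32-bit x2APIC ID. */
	asm volatile("rdmsr" : "=a"(lo), "=d"(hi)
			     : "c"(APIC_BASE_MSR + (reg >> 4)));
	return lo;
}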
[PATCH kvm-unit-tests 2/2] x86: smptest: use report
smptest was failing and we didn't notice; turn it into a unit test.

Signed-off-by: Radim Krčmář
---
 x86/smptest.c | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/x86/smptest.c b/x86/smptest.c
index 37805999b3b0..acda22e18314 100644
--- a/x86/smptest.c
+++ b/x86/smptest.c
@@ -1,6 +1,8 @@
 #include "libcflat.h"
 #include "smp.h"

+unsigned nipis;
+
 static void ipi_test(void *data)
 {
     int n = (long)data;
@@ -8,6 +10,8 @@ static void ipi_test(void *data)
     printf("ipi called, cpu %d\n", n);
     if (n != smp_id())
         printf("but wrong cpu %d\n", smp_id());
+    else
+        nipis++;
 }

 int main()
@@ -21,5 +25,7 @@ int main()
     printf("found %d cpus\n", ncpus);
     for (i = 0; i < ncpus; ++i)
         on_cpu(i, ipi_test, (void *)(long)i);
-    return 0;
+
+    report("IPI to each CPU", nipis == ncpus);
+    return report_summary();
 }
--
2.2.1
[PATCH kvm-unit-tests 0/2] x86: do not use MMIO interface under x2APIC
We failed smptest [1/2], but didn't notice [2/2].

APICv returns 0 on APIC MMIO reads under x2APIC,
KVM returns real APIC ID instead.

Do we want to test this weirdness in another unit test?
(I'll post a patch for KVM after checking that there aren't quirks.)

Radim Krčmář (2):
  lib/x86: fix apic_id() under x2APIC
  x86: smptest: use report

 lib/x86/apic.c | 2 +-
 x86/smptest.c  | 8 +++-
 2 files changed, 8 insertions(+), 2 deletions(-)

--
2.2.1
Re: [PATCH kvm-unit-tests 0/2] x86: remove tscdeadline_latency from unittest.cfg
On 20/01/2015 15:27, Radim Krčmář wrote:
> unittest.cfg expects tscdeadline_latency.flat, while the existing
> filename contains a dash => we don't call it from ./run_tests.sh.
> (Which is considered a PASS :)
>
> tscdeadline_latency isn't a unit test so we can omit it.
> (Feel free to drop [1/2], I just loathe inconsistencies.)

Applied, thanks.

Paolo
Re: KVM call for agenda for 2015-01-20
Juan Quintela wrote:
> Hi
>
> Please, send any topic that you are interested in covering.
>
> Thanks, Juan.
>
> Call details:
>
> By popular demand, a google calendar public entry with it

As there is no agenda, the call is cancelled. Sorry for the late
notification.

> https://www.google.com/calendar/embed?src=dG9iMXRqcXAzN3Y4ZXZwNzRoMHE4a3BqcXNAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ
>
> (Let me know if you have any problems with the calendar entry. I just
> gave up on getting it right at the same time in CEST, CET, EDT and DST.)
>
> If you need phone number details, contact me privately.
>
> Thanks, Juan.
[PATCH kvm-unit-tests 2/2] x86: remove tscdeadline_latency from unittest.cfg
There is no point in executing it through run_tests.sh.

Signed-off-by: Radim Krčmář
---
 x86/unittests.cfg | 5 -
 1 file changed, 5 deletions(-)

diff --git a/x86/unittests.cfg b/x86/unittests.cfg
index 75b959535c01..badb08ad138e 100644
--- a/x86/unittests.cfg
+++ b/x86/unittests.cfg
@@ -161,8 +161,3 @@ arch = x86_64
 [debug]
 file = debug.flat
 arch = x86_64
-
-[tscdeadline_latency]
-file = tscdeadline_latency.flat
-extra_params = -cpu qemu64,+tsc-deadline
-arch = x86_64
--
2.2.1
[PATCH kvm-unit-tests 1/2] x86: use underline in tscdeadline-latency filename
x86/unittests.cfg uses the variant with underscore. Rename
tscdeadline-latency.c instead of fixing x86/unittests.cfg, because we
use only underscores elsewhere.

Signed-off-by: Radim Krčmář
---
 config/config-x86-common.mak                         | 2 +-
 config/config-x86_64.mak                             | 2 +-
 x86/{tscdeadline-latency.c => tscdeadline_latency.c} | 0
 3 files changed, 2 insertions(+), 2 deletions(-)
 rename x86/{tscdeadline-latency.c => tscdeadline_latency.c} (100%)

diff --git a/config/config-x86-common.mak b/config/config-x86-common.mak
index e016529dc110..8fc3490bd8bd 100644
--- a/config/config-x86-common.mak
+++ b/config/config-x86-common.mak
@@ -69,7 +69,7 @@ $(TEST_DIR)/tsc_adjust.elf: $(cstart.o) $(TEST_DIR)/tsc_adjust.o

 $(TEST_DIR)/apic.elf: $(cstart.o) $(TEST_DIR)/apic.o

-$(TEST_DIR)/tscdeadline-latency.elf: $(cstart.o) $(TEST_DIR)/tscdeadline-latency.o
+$(TEST_DIR)/tscdeadline_latency.elf: $(cstart.o) $(TEST_DIR)/tscdeadline_latency.o

 $(TEST_DIR)/init.elf: $(cstart.o) $(TEST_DIR)/init.o

diff --git a/config/config-x86_64.mak b/config/config-x86_64.mak
index dbbdaf5c76f3..d62c1e9ae42d 100644
--- a/config/config-x86_64.mak
+++ b/config/config-x86_64.mak
@@ -9,6 +9,6 @@ tests = $(TEST_DIR)/access.flat $(TEST_DIR)/apic.flat \
 	$(TEST_DIR)/pcid.flat $(TEST_DIR)/debug.flat
 tests += $(TEST_DIR)/svm.flat
 tests += $(TEST_DIR)/vmx.flat
-tests += $(TEST_DIR)/tscdeadline-latency.flat
+tests += $(TEST_DIR)/tscdeadline_latency.flat

 include config/config-x86-common.mak

diff --git a/x86/tscdeadline-latency.c b/x86/tscdeadline_latency.c
similarity index 100%
rename from x86/tscdeadline-latency.c
rename to x86/tscdeadline_latency.c
--
2.2.1
[PATCH kvm-unit-tests 0/2] x86: remove tscdeadline_latency from unittest.cfg
unittest.cfg expects tscdeadline_latency.flat, while the existing
filename contains a dash => we don't call it from ./run_tests.sh.
(Which is considered a PASS :)

tscdeadline_latency isn't a unit test so we can omit it.
(Feel free to drop [1/2], I just loathe inconsistencies.)

Radim Krčmář (2):
  x86: use underline in tscdeadline-latency filename
  x86: remove tscdeadline_latency from unittest.cfg

 config/config-x86-common.mak                         | 2 +-
 config/config-x86_64.mak                             | 2 +-
 x86/{tscdeadline-latency.c => tscdeadline_latency.c} | 0
 x86/unittests.cfg                                    | 5 -
 4 files changed, 2 insertions(+), 7 deletions(-)
 rename x86/{tscdeadline-latency.c => tscdeadline_latency.c} (100%)

--
2.2.1
Re: [PATCH 1/1] arch/x86/kvm/vmx.c: Fix external interrupts inject directly bug with guestos RFLAGS.IF=0
On 20/01/2015 11:34, Li Kaihang wrote:
> Li Kaihang: I think I made a mistake here: the IDT-vectoring
> information field is not written by a vectored event but by event
> delivery. A VM exit during event delivery is not triggered by external
> interrupt delivery; only a VM exit due to a vectored event works that
> way. The two are completely different, and you are right. I'm very
> sorry; this patch is wrong.

No problem!

Paolo
Re: [PATCH 1/1] arch/x86/kvm/vmx.c: Fix external interrupts inject directly bug with guestos RFLAGS.IF=0
From: Paolo Bonzini
To: Li Kaihang, g...@kernel.org
Cc: t...@linutronix.de, mi...@redhat.com, h...@zytor.com, x...@kernel.org, kvm@vger.kernel.org, linux-ker...@vger.kernel.org
Date: 2015-01-19 11:29 PM
Subject: Re: [PATCH 1/1] arch/x86/kvm/vmx.c: Fix external interrupts inject directly bug with guestos RFLAGS.IF=0

On 15/01/2015 13:36, Li Kaihang wrote:
> This patch fixes an external interrupt injection bug in Linux 3.19-rc4.
>
> The guest OS is running and handling some interrupt with RFLAGS.IF = 0
> when an external interrupt arrives, which can lead to a VM exit. In
> this case we must avoid injecting the external interrupt, or it will
> generate a processor hardware exception and crash the virtual machine.

I do not understand what is happening here. Between the time the
processor starts delivering an external interrupt to the VM, and the
time it decides to do a VM exit because of an external interrupt in the
host, IF becomes 0. What is the cause of the external interrupt? Why
does IF become 0?

> Now, I show more details about this problem:
>
> General external interrupt processing for a running virtual machine
> looks like the following:
>
> Step 1:
> an external interrupt generates a VM exit

How did the external interrupt cause the IDT-vectoring information field
to be set? External interrupts for the host are not among the causes
listed in "27.2.3 Information for VM Exits During Event Delivery".

> --> vmx_complete_interrupts --> __vmx_complete_interrupts
> --> case INTR_TYPE_EXT_INTR:
>         kvm_queue_interrupt(vcpu, vector, type == INTR_TYPE_SOFT_INTR);
>
> Step 2:
> kvm_x86_ops->handle_external_intr(vcpu);

Why is this relevant? The external interrupt is a vectored event, so it
sets the VM-exit interruption information ("27.2.2 Information for VM
Exits Due to Vectored Events"). It doesn't set the IDT-vectoring
information field.

Li Kaihang: I think I made a mistake here: the IDT-vectoring information
field is not written by a vectored event but by event delivery. A VM
exit during event delivery is not triggered by external interrupt
delivery; only a VM exit due to a vectored event works that way. The two
are completely different, and you are right. I'm very sorry; this patch
is wrong.

Paolo

> Step 3:
> get back to vcpu_enter_guest after a while loop, then run
> inject_pending_event
>
> Step 4:
> if (vcpu->arch.interrupt.pending) {
>         kvm_x86_ops->set_irq(vcpu);
>         return 0;
> }
>
> Step 5:
> kvm_x86_ops->run(vcpu) --> VM entry injects the vector through the
> guest OS IDT
>
> For the above steps, steps 4 and 5 will cause a processor hardware
> exception if step 1 happens while the guest OS has RFLAGS.IF = 0, that
> is to say, while guest interrupts are disabled. So we should add logic
> in step 1 to judge whether an external interrupt needs to be pended
> rather than injected directly; in that case we don't need to worry
> about losing the external interrupt, because step 2 will handle it and
> choose the best chance to inject it via the virtual interrupt
> controller.
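The two VMCS fields being contrasted here are easy to conflate, and the whole thread turns on the distinction. A short sketch using field names from arch/x86/kvm/vmx.c; the surrounding handler is illustrative, not KVM code:

/*
 * On every VM exit the CPU reports two independent pieces of
 * information; confusing them led to the misreading above.
 */
static void classify_exit_event_info(void)
{
	u32 exit_intr  = vmcs_read32(VM_EXIT_INTR_INFO);	/* SDM 27.2.2 */
	u32 idt_vector = vmcs_read32(IDT_VECTORING_INFO_FIELD);	/* SDM 27.2.3 */

	if (exit_intr & INTR_INFO_VALID_MASK) {
		/*
		 * The VM exit itself was caused by a vectored event, e.g.
		 * a host external interrupt arriving in non-root mode.
		 * This says nothing about what the guest was doing.
		 */
	}

	if (idt_vector & INTR_INFO_VALID_MASK) {
		/*
		 * The exit interrupted delivery of an event *to the guest*
		 * (e.g. an EPT violation while injecting an interrupt);
		 * only this case requires re-injecting the event on the
		 * next VM entry.  A host external interrupt never sets
		 * this field.
		 */
	}
}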
Re: [PATCH v2 5/5] KVM: nVMX: Enable nested posted interrupt processing.
On Tue, Jan 20, 2015 at 5:54 PM, Paolo Bonzini wrote:
> On 20/01/2015 09:48, Wincy Van wrote:
>> +static int vmx_deliver_nested_posted_interrupt(struct kvm_vcpu *vcpu,
>> +					       int vector)
>> +{
>> +	int r = 0;
>> +	struct vmcs12 *vmcs12;
>> +
>> +	/*
>> +	 * Since posted intr delivery is async,
>> +	 * we must acquire a spin-lock to avoid
>> +	 * the race of vmcs12.
>> +	 */
>> +	spin_lock(&to_vmx(vcpu)->nested.vmcs12_lock);
>> +	vmcs12 = get_vmcs12(vcpu);
>> +	if (!is_guest_mode(vcpu) || !vmcs12) {
>> +		r = -1;
>> +		goto out;
>> +	}
>
> is_guest_mode should be checked first outside the lock, to avoid
> affecting the non-nested fast path. You can then recheck it later
> inside the lock.

Agreed, will do.

> Another way to avoid the spinlock: in prepare_vmcs02 or a similar
> place, you can save vmcs12->posted_intr_nv in a new field
> vmx->nested.posted_intr_nv; just set it to -1 if
> !nested_cpu_has_posted_intr(vmcs12). In vmclear, again you just set
> the field to -1, and here you can do
>
>	if (!is_guest_mode(vcpu) ||
>	    vector != to_vmx(vcpu)->nested.posted_intr_nv) {
>		r = -1;
>		goto out;
>	}
>
> You don't need to access vmcs12, and while there is a race, it's okay
> because there is no pointer access.

That's a good idea. I will apply it to the next version.

>> +	if (vcpu->mode == IN_GUEST_MODE)
>> +		apic->send_IPI_mask(get_cpu_mask(vcpu->cpu),
>> +				    POSTED_INTR_VECTOR);
>
> Please add a comment that PIR and ON have been set by the L1 hypervisor.

Will do.

> I'll do a full review of the other patches as soon as possible.

Thank you, I will send v3 after it is done.

Wincy
Re: [PATCH v2 5/5] KVM: nVMX: Enable nested posted interrupt processing.
On 20/01/2015 09:48, Wincy Van wrote:
> +static int vmx_deliver_nested_posted_interrupt(struct kvm_vcpu *vcpu,
> +					       int vector)
> +{
> +	int r = 0;
> +	struct vmcs12 *vmcs12;
> +
> +	/*
> +	 * Since posted intr delivery is async,
> +	 * we must acquire a spin-lock to avoid
> +	 * the race of vmcs12.
> +	 */
> +	spin_lock(&to_vmx(vcpu)->nested.vmcs12_lock);
> +	vmcs12 = get_vmcs12(vcpu);
> +	if (!is_guest_mode(vcpu) || !vmcs12) {
> +		r = -1;
> +		goto out;
> +	}

is_guest_mode should be checked first outside the lock, to avoid
affecting the non-nested fast path. You can then recheck it later
inside the lock.

Another way to avoid the spinlock: in prepare_vmcs02 or a similar place,
you can save vmcs12->posted_intr_nv in a new field
vmx->nested.posted_intr_nv; just set it to -1 if
!nested_cpu_has_posted_intr(vmcs12). In vmclear, again you just set the
field to -1, and here you can do

	if (!is_guest_mode(vcpu) ||
	    vector != to_vmx(vcpu)->nested.posted_intr_nv) {
		r = -1;
		goto out;
	}

You don't need to access vmcs12, and while there is a race, it's okay
because there is no pointer access.

> +	if (vcpu->mode == IN_GUEST_MODE)
> +		apic->send_IPI_mask(get_cpu_mask(vcpu->cpu),
> +				    POSTED_INTR_VECTOR);

Please add a comment that PIR and ON have been set by the L1 hypervisor.

I'll do a full review of the other patches as soon as possible.

Paolo

> +	else {
> +		r = -1;
> +		goto out;
> +	}
> +
> +	/*
> +	 * if posted intr is done by hardware, the
> +	 * corresponding eoi was sent to L0. Thus
> +	 * we should send eoi to L1 manually.
> +	 */
> +	kvm_apic_set_eoi_accelerated(vcpu,
> +				     vmcs12->posted_intr_nv);
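To make the lock-free alternative concrete, here is a sketch of where each piece would live; vmx->nested.posted_intr_nv is the new field proposed above, and the function boundaries are illustrative rather than taken from the eventual patch:

/* In prepare_vmcs02(), where vmcs12 is stable, cache L1's notification
 * vector outside guest-owned memory (-1 when posted interrupts are off). */
static void nested_cache_posted_intr_nv(struct vcpu_vmx *vmx,
					struct vmcs12 *vmcs12)
{
	if (nested_cpu_has_posted_intr(vmcs12))
		vmx->nested.posted_intr_nv = vmcs12->posted_intr_nv;
	else
		vmx->nested.posted_intr_nv = -1;
}

/* On VMCLEAR, only the cached copy needs invalidating. */
static void nested_clear_posted_intr_nv(struct vcpu_vmx *vmx)
{
	vmx->nested.posted_intr_nv = -1;
}

/*
 * The delivery path then never dereferences vmcs12, so no lock is
 * needed; a racing VMCLEAR at worst makes the comparison fail and the
 * interrupt falls back to the ordinary (non-posted) injection path.
 */
static bool nested_can_post_interrupt(struct kvm_vcpu *vcpu, int vector)
{
	return is_guest_mode(vcpu) &&
	       vector == to_vmx(vcpu)->nested.posted_intr_nv;
}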
[PATCH v2 5/5] KVM: nVMX: Enable nested posted interrupt processing.
If a vcpu has an interrupt in vmx non-root mode, we kick that vcpu to
inject the interrupt in a timely manner. With posted interrupt
processing, the kick is not needed: interrupts are fully taken care of
by hardware. In nested vmx, this feature avoids far more vmexits than
non-nested vmx.

This patch uses L0's POSTED_INTR_NV to avoid an unexpected interrupt if
L1's vector is different from L0's. If the vcpu is in hardware non-root
mode, we use a physical IPI to deliver posted interrupts; otherwise we
deliver the interrupt to L1 and kick that vcpu out of nested non-root
mode.

Signed-off-by: Wincy Van
---
 arch/x86/kvm/vmx.c | 136 ++--
 1 files changed, 132 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index ea56e9f..cda9133 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -215,6 +215,7 @@ struct __packed vmcs12 {
 	u64 tsc_offset;
 	u64 virtual_apic_page_addr;
 	u64 apic_access_addr;
+	u64 posted_intr_desc_addr;
 	u64 ept_pointer;
 	u64 eoi_exit_bitmap0;
 	u64 eoi_exit_bitmap1;
@@ -334,6 +335,7 @@ struct __packed vmcs12 {
 	u32 vmx_preemption_timer_value;
 	u32 padding32[7]; /* room for future expansion */
 	u16 virtual_processor_id;
+	u16 posted_intr_nv;
 	u16 guest_es_selector;
 	u16 guest_cs_selector;
 	u16 guest_ss_selector;
@@ -387,6 +389,7 @@ struct nested_vmx {
 	/* The host-usable pointer to the above */
 	struct page *current_vmcs12_page;
 	struct vmcs12 *current_vmcs12;
+	spinlock_t vmcs12_lock;
 	struct vmcs *current_shadow_vmcs;
 	/*
 	 * Indicates if the shadow vmcs must be updated with the
@@ -406,6 +409,8 @@ struct nested_vmx {
 	 */
 	struct page *apic_access_page;
 	struct page *virtual_apic_page;
+	struct page *pi_desc_page;
+	struct pi_desc *pi_desc;
 	u64 msr_ia32_feature_control;

 	struct hrtimer preemption_timer;
@@ -621,6 +626,7 @@ static int max_shadow_read_write_fields =

 static const unsigned short vmcs_field_to_offset_table[] = {
 	FIELD(VIRTUAL_PROCESSOR_ID, virtual_processor_id),
+	FIELD(POSTED_INTR_NV, posted_intr_nv),
 	FIELD(GUEST_ES_SELECTOR, guest_es_selector),
 	FIELD(GUEST_CS_SELECTOR, guest_cs_selector),
 	FIELD(GUEST_SS_SELECTOR, guest_ss_selector),
@@ -646,6 +652,7 @@ static const unsigned short vmcs_field_to_offset_table[] = {
 	FIELD64(TSC_OFFSET, tsc_offset),
 	FIELD64(VIRTUAL_APIC_PAGE_ADDR, virtual_apic_page_addr),
 	FIELD64(APIC_ACCESS_ADDR, apic_access_addr),
+	FIELD64(POSTED_INTR_DESC_ADDR, posted_intr_desc_addr),
 	FIELD64(EPT_POINTER, ept_pointer),
 	FIELD64(EOI_EXIT_BITMAP0, eoi_exit_bitmap0),
 	FIELD64(EOI_EXIT_BITMAP1, eoi_exit_bitmap1),
@@ -798,6 +805,7 @@ static void kvm_cpu_vmxon(u64 addr);
 static void kvm_cpu_vmxoff(void);
 static bool vmx_mpx_supported(void);
 static bool vmx_xsaves_supported(void);
+static int vmx_vm_has_apicv(struct kvm *kvm);
 static int vmx_set_tss_addr(struct kvm *kvm, unsigned int addr);
 static void vmx_set_segment(struct kvm_vcpu *vcpu,
 			    struct kvm_segment *var, int seg);
@@ -1159,6 +1167,11 @@ static inline bool nested_cpu_has_vid(struct vmcs12 *vmcs12)
 	return nested_cpu_has2(vmcs12, SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY);
 }

+static inline bool nested_cpu_has_posted_intr(struct vmcs12 *vmcs12)
+{
+	return vmcs12->pin_based_vm_exec_control & PIN_BASED_POSTED_INTR;
+}
+
 static inline bool is_exception(u32 intr_info)
 {
 	return (intr_info & (INTR_INFO_INTR_TYPE_MASK | INTR_INFO_VALID_MASK))
@@ -2362,6 +2375,9 @@ static void nested_vmx_setup_ctls_msrs(struct vcpu_vmx *vmx)
 	vmx->nested.nested_vmx_pinbased_ctls_high |=
 		PIN_BASED_ALWAYSON_WITHOUT_TRUE_MSR |
 		PIN_BASED_VMX_PREEMPTION_TIMER;
+	if (vmx_vm_has_apicv(vmx->vcpu.kvm))
+		vmx->nested.nested_vmx_pinbased_ctls_high |=
+			PIN_BASED_POSTED_INTR;

 	/* exit controls */
 	rdmsr(MSR_IA32_VMX_EXIT_CTLS,
@@ -4267,6 +4283,46 @@ static int vmx_vm_has_apicv(struct kvm *kvm)
 	return enable_apicv && irqchip_in_kernel(kvm);
 }

+static int vmx_deliver_nested_posted_interrupt(struct kvm_vcpu *vcpu,
+					       int vector)
+{
+	int r = 0;
+	struct vmcs12 *vmcs12;
+
+	/*
+	 * Since posted intr delivery is async,
+	 * we must acquire a spin-lock to avoid
+	 * the race of vmcs12.
+	 */
+	spin_lock(&to_vmx(vcpu)->nested.vmcs12_lock);
+	vmcs12 = get_vmcs12(vcpu);
+	if (!is_guest_mode(vcpu) || !vmcs12) {
+		r = -1;
+		goto out;
+	}
+	if (vector == vmcs12->posted_intr_nv &&
+	    nested_cpu_has_posted_intr(vmcs12)) {
+		if (vcpu->mode == IN_GUEST_MODE)
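As a summary of the delivery policy the commit message describes, here is a sketch using names from the visible parts of the patch; the function below is an illustration of the idea, not the patch body:

static int deliver_nested_posted_interrupt_sketch(struct kvm_vcpu *vcpu,
						  struct vmcs12 *vmcs12,
						  int vector)
{
	if (vector != vmcs12->posted_intr_nv ||
	    !nested_cpu_has_posted_intr(vmcs12))
		return -1;	/* caller falls back to ordinary injection */

	if (vcpu->mode == IN_GUEST_MODE) {
		/*
		 * L2 is running: send a physical IPI with L0's
		 * notification vector (POSTED_INTR_VECTOR).  The CPU then
		 * processes the posted-interrupt descriptor in VMX
		 * non-root mode without any VM exit.
		 */
		apic->send_IPI_mask(get_cpu_mask(vcpu->cpu),
				    POSTED_INTR_VECTOR);
		return 0;
	}

	/*
	 * Otherwise deliver through L1: the interrupt is injected the
	 * ordinary way and the vcpu is kicked out of nested non-root mode.
	 */
	return -1;
}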
Re: [PATCH 5/5] KVM: nVMX: Enable nested posted interrupt processing.
On 20/01/2015 08:54, Wincy Van wrote:
> On Tue, Jan 20, 2015 at 3:34 PM, Paolo Bonzini wrote:
>>> Hence, we can disable local interrupts while delivering nested
>>> posted interrupts to make sure we are faster than the destination
>>> vcpu. This is a bit tricky but it can avoid that race. I think we do
>>> not need to add a spin lock here. RCU does not fit this case, since
>>> it will introduce a new race window between the rcu handler and
>>> handle_vmptr**.
>>>
>>> I am wondering whether there is a better way :)
>>
>> Why not just use a spinlock?
>
> Hmm.. it seems that using a spinlock is the best way.
> I think we can drop the local_irq_save and use a spinlock instead.
> I can send v2 if it is necessary, any more ideas?

Yes, please send v2 of this patch only.

Paolo