On 2015-04-07 08:29, Valentine Sinitsyn wrote:
> On 07.04.2015 11:23, Jan Kiszka wrote:
>> On 2015-04-07 08:19, Valentine Sinitsyn wrote:
>>> On 07.04.2015 11:13, Jan Kiszka wrote:
>>>> It is, at least 160 cycles with hot caches on an AMD A6-5200 APU, more
>>>> towards 600 if they are colder (added some usleep to each loop in the
>>>> test).
>>> Great, thanks. Could you post absolute numbers, i.e. how long do A and B
>>> take on your CPU?
>> A is around 1910
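For context, the cycle counts above come from a small timing loop; below is a minimal, self-contained sketch of that kind of RDTSC-based harness, not the code used in the thread. The iteration count, the NOP body and the optional usleep() are placeholders: VMLOAD/VMSAVE are privileged and would have to be timed from hypervisor context.

#include <stdio.h>
#include <unistd.h>
#include <x86intrin.h>

#define ITERATIONS 100000

static inline void timed_body(void)
{
    /* placeholder for the sequence under test, e.g. a vmload/vmsave pair
       when run from hypervisor context */
    __asm__ __volatile__("nop");
}

int main(void)
{
    unsigned long long total = 0;
    unsigned int aux;

    for (int i = 0; i < ITERATIONS; i++) {
        unsigned long long t0 = __rdtscp(&aux);   /* start stamp */
        timed_body();
        unsigned long long t1 = __rdtscp(&aux);   /* end stamp */
        total += t1 - t0;
        /* usleep(100); -- cools the caches, as mentioned in the thread */
    }
    printf("average cycles: %llu\n", total / ITERATIONS);
    return 0;
}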
On 2015-04-07 08:10, Valentine Sinitsyn wrote:
> Hi Jan,
>
> On 07.04.2015 10:43, Jan Kiszka wrote:
>> On 2015-04-05 19:12, Valentine Sinitsyn wrote:
>>> Hi Jan,
>>>
>>> On 05.04.2015 13:31, Jan Kiszka wrote:
>>>> studying the VM exit logic of Jailhouse, I was wondering when AMD's
>>>> vmload/vmsave can be avoided. Jailhouse as well as KVM currently use
>>>> these instructions unconditionally. However, I think
This patch introduces a new generic queue spinlock implementation that
can serve as an alternative to the default ticket spinlock. Compared
with the ticket spinlock, this queue spinlock should be almost as fair
as the ticket spinlock. It has about the same speed in single-thread
and it can be much
v14->v15:
- Incorporate PeterZ's v15 qspinlock patch and improve upon the PV
qspinlock code by dynamically allocating the hash table as well
as some other performance optimizations.
- Simplified the Xen PV qspinlock code as suggested by David Vrabel.
- Add benchmarking data for 3.19 ker
This patch is based on the code sent out by Peter Zijlstra as part
of his queue spinlock patch to provide a hashing function with open
addressing. The lfsr() function can be used to return a sequence of
numbers that cycle through all the bit patterns (2^n - 1) of a given
bit width n except the value
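For illustration only (this is not Peter Zijlstra's actual lfsr() helper): a fixed-width, maximal-length Fibonacci LFSR demonstrates the property described above, visiting every non-zero n-bit value exactly once per period. Here n = 4 and the feedback taps correspond to the primitive polynomial x^4 + x^3 + 1.

#include <stdio.h>
#include <stdint.h>

/* one step of a 4-bit maximal-length LFSR (taps: x^4 + x^3 + 1) */
static uint32_t lfsr4(uint32_t v)
{
    uint32_t bit = ((v >> 3) ^ (v >> 2)) & 1;   /* feedback from bits 3 and 2 */
    return ((v << 1) | bit) & 0xf;              /* shift left, keep 4 bits */
}

int main(void)
{
    uint32_t v = 1;

    /* prints every value 1..15 exactly once (in LFSR order), never 0 */
    for (int i = 0; i < 15; i++) {
        printf("%u ", v);
        v = lfsr4(v);
    }
    printf("\n");
    return 0;
}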
This is a preparatory patch that extracts the following two code
snippets to prepare for the next performance optimization patch:
1) the logic for the exchange of new and previous tail code words
into a new xchg_tail() function (see the sketch below);
2) the logic for clearing the pending bit and setting the loc
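A rough idea of what the extracted xchg_tail() does, sketched with C11 atomics rather than the kernel's primitives: atomically publish our tail code word in the upper bits of the lock word and return the previous tail so we know which waiter to queue behind. The field layout, names and memory ordering here are illustrative assumptions, not the actual patch.

#include <stdatomic.h>
#include <stdint.h>

struct qspinlock_sketch {
    _Atomic uint32_t val;   /* [31:16] tail, [15:8] pending, [7:0] locked */
};

/* returns the previous tail bits, already shifted into place */
static uint32_t xchg_tail_sketch(struct qspinlock_sketch *lock, uint32_t tail)
{
    uint32_t old = atomic_load_explicit(&lock->val, memory_order_relaxed);

    /* the real code can exchange just the tail halfword; for portability
       this sketch emulates that with a CAS loop on the whole word */
    for (;;) {
        uint32_t new = (old & 0x0000ffffu) | tail;  /* keep locked+pending */
        if (atomic_compare_exchange_weak_explicit(&lock->val, &old, new,
                                                  memory_order_release,
                                                  memory_order_relaxed))
            return old & 0xffff0000u;
    }
}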
From: Peter Zijlstra (Intel)
Because the qspinlock needs to touch a second cacheline (the per-cpu
mcs_nodes[]); add a pending bit and allow a single in-word spinner
before we punt to the second cacheline.
It is possible to observe the pending bit without the locked bit when
the last owner has ju
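A highly simplified, user-space sketch of the "single in-word spinner" idea, assuming an illustrative bit layout (LOCKED in the low byte, PENDING in the next); it ignores the tail/queue entirely and is not the patch's code. A second waiter parks in the pending bit and spins on the lock word itself, so it never touches the MCS node cacheline.

#include <stdatomic.h>
#include <stdint.h>

#define LOCKED  0x001u
#define PENDING 0x100u

static _Atomic uint32_t lockword;   /* toy lock word: no tail bits here */

/* returns 1 if the lock was taken in-word, 0 if the caller must queue */
static int lock_or_pend_sketch(void)
{
    uint32_t old = atomic_load_explicit(&lockword, memory_order_relaxed);

    /* uncontended fast path: grab the locked bit directly */
    if (old == 0 &&
        atomic_compare_exchange_strong_explicit(&lockword, &old, LOCKED,
                                                memory_order_acquire,
                                                memory_order_relaxed))
        return 1;

    if (old & ~LOCKED)
        return 0;                       /* someone already pends: go queue */

    /* become the single in-word spinner */
    old = atomic_fetch_or_explicit(&lockword, PENDING, memory_order_acquire);
    if (old & ~LOCKED)
        return 0;                       /* lost the race for the pending bit */

    /* we may already see PENDING without LOCKED here if the owner just
       released -- the case the changelog mentions */
    while (atomic_load_explicit(&lockword, memory_order_acquire) & LOCKED)
        ;                               /* wait for the owner to drop the lock */

    /* take the lock and clear pending in one word-sized store */
    atomic_store_explicit(&lockword, LOCKED, memory_order_relaxed);
    return 1;
}
/* unlock (not shown) would simply clear the LOCKED bit */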
From: Peter Zijlstra (Intel)
When we allow for a max NR_CPUS < 2^14 we can optimize the pending
wait-acquire and the xchg_tail() operations.
By growing the pending bit to a byte, we reduce the tail to 16bit.
This means we can use xchg16 for the tail part and do away with all
the repeated compxch
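The layout this enables can be pictured as a small union; the sketch below assumes a little-endian machine and uses illustrative names, not the kernel's. Because locked and pending each own a byte and the tail owns the upper halfword, the tail can be swapped with a single 16-bit exchange instead of a compare-and-swap loop over the whole word.

#include <stdint.h>

/* 32-bit lock word when NR_CPUS < 2^14 (little-endian view, names illustrative) */
union qword_sketch {
    uint32_t val;                    /* the whole lock word                     */
    struct {
        uint8_t  locked;             /* bits  0- 7: lock holder present         */
        uint8_t  pending;            /* bits  8-15: single in-word spinner      */
    };
    struct {
        uint16_t locked_pending;     /* bits  0-15: both of the above           */
        uint16_t tail;               /* bits 16-31: queued CPU + MCS node index */
    };
};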
Provide a separate (second) version of the spin_lock_slowpath for
paravirt along with a special unlock path.
The second slowpath is generated by adding a few pv hooks to the
normal slowpath, but where those will compile away for the native
case, they expand into special wait/wake code for the pv v
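The "compile away" trick can be sketched as follows; the names and the config symbol are placeholders, not the kernel's exact hooks. In a native build the wait/kick hooks are empty static inlines, so the compiler drops them and the slowpath stays the bare-metal one; a paravirt build supplies real halt/wake implementations behind the same calls.

#ifdef CONFIG_PV_SPINLOCK_SKETCH            /* placeholder config symbol */
void pv_wait_sketch(unsigned char *ptr, unsigned char val); /* halt until *ptr != val */
void pv_kick_sketch(int cpu);                               /* wake a halted vCPU */
#else
static inline void pv_wait_sketch(unsigned char *ptr, unsigned char val) { }
static inline void pv_kick_sketch(int cpu) { }
#endif

/* one slowpath source serves both builds */
static void wait_for_owner_sketch(unsigned char *locked_byte)
{
    for (int spin = 0; spin < 1000; spin++)          /* spin a while first */
        if (*(volatile unsigned char *)locked_byte == 0)
            return;
    pv_wait_sketch(locked_byte, 1);   /* no-op on native; HLT/hypercall on PV */
}
/* the unlock side would call pv_kick_sketch(cpu) for a waiter it halted */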
From: Peter Zijlstra (Intel)
When we detect a hypervisor (!paravirt, see qspinlock paravirt support
patches), revert to a simple test-and-set lock to avoid the horrors
of queue preemption.
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Waiman Long
---
arch/x86/include/asm/qspinlock.h |
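The fallback itself is essentially the classic test-and-set spinlock; a stand-alone sketch using C11 atomics is below (illustrative, not the actual kernel code). Whether this path is taken would be decided once, when a hypervisor is detected and no paravirt interface is available.

#include <stdatomic.h>
#include <stdbool.h>

static _Atomic bool tas_lock;

static void tas_spin_lock_sketch(void)
{
    for (;;) {
        /* test-and-test-and-set: read first to avoid hammering the line */
        if (!atomic_load_explicit(&tas_lock, memory_order_relaxed) &&
            !atomic_exchange_explicit(&tas_lock, true, memory_order_acquire))
            return;
    }
}

static void tas_spin_unlock_sketch(void)
{
    atomic_store_explicit(&tas_lock, false, memory_order_release);
}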
Before this patch, a CPU may have been kicked twice before getting
the lock - once before it becomes queue head and once before it gets
the lock. All this CPU kicking and halting (VMEXIT) can be expensive
and slow down system performance, especially in an overcommitted guest.
This patch adds a new
The current code for PV lock hash table processing will panic the
system if pv_hash_find() can't find the desired hash bucket. However,
there is no check for the case where more than one entry exists for a
given lock, which should never happen.
This patch adds a pv_hash_check_duplicate() function to do th
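To illustrate the kind of debug check being described, here is a hedged sketch of an open-addressed hash keyed by lock address, plus a helper that walks the table and complains if one lock ever owns two entries. Sizes, names and the hashing itself are placeholders, not the PV qspinlock code.

#include <assert.h>
#include <stddef.h>
#include <stdio.h>

#define HASH_SIZE 64    /* power of two so the mask below works */

struct pv_hash_entry_sketch {
    void *lock;     /* NULL means the slot is free */
    int   cpu;      /* waiter that parked on this lock */
};

static struct pv_hash_entry_sketch hashtab[HASH_SIZE];

static size_t hash_sketch(void *lock)
{
    return ((size_t)lock >> 4) & (HASH_SIZE - 1);
}

static void pv_hash_insert_sketch(void *lock, int cpu)
{
    size_t h = hash_sketch(lock);

    /* open addressing: take the next free slot after the home bucket */
    for (size_t i = 0; i < HASH_SIZE; i++, h = (h + 1) & (HASH_SIZE - 1)) {
        if (!hashtab[h].lock) {
            hashtab[h].lock = lock;
            hashtab[h].cpu = cpu;
            return;
        }
    }
    assert(0 && "hash table full");
}

/* the duplicate check: a given lock must never appear in two slots */
static void pv_hash_check_duplicate_sketch(void *lock)
{
    int found = 0;

    for (size_t h = 0; h < HASH_SIZE; h++)
        if (hashtab[h].lock == lock)
            found++;

    if (found > 1)
        fprintf(stderr, "duplicate hash entries for lock %p\n", lock);
}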
From: Peter Zijlstra (Intel)
We use the regular paravirt call patching to switch between:
  native_queue_spin_lock_slowpath()   __pv_queue_spin_lock_slowpath()
  native_queue_spin_unlock()          __pv_queue_spin_unlock()
We use a callee saved call for the unlock function which reduces the
This patch adds the necessary Xen specific code to allow Xen to
support the CPU halting and kicking operations needed by the queue
spinlock PV code.
Signed-off-by: Waiman Long
---
arch/x86/xen/spinlock.c | 63 ---
kernel/Kconfig.locks    |  2 +-
2
Currently, atomic_cmpxchg() is used to get the lock. However, this
is not really necessary if there is more than one task in the queue
and the queue head doesn't need to reset the tail code. For that case,
a simple write to set the lock bit is enough as the queue head will
be the only one eligible to
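A rough sketch of that distinction, using C11 atomics and an assumed layout (tail in the upper halfword, locked in the low byte); it ignores the pending bit and is not the patch's code. Ordering is simplified on the assumption that the head already observed the lock as free with acquire semantics.

#include <stdatomic.h>
#include <stdint.h>

struct qsketch {
    union {
        _Atomic uint32_t val;           /* whole word, for cmpxchg          */
        struct {
            _Atomic uint8_t  locked;    /* bits 0-7                         */
            uint8_t          pending;   /* bit 8 (ignored in this sketch)   */
            uint16_t         tail;      /* bits 16-31                       */
        };
    };
};

/* called by the queue head once it has seen the locked byte go to zero */
static void head_take_lock_sketch(struct qsketch *lock, uint32_t my_tail)
{
    uint32_t old = atomic_load_explicit(&lock->val, memory_order_relaxed);

    while ((old & 0xffff0000u) == my_tail) {
        /* we are the last waiter: must set the lock bit AND clear the tail
           atomically, so a cmpxchg on the whole word is still needed */
        if (atomic_compare_exchange_weak_explicit(&lock->val, &old, 1,
                                                  memory_order_relaxed,
                                                  memory_order_relaxed))
            return;
        /* cmpxchg failure refreshed 'old'; if the tail changed, fall out */
    }

    /* another CPU is queued behind us, so the tail must survive: a plain
       one-byte store of the lock bit is sufficient -- no cmpxchg */
    atomic_store_explicit(&lock->locked, 1, memory_order_relaxed);
}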
This patch adds the necessary KVM specific code to allow KVM to
support the CPU halting and kicking operations needed by the queue
spinlock PV code.
Signed-off-by: Waiman Long
---
arch/x86/kernel/kvm.c | 43 +++
kernel/Kconfig.locks  |  2 +-
2 files c
This patch makes the necessary changes at the x86 architecture-specific
layer to enable the use of queue spinlock for x86-64. As x86-32 machines
are typically not multi-socket, the benefit of queue spinlock may not be
apparent, so it is not enabled there.
Currently, there is some incompatibi
In the pv_scan_next() function, the slow cmpxchg atomic operation is
performed even if the other CPU is not even close to being halted. This
extra cmpxchg can harm slowpath performance.
This patch introduces the new mayhalt flag to indicate if the other
spinning CPU is close to being halted or not
On 06/04/2015 22:07, Andy Lutomirski wrote:
> On 04/02/2015 11:59 AM, Andy Lutomirski wrote:
>> On Thu, Apr 2, 2015 at 11:44 AM, Radim Krčmář wrote:
>>> If we were migrated right after __getcpu, but before reading the
>>> migration_count, we wouldn't notice that we read TSC of a different
>>> VCPU, nor that KVM's bug made pvti invalid, as only migration_count
>>> on source VCPU is increased.
This is a note to let you know that I have just added a patch titled
KVM: MIPS: Fix trace event to save PC directly
to the linux-3.13.y-queue branch of the 3.13.y-ckt extended stable tree
which can be found at:
http://kernel.ubuntu.com/git?p=ubuntu/linux.git;a=shortlog;h=refs/heads/linux-
On Thu, Apr 02, 2015 at 08:44:23PM +0200, Radim Krčmář wrote:
> If we were migrated right after __getcpu, but before reading the
> migration_count, we wouldn't notice that we read TSC of a different
> VCPU, nor that KVM's bug made pvti invalid, as only migration_count
> on source VCPU is increased.
On 04/02/2015 11:59 AM, Andy Lutomirski wrote:
> On Thu, Apr 2, 2015 at 11:44 AM, Radim Krčmář wrote:
>> If we were migrated right after __getcpu, but before reading the
>> migration_count, we wouldn't notice that we read TSC of a different
>> VCPU, nor that KVM's bug made pvti invalid, as only migration_count
>> on source VCPU is increased.
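To make the window easier to see, here is a deliberately racy, self-contained sketch of the sequence being discussed; sched_getcpu() and __rdtsc() stand in for __getcpu and the TSC read, and pvti[]/migration_count are placeholder structures, not the real vDSO data. If the task migrates right after the CPU lookup, the source vCPU's counter has already been bumped by the time it is first read, so the recheck compares identical values and never retries, even though the TSC was sampled on a different vCPU.

#define _GNU_SOURCE
#include <sched.h>          /* sched_getcpu(), standing in for __getcpu */
#include <stdint.h>
#include <x86intrin.h>      /* __rdtsc() */

#define MAX_VCPUS 4096      /* toy bound, only for this example */

struct pvti_sketch {
    volatile uint32_t migration_count;
    /* ... per-vCPU clock parameters would live here ... */
};

static struct pvti_sketch pvti[MAX_VCPUS];   /* placeholder per-vCPU time info */

static uint64_t racy_gettime_sketch(void)
{
    unsigned cpu;
    uint32_t count;
    uint64_t tsc;

    do {
        cpu = (unsigned)sched_getcpu() % MAX_VCPUS;   /* step 1: which vCPU?  */
        /* a migration landing here already bumped pvti[cpu].migration_count
           (cpu still names the source vCPU), so ...                          */
        count = pvti[cpu].migration_count;            /* step 2: snapshot     */
        tsc = __rdtsc();                              /* step 3: maybe on another vCPU */
        /* ... this recheck sees no change and the stale read is accepted     */
    } while (pvti[cpu].migration_count != count);

    return tsc;
}

int main(void)
{
    return (int)(racy_gettime_sketch() & 0xff);
}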