Re: [PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest

2014-03-14 Thread Peter Zijlstra
On Thu, Mar 13, 2014 at 04:05:19PM -0400, Waiman Long wrote:
> On 03/13/2014 11:15 AM, Peter Zijlstra wrote:
> > On Wed, Mar 12, 2014 at 02:54:52PM -0400, Waiman Long wrote:
> > > +static inline void arch_spin_lock(struct qspinlock *lock)
> > > +{
> > > +	if (static_key_false(&paravirt_unfairlocks_enabled))

Re: [PATCH RFC v6 10/11] pvqspinlock, x86: Enable qspinlock PV support for KVM

2014-03-14 Thread Paolo Bonzini
On 13/03/2014 20:13, Waiman Long wrote:
> > This should also disable the unfair path.
> >
> > Paolo
>
> The unfair lock uses a different jump label and does not require any
> special PV ops. There is a separate init function for that.

Yeah, what I mean is that the patches that enable paravirtualization

Re: [PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest

2014-03-14 Thread Paolo Bonzini
On 14/03/2014 09:30, Peter Zijlstra wrote:
> Take the situation of 3 (v)CPUs where cpu0 holds the lock but is
> preempted. cpu1 queues, cpu2 queues. Then cpu1 gets preempted, after
> which cpu0 gets back online. The simple test-and-set lock will now let
> cpu2 acquire. Your queue however will just

Re: [PATCH RFC v6 09/11] pvqspinlock, x86: Add qspinlock para-virtualization support

2014-03-14 Thread Paolo Bonzini
On 13/03/2014 20:49, Waiman Long wrote:
> On 03/13/2014 09:57 AM, Paolo Bonzini wrote:
> > On 13/03/2014 12:21, David Vrabel wrote:
> > > On 12/03/14 18:54, Waiman Long wrote:
> > > > This patch adds para-virtualization support to the queue spinlock
> > > > in the same way as was done in the PV ticket

[PATCH] virtio-blk: make the queue depth configurable

2014-03-14 Thread Theodore Ts'o
The current virtio block driver sets a queue depth of 64. With a sufficiently fast device, using a queue depth of 256 can double the IOPS which can be sustained. So make the queue depth something which can be set at module load time or via a kernel boot-time parameter.

Signed-off-by: "Theodore Ts'o" C

Re: [PATCH] virtio-blk: make the queue depth configurable

2014-03-14 Thread Joe Perches
On Fri, 2014-03-14 at 13:31 -0400, Theodore Ts'o wrote:
> The current virtio block sets a queue depth of 64. With a
> sufficiently fast device, using a queue depth of 256 can double the
> IOPS which can be sustained. So make the queue depth something which
> can be set at module load time or via

Re: [PATCH] virtio-blk: make the queue depth configurable

2014-03-14 Thread Theodore Ts'o
On Fri, Mar 14, 2014 at 10:38:40AM -0700, Joe Perches wrote:
> > +static int queue_depth = 64;
> > +module_param(queue_depth, int, 444);
>
> 444? Really Ted?

Oops, *blush*. Thanks for catching that.

- Ted

Re: [PATCH] perf/x86/intel: Use rdmsrl_safe when initializing RAPL PMU.

2014-03-14 Thread Venkatesh Srinivas
On Fri, Mar 14, 2014 at 10:57:58AM -0600, David Ahern wrote:
> On 3/14/14, 10:17 AM, Andi Kleen wrote:
> > The Intel SDM section for RDMSR seems to say: "Specifying a reserved
> > or unimplemented MSR address in ECX will also cause a general
> > protection exception".

From a guest's perspective, MSR_RAPL_POW

[PATCH] virtio-blk: Initialize blkqueue depth from virtqueue size

2014-03-14 Thread Venkatesh Srinivas
virtio-blk sets the default queue depth to 64 requests, which is insufficient for high-IOPS devices. Instead, set the blk-queue depth to the device's virtqueue depth divided by two (each I/O requires at least two VQ entries).

Signed-off-by: Venkatesh Srinivas
---
 drivers/block/virtio_blk.c | 2 +-

[PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor

2014-03-14 Thread Theodore Ts'o
The current virtio block sets a queue depth of 64, which is insufficient for very fast devices. It has been demonstrated that with a high-IOPS device, using a queue depth of 256 can double the IOPS which can be sustained. As suggested by Venkatesh Srinivas, set the queue depth by default to be on