On Thu, Mar 13, 2014 at 04:05:19PM -0400, Waiman Long wrote:
> On 03/13/2014 11:15 AM, Peter Zijlstra wrote:
> >On Wed, Mar 12, 2014 at 02:54:52PM -0400, Waiman Long wrote:
> >>+static inline void arch_spin_lock(struct qspinlock *lock)
> >>+{
> >>+ if (static_key_false(&paravirt_unfairlocks_enabled))
On 13/03/2014 20:13, Waiman Long wrote:
This should also disable the unfair path.
Paolo
The unfair lock uses a different jump label and does not require any
special PV ops. There is a separate init function for that.
Yeah, what I mean is that the patches that enable paravirtualization
should also disable the unfair path.
On 14/03/2014 09:30, Peter Zijlstra wrote:
Take the situation of 3 (v)CPUs where cpu0 holds the lock but is
preempted. cpu1 queues, cpu2 queues. Then cpu1 gets preempted, after
which cpu0 gets back online.
The simple test-and-set lock will now let cpu2 acquire. Your queue
however will just sit there: cpu2 cannot take the lock until the
preempted cpu1 has run and acquired it first.
On 13/03/2014 20:49, Waiman Long wrote:
> On 03/13/2014 09:57 AM, Paolo Bonzini wrote:
>> On 13/03/2014 12:21, David Vrabel wrote:
>>> On 12/03/14 18:54, Waiman Long wrote:
This patch adds para-virtualization support to the queue spinlock in
the same way as was done in the PV ticketlock code.
The current virtio block sets a queue depth of 64. With a
sufficiently fast device, using a queue depth of 256 can double the
IOPS which can be sustained. So make the queue depth something which
can be set at module load time or via a kernel boot-time parameter.
Signed-off-by: "Theodore Ts'o"
On Fri, 2014-03-14 at 13:31 -0400, Theodore Ts'o wrote:
> The current virtio block sets a queue depth of 64. With a
> sufficiently fast device, using a queue depth of 256 can double the
> IOPS which can be sustained. So make the queue depth something which
> can be set at module load time or via a kernel boot-time parameter.
On Fri, Mar 14, 2014 at 10:38:40AM -0700, Joe Perches wrote:
> > +static int queue_depth = 64;
> > +module_param(queue_depth, int, 444);
>
> 444? Really Ted?
Oops, *blush*. Thanks for catching that.
- Ted
On Fri, Mar 14, 2014 at 10:57:58AM -0600, David Ahern wrote:
On 3/14/14, 10:17 AM, Andi Kleen wrote:
The Intel ISR section for RDMSR seems to say: "Specifying a reserved or
unimplemented MSR address in ECX will also cause a general protection
exception".
From a guest's perspective, MSR_RAPL_POW
virtio-blk set the default queue depth to 64 requests, which was
insufficient for high-IOPS devices. Instead set the blk-queue depth to
the device's virtqueue depth divided by two (each I/O requires at least
two VQ entries).
Signed-off-by: Venkatesh Srinivas
---
drivers/block/virtio_blk.c | 2 +-
The current virtio block sets a queue depth of 64, which is
insufficient for very fast devices. It has been demonstrated that
with a high IOPS device, using a queue depth of 256 can double the
IOPS which can be sustained.
As suggested by Venkatesh Srinivas, set the queue depth by default to
be one half of the device's virtqueue depth.