Re: [PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor

2014-04-03 Thread Rusty Russell
Stefan Hajnoczi  writes:
> On Tue, Apr 1, 2014 at 4:27 AM, Theodore Ts'o  wrote:
>> On Mon, Mar 31, 2014 at 02:22:50PM +1030, Rusty Russell wrote:
>>>
>>> It's head of my virtio-next tree.
>>
>> Hey Rusty,
>>
>> While we have your attention --- what's your opinion about adding TRIM
>> support to virtio-blk.  I understand that you're starting an OASIS
>> standardization process for virtio --- what does that mean vis-a-vis a
>> patch to plumb discard support through virtio-blk?
>
> virtio-scsi already supports discard.  But maybe you cannot switch
> away from virtio-blk?
>
> If you need to add discard to virtio-blk then it could be added to the
> standard.  The standards text is kept in an svn repository here:
> https://tools.oasis-open.org/version-control/browse/wsvn/virtio/

It would be trivial to add, and I wouldn't be completely opposed, but we
generally point to virtio-scsi when people want actual features.

Cheers,
Rusty.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor

2014-04-01 Thread Stefan Hajnoczi
On Tue, Apr 1, 2014 at 4:27 AM, Theodore Ts'o  wrote:
> On Mon, Mar 31, 2014 at 02:22:50PM +1030, Rusty Russell wrote:
>>
>> It's head of my virtio-next tree.
>
> Hey Rusty,
>
> While we have your attention --- what's your opinion about adding TRIM
> support to virtio-blk.  I understand that you're starting an OASIS
> standardization process for virtio --- what does that mean vis-a-vis a
> patch to plumb discard support through virtio-blk?

virtio-scsi already supports discard.  But maybe you cannot switch
away from virtio-blk?

If you need to add discard to virtio-blk then it could be added to the
standard.  The standards text is kept in an svn repository here:
https://tools.oasis-open.org/version-control/browse/wsvn/virtio/

Stefan


Re: [PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor

2014-03-31 Thread Theodore Ts'o
On Mon, Mar 31, 2014 at 02:22:50PM +1030, Rusty Russell wrote:
> 
> It's head of my virtio-next tree.

Hey Rusty,

While we have your attention --- what's your opinion about adding TRIM
support to virtio-blk.  I understand that you're starting an OASIS
standardization process for virtio --- what does that mean vis-a-vis a
patch to plumb discard support through virtio-blk?

Thanks!

- Ted


Re: [PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor

2014-03-30 Thread Rusty Russell
Venkatesh Srinivas  writes:
> On Wed, Mar 19, 2014 at 10:48 AM, Venkatesh Srinivas
>  wrote:
>>> And I rewrote it substantially, mainly to take
>>> VIRTIO_RING_F_INDIRECT_DESC into account.
>>>
>>> As QEMU sets the vq size for PCI to 128, Venkatesh's patch wouldn't
>>> have made a change.  This version does (since QEMU also offers
>>> VIRTIO_RING_F_INDIRECT_DESC).
>>
>> That the divide-by-2 produced the same queue depth as the prior
>> computation in QEMU was deliberate -- but raising it to 128 seems
>> pretty reasonable.
>>
>> Signed-off-by: Venkatesh Srinivas 
>
> Soft ping about this patch.

It's head of my virtio-next tree.

Cheers,
Rusty.


Re: [PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor

2014-03-25 Thread Venkatesh Srinivas
On Wed, Mar 19, 2014 at 10:48 AM, Venkatesh Srinivas
 wrote:
>> And I rewrote it substantially, mainly to take
>> VIRTIO_RING_F_INDIRECT_DESC into account.
>>
>> As QEMU sets the vq size for PCI to 128, Venkatesh's patch wouldn't
>> have made a change.  This version does (since QEMU also offers
>> VIRTIO_RING_F_INDIRECT_DESC).
>
> That the divide-by-2 produced the same queue depth as the prior
> computation in QEMU was deliberate -- but raising it to 128 seems
> pretty reasonable.
>
> Signed-off-by: Venkatesh Srinivas 

Soft ping about this patch.

Thanks,
-- vs;


Re: [PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor

2014-03-19 Thread Venkatesh Srinivas
> And I rewrote it substantially, mainly to take
> VIRTIO_RING_F_INDIRECT_DESC into account.
>
> As QEMU sets the vq size for PCI to 128, Venkatesh's patch wouldn't
> have made a change.  This version does (since QEMU also offers
> VIRTIO_RING_F_INDIRECT_DESC).

That the divide-by-2 produced the same queue depth as the prior
computation in QEMU was deliberate -- but raising it to 128 seems
pretty reasonable.

Signed-off-by: Venkatesh Srinivas 

-- vs;


Re: [PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor

2014-03-19 Thread Rusty Russell
ty...@mit.edu writes:
> On Mon, Mar 17, 2014 at 11:12:15AM +1030, Rusty Russell wrote:
>> 
>> Note that with indirect descriptors (which is supported by Almost
>> Everyone), we can actually use the full index, so this value is a bit
>> pessimistic.  But it's OK as a starting point.
>
> So is this something that can go upstream with perhaps a slight
> adjustment in the commit description?

Well, I rewrote it again, see below.

> Do you think we need to be able
> to dynamically adjust the queue depth after the module has been loaded
> or the kernel has been booted?

That would be nice, sure, but...

> If so, a hint from anyone about the best
> way to do that would be much appreciated.

... I share your wonder and mystery at the ways of the block layer.

Subject: virtio-blk: base queue-depth on virtqueue ringsize or module param

Venkatesh spake thus:

  virtio-blk set the default queue depth to 64 requests, which was
  insufficient for high-IOPS devices. Instead set the blk-queue depth to
  the device's virtqueue depth divided by two (each I/O requires at least
  two VQ entries).

But behold, Ted added a module parameter:

  Also allow the queue depth to be something which can be set at module
  load time or via a kernel boot-time parameter, for
  testing/benchmarking purposes.

And I rewrote it substantially, mainly to take
VIRTIO_RING_F_INDIRECT_DESC into account.

As QEMU sets the vq size for PCI to 128, Venkatesh's patch wouldn't
have made a change.  This version does (since QEMU also offers
VIRTIO_RING_F_INDIRECT_DESC).

Inspired-by: "Theodore Ts'o" 
Based-on-the-true-story-of: Venkatesh Srinivas 
Cc: "Michael S. Tsirkin" 
Cc: virtio-...@lists.oasis-open.org
Cc: virtualizat...@lists.linux-foundation.org
Cc: Frank Swiderski 
Signed-off-by: Rusty Russell 

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index a2db9ed288f2..c101bbc72095 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -491,10 +491,11 @@ static struct blk_mq_ops virtio_mq_ops = {
 static struct blk_mq_reg virtio_mq_reg = {
	.ops		= &virtio_mq_ops,
.nr_hw_queues   = 1,
-   .queue_depth= 64,
+   .queue_depth= 0, /* Set in virtblk_probe */
.numa_node  = NUMA_NO_NODE,
.flags  = BLK_MQ_F_SHOULD_MERGE,
 };
+module_param_named(queue_depth, virtio_mq_reg.queue_depth, uint, 0444);
 
 static void virtblk_init_vbr(void *data, struct blk_mq_hw_ctx *hctx,
 struct request *rq, unsigned int nr)
@@ -558,6 +559,13 @@ static int virtblk_probe(struct virtio_device *vdev)
goto out_free_vq;
}
 
+   /* Default queue sizing is to fill the ring. */
+   if (!virtio_mq_reg.queue_depth) {
+   virtio_mq_reg.queue_depth = vblk->vq->num_free;
+   /* ... but without indirect descs, we use 2 descs per req */
+   if (!virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC))
+   virtio_mq_reg.queue_depth /= 2;
+   }
virtio_mq_reg.cmd_size =
sizeof(struct virtblk_req) +
sizeof(struct scatterlist) * sg_elems;


Re: [PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor

2014-03-16 Thread tytso
On Mon, Mar 17, 2014 at 11:12:15AM +1030, Rusty Russell wrote:
> 
> Note that with indirect descriptors (which is supported by Almost
> Everyone), we can actually use the full index, so this value is a bit
> pessimistic.  But it's OK as a starting point.

So is this something that can go upstream with perhaps a slight
adjustment in the commit description?  Do you think we need to be able
to dynamically adjust the queue depth after the module has been loaded
or the kernel has been booted?  If so, a hint from anyone about the best
way to do that would be much appreciated.

Thanks,

- Ted


Re: [PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor

2014-03-16 Thread Rusty Russell
Theodore Ts'o  writes:
> The current virtio block sets a queue depth of 64, which is
> insufficient for very fast devices.  It has been demonstrated that
> with a high IOPS device, using a queue depth of 256 can double the
> IOPS which can be sustained.
>
> As suggested by Venkatesh Srinivas, set the queue depth by default to
> be one half of the device's virtqueue, which is the maximum queue
> depth that can be supported by the channel to the host OS (each I/O
> request requires at least two VQ entries).
>
> Also allow the queue depth to be something which can be set at module
> load time or via a kernel boot-time parameter, for
> testing/benchmarking purposes.

Note that with indirect descriptors (which is supported by Almost
Everyone), we can actually use the full index, so this value is a bit
pessimistic.  But it's OK as a starting point.

Cheers,
Rusty.


Re: [PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor

2014-03-15 Thread Theodore Ts'o
On Sat, Mar 15, 2014 at 06:57:23AM -0700, Christoph Hellwig wrote:
> I don't think this should be a module parameter.  The default sizing
> should be based on the parameters of the actual virtqueue, and if we
> want to allow tuning it, it should be by a sysfs attribute, preferably
> using the same semantics as SCSI.

I wanted that too, but looking at the multiqueue code, it wasn't at all
obvious how to safely adjust the queue depth once the virtio-blk
device driver is initialized and becomes active.  There are all sorts
of data structures, including bitmaps, etc., that would have to be
resized, and I decided it would be too difficult / risky for me to make
it dynamically resizable.

So I settled on a module parameter, thinking it would mostly be used
only by testers / benchmarkers.

Can someone suggest a way to do a dynamic resizing of the virtio-blk
queue depth easily / safely?

- Ted


Re: [PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor

2014-03-15 Thread Christoph Hellwig
On Fri, Mar 14, 2014 at 11:34:31PM -0400, Theodore Ts'o wrote:
> The current virtio block sets a queue depth of 64, which is
> insufficient for very fast devices.  It has been demonstrated that
> with a high IOPS device, using a queue depth of 256 can double the
> IOPS which can be sustained.
> 
> As suggested by Venkatesh Srinivas, set the queue depth by default to
> be one half of the device's virtqueue, which is the maximum queue
> depth that can be supported by the channel to the host OS (each I/O
> request requires at least two VQ entries).

I don't think this should be a module parameter.  The default sizing
should be based on the parameters of the actual virtqueue, and if we
want to allow tuning it, it should be by a sysfs attribute, preferably
using the same semantics as SCSI.



Re: [PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor

2014-03-15 Thread Theodore Ts'o
On Sat, Mar 15, 2014 at 06:57:01AM -0400, Konrad Rzeszutek Wilk wrote:
> >+pr_info("%s: using queue depth %d\n", vblk->disk->disk_name,
> >+virtio_mq_reg.queue_depth);
> 
> Isn't that visible from sysfs?

As near as I can tell, it's not.  I haven't been able to find anything
that either represents this value, or can be calculated from this
value.  Maybe I missed something?

- Ted


Re: [PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor

2014-03-15 Thread Konrad Rzeszutek Wilk
On March 14, 2014 11:34:31 PM EDT, Theodore Ts'o  wrote:
>The current virtio block sets a queue depth of 64, which is
>insufficient for very fast devices.  It has been demonstrated that
>with a high IOPS device, using a queue depth of 256 can double the
>IOPS which can be sustained.
>
>As suggested by Venkatesh Srinivas, set the queue depth by default to
>be one half of the device's virtqueue, which is the maximum queue
>depth that can be supported by the channel to the host OS (each I/O
>request requires at least two VQ entries).
>
>Also allow the queue depth to be something which can be set at module
>load time or via a kernel boot-time parameter, for
>testing/benchmarking purposes.
>
>Signed-off-by: "Theodore Ts'o" 
>Signed-off-by: Venkatesh Srinivas 
>Cc: Rusty Russell 
>Cc: "Michael S. Tsirkin" 
>Cc: virtio-...@lists.oasis-open.org
>Cc: virtualizat...@lists.linux-foundation.org
>Cc: Frank Swiderski 
>---
>
>This is a combination of my patch and Venkatesh's patch.  I agree that
>setting the default automatically is better than requiring the user to
>set the value by hand.
>
> drivers/block/virtio_blk.c | 10 --
> 1 file changed, 8 insertions(+), 2 deletions(-)
>
>diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
>index 6a680d4..0f70c01 100644
>--- a/drivers/block/virtio_blk.c
>+++ b/drivers/block/virtio_blk.c
>@@ -481,6 +481,9 @@ static struct blk_mq_ops virtio_mq_ops = {
>   .free_hctx  = blk_mq_free_single_hw_queue,
> };
> 
>+static int queue_depth = -1;
>+module_param(queue_depth, int, 0444);

?

>+
> static struct blk_mq_reg virtio_mq_reg = {
>	.ops		= &virtio_mq_ops,
>   .nr_hw_queues   = 1,
>@@ -551,9 +554,14 @@ static int virtblk_probe(struct virtio_device
>*vdev)
>   goto out_free_vq;
>   }
> 
>+  virtio_mq_reg.queue_depth = queue_depth > 0 ? queue_depth :
>+  (vblk->vq->num_free / 2);
>   virtio_mq_reg.cmd_size =
>   sizeof(struct virtblk_req) +
>   sizeof(struct scatterlist) * sg_elems;
>+  virtblk_name_format("vd", index, vblk->disk->disk_name,
>DISK_NAME_LEN);
>+  pr_info("%s: using queue depth %d\n", vblk->disk->disk_name,
>+  virtio_mq_reg.queue_depth);

Isn't that visible from sysfs?
> 
>	q = vblk->disk->queue = blk_mq_init_queue(&virtio_mq_reg, vblk);
>   if (!q) {
>@@ -565,8 +573,6 @@ static int virtblk_probe(struct virtio_device
>*vdev)
> 
>   q->queuedata = vblk;
> 
>-  virtblk_name_format("vd", index, vblk->disk->disk_name,
>DISK_NAME_LEN);
>-
>   vblk->disk->major = major;
>   vblk->disk->first_minor = index_to_minor(index);
>   vblk->disk->private_data = vblk;



