On 01/11/2010 03:42 PM, Christoph Hellwig wrote:
On Mon, Jan 11, 2010 at 10:30:53AM +0200, Avi Kivity wrote:
The patch has the potential to reduce performance on volumes with multiple
spindles. Consider two processes issuing sequential reads into a RAID
array. With this patch, the reads will be executed sequentially rather
than in parallel, so I think a follow-on patch to make the minimum queue
depth a parameter (set by the guest? the host?) would be helpful.
Let's think about the life cycle of I/O requests a bit.
We have an idle virtqueue (aka one virtio-blk device). The first (read)
request comes in, we get the virtio notify from the guest, which calls
into virtio_blk_handle_output. With the new code we now disable the
notify once we start processing the first request. If the second
request hits the queue before we call into virtio_blk_get_request
the second time, we're fine even with the new code, as we keep picking it
up. If, however, it hits after we leave virtio_blk_handle_output but
before we complete the first request, we do indeed introduce additional
latency.
So instead of disabling notify while requests are active, we might want
to disable it only while we are inside virtio_blk_handle_output.
Something like the following minimally tested patch:
Index: qemu/hw/virtio-blk.c
===================================================================
--- qemu.orig/hw/virtio-blk.c 2010-01-11 14:28:42.896010503 +0100
+++ qemu/hw/virtio-blk.c 2010-01-11 14:40:13.535256353 +0100
@@ -328,7 +328,15 @@ static void virtio_blk_handle_output(Vir
int num_writes = 0;
BlockDriverState *old_bs = NULL;
+ /*
+ * While we are processing requests there is no need to get further
+ * notifications from the guest - it'll just burn cpu cycles doing
+ * useless context switches into the host.
+ */
+ virtio_queue_set_notification(s->vq, 0);
+
while ((req = virtio_blk_get_request(s))) {
+handle_request:
if (req->elem.out_num < 1 || req->elem.in_num < 1) {
fprintf(stderr, "virtio-blk missing headers\n");
exit(1);
@@ -358,6 +366,18 @@ static void virtio_blk_handle_output(Vir
}
}
+ /*
+ * Once we're done processing all pending requests re-enable the queue
+ * notification. If there's an entry pending after we enabled
+ * notification again we hit a race and need to process it before
+ * returning.
+ */
+ virtio_queue_set_notification(s->vq, 1);
+ req = virtio_blk_get_request(s);
+ if (req) {
+ goto handle_request;
+ }
+
I don't think this will have much effect. First, the time spent in
virtio_blk_handle_output() is a small fraction of total guest time, so
the probability of the guest hitting the notifications-disabled window is
low. Second, while we're in that function, the vcpu that kicked us is
stalled, and other vcpus are likely to be locked out of the queue by the
guest.
--
error compiling committee.c: too many arguments to function