Re: bad virtio disk performance

2009-04-28 Thread Lucas Nussbaum
On 28/04/09 at 14:55 +0300, Avi Kivity wrote:
> Lucas Nussbaum wrote:
>> On 28/04/09 at 12:56 +0200, Lucas Nussbaum wrote:
>>   
>>> I then upgraded to kvm-85 (both the host kernel modules and the
>>> userspace), and re-ran the tests. Performance is better (about 85 MB/s),
>>> but still very far from the non-virtio case.
>>> 
>>
>> I forgot to mention that the strangest result I got was the total amount
>> of write blocks queued (as measured by blktrace). I was writing a 1 GB
>> file to disk, which resulted in:
>>
>> - 1 GB of write blocks queued without virtio
>> - ~1.7 GB of write blocks queued with virtio on kvm 84
>> - ~1.4 GB of write blocks queued with virtio on kvm 85
>>
>> I don't understand why kvm with virtio writes "more blocks than
>> necessary", but that could explain the performance difference.
>
> Are these numbers repeatable?

The fact that more data than necessary is written to disk with virtio is
reproducible. The exact amount of additional data varies between runs.

> Try increasing the virtio queue depth. See the call to  
> virtio_add_queue() in qemu/hw/virtio-blk.c.

It doesn't seem to change the performance I get (though since the
performance itself varies a lot between runs, it's difficult to tell).

Some example data points, writing a 500 MiB file:
1st run, with virtio queue length = 512
  - total size of write req queued: 874568 KiB
  - 55 MB/s
2nd run, with virtio queue length = 128
  - total size of write req queued: 694328 KiB
  - 86 MB/s
-- 
| Lucas Nussbaum
| lu...@lucas-nussbaum.net   http://www.lucas-nussbaum.net/ |
| jabber: lu...@nussbaum.fr GPG: 1024D/023B3F4F |


Re: bad virtio disk performance

2009-04-28 Thread Avi Kivity

Lucas Nussbaum wrote:
> On 28/04/09 at 12:56 +0200, Lucas Nussbaum wrote:
>> I then upgraded to kvm-85 (both the host kernel modules and the
>> userspace), and re-ran the tests. Performance is better (about 85 MB/s),
>> but still very far from the non-virtio case.
>
> I forgot to mention that the strangest result I got was the total amount
> of write blocks queued (as measured by blktrace). I was writing a 1 GB
> file to disk, which resulted in:
>
> - 1 GB of write blocks queued without virtio
> - ~1.7 GB of write blocks queued with virtio on kvm 84
> - ~1.4 GB of write blocks queued with virtio on kvm 85
>
> I don't understand why kvm with virtio writes "more blocks than
> necessary", but that could explain the performance difference.


Are these numbers repeatable?

Try increasing the virtio queue depth. See the call to 
virtio_add_queue() in qemu/hw/virtio-blk.c.
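
For reference, the call in the kvm-85-era source looks roughly like
this (a sketch -- the exact surrounding code may differ); the second
argument is the virtqueue depth, 128 by default:

    /* hw/virtio-blk.c, in virtio_blk_init(): bump the queue depth
     * from the default 128 to, e.g., 512 */
    s->vq = virtio_add_queue(&s->vdev, 512, virtio_blk_handle_output);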


--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.



Re: bad virtio disk performance

2009-04-28 Thread Lucas Nussbaum
On 28/04/09 at 12:56 +0200, Lucas Nussbaum wrote:
> I then upgraded to kvm-85 (both the host kernel modules and the
> userspace), and re-ran the tests. Performance is better (about 85 MB/s),
> but still very far from the non-virtio case.

I forgot to mention that the strangest result I got was the total amount
of write blocks queued (as measured by blktrace). I was writing a 1 GB
file to disk, which resulted in:

- 1 GB of write blocks queued without virtio
- ~1.7 GB of write blocks queued with virtio on kvm 84
- ~1.4 GB of write blocks queued with virtio on kvm 85

I don't understand why kvm with virtio writes "more blocks than
necessary", but that could explain the performance difference.
-- 
| Lucas Nussbaum
| lu...@lucas-nussbaum.net   http://www.lucas-nussbaum.net/ |
| jabber: lu...@nussbaum.fr GPG: 1024D/023B3F4F |


Re: bad virtio disk performance

2009-04-28 Thread Lucas Nussbaum
On 27/04/09 at 19:40 -0400, john cooper wrote:
> Lucas Nussbaum wrote:
>> On 27/04/09 at 13:36 -0400, john cooper wrote:
>>> Lucas Nussbaum wrote:
>>
>> non-virtio:
>> kvm -drive file=/tmp/debian-amd64.img,if=scsi,cache=writethrough -net
>> nic,macaddr=00:16:3e:5a:28:1,model=e1000 -net tap -nographic -kernel
>> /boot/vmlinuz-2.6.29 -initrd /boot/initrd.img-2.6.29 -append
>> root=/dev/sda1 ro console=tty0 console=ttyS0,9600,8n1
>>
>> virtio:
>> kvm -drive file=/tmp/debian-amd64.img,if=virtio,cache=writethrough -net
>> nic,macaddr=00:16:3e:5a:28:1,model=e1000 -net tap -nographic -kernel
>> /boot/vmlinuz-2.6.29 -initrd /boot/initrd.img-2.6.29 -append
>> root=/dev/vda1 ro console=tty0 console=ttyS0,9600,8n1
>>
> One suggestion would be to use a separate drive
> for the virtio vs. non-virtio comparison to avoid
> a Heisenberg effect.

I don't have another drive available, but I tried sending the trace
output over the network instead. The results were the same.

>> So, apparently, with virtio, there's a lot more data being written to
>> disk. The underlying filesystem is ext3, and is mounted as /tmp. It only
>> contains the VM image file. Another difference is that, with virtio, the
>> data was shared equally over all 4 CPUs, while without virtio, CPU0 and
>> CPU1 did all the work.
>> In the virtio log, I also see a (null) process doing a lot of writes.
> Can't say what is causing that -- only took a look
> at the short logs. However the isolation suggested
> above may help factor that out if you need to
> pursue this path.
>>
>> I uploaded the logs to http://blop.info/bazaar/virtio/, if you want to
>> take a look.
> In the virtio case i/o is being issued from multiple
> threads. You could be hitting the cfq close-cooperator
> bug we saw as late as 2.6.28.
>
> A quick test to rule this in/out would be to change
> the block scheduler to other than cfq for the host
> device where the backing image resides -- in your
> case the host device containing /tmp/debian-amd64.img.
>
> Eg for /dev/sda1:
>
> # cat /sys/block/sda/queue/scheduler
> noop anticipatory deadline [cfq]
> # echo deadline > /sys/block/sda/queue/scheduler
> # cat /sys/block/sda/queue/scheduler
> noop anticipatory [deadline] cfq

I tried that (also with noop and anticipatory), but it didn't result in
any improvement.

I then upgraded to kvm-85 (both the host kernel modules and the
userspace), and re-ran the tests. Performance is better (about 85 MB/s),
but still very far from the non-virtio case.

Any other suggestions?
-- 
| Lucas Nussbaum
| lu...@lucas-nussbaum.net   http://www.lucas-nussbaum.net/ |
| jabber: lu...@nussbaum.fr GPG: 1024D/023B3F4F |


Re: bad virtio disk performance

2009-04-27 Thread john cooper

Lucas Nussbaum wrote:
> On 27/04/09 at 13:36 -0400, john cooper wrote:
>> Lucas Nussbaum wrote:
>
> non-virtio:
> kvm -drive file=/tmp/debian-amd64.img,if=scsi,cache=writethrough -net
> nic,macaddr=00:16:3e:5a:28:1,model=e1000 -net tap -nographic -kernel
> /boot/vmlinuz-2.6.29 -initrd /boot/initrd.img-2.6.29 -append
> root=/dev/sda1 ro console=tty0 console=ttyS0,9600,8n1
>
> virtio:
> kvm -drive file=/tmp/debian-amd64.img,if=virtio,cache=writethrough -net
> nic,macaddr=00:16:3e:5a:28:1,model=e1000 -net tap -nographic -kernel
> /boot/vmlinuz-2.6.29 -initrd /boot/initrd.img-2.6.29 -append
> root=/dev/vda1 ro console=tty0 console=ttyS0,9600,8n1

One suggestion would be to use a separate drive
for the virtio vs. non-virtio comparison to avoid
a Heisenberg effect.

> So, apparently, with virtio, there's a lot more data being written to
> disk. The underlying filesystem is ext3, and is mounted as /tmp. It only
> contains the VM image file. Another difference is that, with virtio, the
> data was shared equally over all 4 CPUs, while without virtio, CPU0 and
> CPU1 did all the work.
> In the virtio log, I also see a (null) process doing a lot of writes.

Can't say what is causing that -- only took a look
at the short logs. However the isolation suggested
above may help factor that out if you need to
pursue this path.

> I uploaded the logs to http://blop.info/bazaar/virtio/, if you want to
> take a look.

In the virtio case i/o is being issued from multiple
threads. You could be hitting the cfq close-cooperator
bug we saw as late as 2.6.28.

A quick test to rule this in/out would be to change
the block scheduler to other than cfq for the host
device where the backing image resides -- in your
case the host device containing /tmp/debian-amd64.img.

Eg for /dev/sda1:

# cat /sys/block/sda/queue/scheduler
noop anticipatory deadline [cfq]
# echo deadline > /sys/block/sda/queue/scheduler
# cat /sys/block/sda/queue/scheduler
noop anticipatory [deadline] cfq


And re-run your test to see if this brings
virtio vs. non-virtio closer to the expected
performance.

-john

--
john.coo...@redhat.com



Re: bad virtio disk performance

2009-04-27 Thread Lucas Nussbaum
On 27/04/09 at 13:36 -0400, john cooper wrote:
> Lucas Nussbaum wrote:
>> Hi,
>>
>> I'm experiencing bad disk I/O performance using virtio disks.
>>
>> I'm using Linux 2.6.29 (host & guest), kvm 84 userspace.
>> On the host, and in a non-virtio guest, I get ~120 MB/s when writing
>> with dd (the disks are fast RAID0 SAS disks).
>
> Could you provide detail of the exact type and size
> of i/o load you were creating with dd?

I tried with various block sizes. An example invocation would be:
dd if=/dev/zero of=foo bs=4096 count=262144 conv=fsync (126 MB/s without
virtio, 32 MB/s with virtio).

> Also the full qemu cmd line invocation in both
> cases would be useful.

non-virtio:
kvm -drive file=/tmp/debian-amd64.img,if=scsi,cache=writethrough -net
nic,macaddr=00:16:3e:5a:28:1,model=e1000 -net tap -nographic -kernel
/boot/vmlinuz-2.6.29 -initrd /boot/initrd.img-2.6.29 -append
root=/dev/sda1 ro console=tty0 console=ttyS0,9600,8n1

virtio:
kvm -drive file=/tmp/debian-amd64.img,if=virtio,cache=writethrough -net
nic,macaddr=00:16:3e:5a:28:1,model=e1000 -net tap -nographic -kernel
/boot/vmlinuz-2.6.29 -initrd /boot/initrd.img-2.6.29 -append
root=/dev/vda1 ro console=tty0 console=ttyS0,9600,8n1


>> In a guest with a virtio disk, I get at most ~32 MB/s.
>
> Which non-virtio interface was used for the
> comparison?

if=ide
I got the same performance with if=scsi

>> The rest of the setup is the same. For reference, I'm running kvm -drive
>> file=/tmp/debian-amd64.img,if=virtio.
>>
>> Is such performance expected? What should I check?
>
> Not expected, something is awry.
>
> blktrace(8) run on the host will shed some light
> on the type of i/o requests issued by qemu in both
> cases.

Ah, I found something interesting. btrace summary after writing a 1 GB
file:
--- without virtio:
Total (8,5):
 Reads Queued:          0,        0KiB   Writes Queued:     272259, 1089MiB
 Read Dispatches:       0,        0KiB   Write Dispatches:    9769, 1089MiB
 Reads Requeued:        0                Writes Requeued:        0
 Reads Completed:       0,        0KiB   Writes Completed:    9769, 1089MiB
 Read Merges:           0,        0KiB   Write Merges:      262490, 1049MiB
 IO unplugs:        45973                Timer unplugs:         30
--- with virtio:
Total (8,5):
 Reads Queued:          1,        4KiB   Writes Queued:     430734, 1776MiB
 Read Dispatches:       1,        4KiB   Write Dispatches:  196143, 1776MiB
 Reads Requeued:        0                Writes Requeued:        0
 Reads Completed:       1,        4KiB   Writes Completed:  196143, 1776MiB
 Read Merges:           0,        0KiB   Write Merges:      234578, 938488KiB
 IO unplugs:       301311                Timer unplugs:         25
(I re-ran the test twice, got similar results)

So, apparently, with virtio, there's a lot more data being written to
disk. The underlying filesystem is ext3, and is mounted as /tmp. It only
contains the VM image file. Another difference is that, with virtio, the
data was shared equally over all 4 CPUs, while without virtio, CPU0 and
CPU1 did all the work.
In the virtio log, I also see a (null) process doing a lot of writes.

I uploaded the logs to http://blop.info/bazaar/virtio/, if you want to
take a look.

Thank you,

- Lucas


Re: bad virtio disk performance

2009-04-27 Thread john cooper

Lucas Nussbaum wrote:
> Hi,
>
> I'm experiencing bad disk I/O performance using virtio disks.
>
> I'm using Linux 2.6.29 (host & guest), kvm 84 userspace.
> On the host, and in a non-virtio guest, I get ~120 MB/s when writing
> with dd (the disks are fast RAID0 SAS disks).

Could you provide detail of the exact type and size
of i/o load you were creating with dd?

Also the full qemu cmd line invocation in both
cases would be useful.

> In a guest with a virtio disk, I get at most ~32 MB/s.

Which non-virtio interface was used for the
comparison?

> The rest of the setup is the same. For reference, I'm running kvm -drive
> file=/tmp/debian-amd64.img,if=virtio.
>
> Is such performance expected? What should I check?

Not expected, something is awry.

blktrace(8) run on the host will shed some light
on the type of i/o requests issued by qemu in both
cases.
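
For example (a sketch -- substitute whatever host device actually
backs the image file for sda):

# on the host, while the guest runs the dd test:
blktrace -d /dev/sda -o mytrace    # stop with ^C once dd completes
blkparse -i mytrace | tail -20     # per-device totals are at the end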

-john


--
john.coo...@third-harmonic.com


bad virtio disk performance

2009-04-27 Thread Lucas Nussbaum
Hi,

I'm experiencing bad disk I/O performance using virtio disks.

I'm using Linux 2.6.29 (host & guest), kvm 84 userspace.
On the host, and in a non-virtio guest, I get ~120 MB/s when writing
with dd (the disks are fast RAID0 SAS disks).

In a guest with a virtio disk, I get at most ~32 MB/s.

The rest of the setup is the same. For reference, I'm running kvm -drive
file=/tmp/debian-amd64.img,if=virtio.

Is such performance expected? What should I check?

Thank you,
-- 
| Lucas Nussbaum
| lu...@lucas-nussbaum.net   http://www.lucas-nussbaum.net/ |
| jabber: lu...@nussbaum.fr GPG: 1024D/023B3F4F |