On 08/02/2010 12:15 PM, John Leach wrote:
Hi,

I've come across a problem with read and write disk I/O performance when
using O_DIRECT from within a kvm guest.  With O_DIRECT, reads and writes
are much slower at smaller block sizes; depending on the block size used,
I've seen them run up to 10 times slower.

For example, with an 8k block size, reading directly from /dev/vdb
without O_DIRECT I see 750 MB/s, but with O_DIRECT I see 79 MB/s.

For comparison, reading in O_DIRECT mode in 8k blocks directly from the
backend device on the host gives 2.3 GB/s.  Reading in O_DIRECT mode
from a Xen guest on the same hardware manages 263 MB/s.
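
The host figure is just the same dd test run against the backend device
on the host, roughly:

# read the backend device (the dm zero target set up below) in 8k blocks
# with O_DIRECT; count here is arbitrary, just enough to get a stable rate
dd if=/dev/mapper/zero of=/dev/null bs=8k count=131072 iflag=direct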

Stefan has a few fixes for this behavior that help a lot.  One of them
(avoiding a memset) is already upstream, but not in 0.12.x.

The other two are not done yet but should be on the ML in the next couple
of weeks.  They involve using ioeventfd for guest-to-host notification and
dropping the block queue lock while doing a kick notification.

Regards,

Anthony Liguori

Writing is affected in the same way, and exhibits the same behaviour
with O_SYNC too.
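
The write tests are the same sort of dd invocation in the other
direction, something like:

# write to the virtio disk with O_DIRECT
dd if=/dev/zero of=/dev/vdb bs=8k oflag=direct

# and the O_SYNC variant (buffered, but synchronous writes)
dd if=/dev/zero of=/dev/vdb bs=8k oflag=sync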

Watching with vmstat on the host, I see the same number of blocks being
read, but about 14 times the number of context switches in O_DIRECT mode
(4500 cs vs. 63000 cs) and a little more cpu usage.
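
That's from running something like this on the host while the dd in the
guest is going, and comparing the "cs" column:

# 1-second samples; "cs" is context switches per second
vmstat 1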

The device I'm writing to is a device-mapper zero device that generates
zeros on reads and throws away writes; you can set it up
at /dev/mapper/zero like this:

echo "0 21474836480 zero" | dmsetup create zero

My libvirt config for the disk is:

<disk type='block' device='disk'>
   <driver cache='none'/>
   <source dev='/dev/mapper/zero'/>
   <target dev='vdb' bus='virtio'/>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
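
If you want to reproduce it, the snippet can be dropped into the domain
XML with virsh edit, or saved to a file and attached with virsh (the
guest name and filename here are just placeholders):

# edit the domain definition directly...
virsh edit guest1

# ...or attach the disk from an XML snippet file
virsh attach-device guest1 zero-disk.xml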

which translates to these kvm args:

-device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0
-drive file=/dev/mapper/zero,if=none,id=drive-virtio-disk1,cache=none

I'm testing with dd:

dd if=/dev/vdb of=/dev/null bs=8k iflag=direct

As a side note, as you increase the block size, read performance in
O_DIRECT mode starts to overtake non-O_DIRECT reads (from about a 150k
block size).  By a 550k block size I'm seeing 1 GB/s reads with
O_DIRECT and 770 MB/s without.
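
The crossover is easy to see with a quick sweep over block sizes in the
guest, something like:

# compare O_DIRECT vs. buffered reads at a few block sizes;
# count=20000 is arbitrary, just enough for dd to report a stable rate
for bs in 8k 64k 150k 550k; do
    echo "== bs=$bs O_DIRECT =="
    dd if=/dev/vdb of=/dev/null bs=$bs count=20000 iflag=direct 2>&1 | tail -n1

    echo "== bs=$bs buffered =="
    echo 3 > /proc/sys/vm/drop_caches   # don't measure the guest page cache
    dd if=/dev/vdb of=/dev/null bs=$bs count=20000 2>&1 | tail -n1
done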

Of course I see this performance situation with real disks too; I just
wanted to rule out the variables of moving metal around.

I get the same behaviour on CentOS 5.5 and the latest RHEL 6 beta (which
is kvm 0.12 and kernel 2.6.32).  Hardware is a Dell i510 with 64GB RAM
and 12 Intel(R) Xeon(R) L5640 2.27GHz CPUs (running only one kvm guest
with 1GB RAM).  Host disk scheduler is deadline; guest disk scheduler is
noop.

Guest distro is Ubuntu Lucid, 2.6.32-22-server.  I've tried both
32-bit PAE and 64-bit guest kernels.

Anyone got any thoughts on this?

Thanks,

John.


