Hi All,

While in lockdown I decided to do some performance testing on KVM. I had believed that passing a block device through to a guest, rather than using a QCOW2 file, would give better performance. I wanted to see whether that was true, and whether using iSCSI storage was any better or worse.

My test hardware is quite modest, and that may have adversely affected what I measured. The processor is an Intel Core2 6300 @ 1.86GHz with VT-x support; it shows 3733 BogoMIPS at startup. There's 8GB RAM and an Intel 82801HB SATA controller on a Gigabyte motherboard. The disks are two 3TB 7200RPM SATA drives set up with a RAID 1 LVM ext3 partition, plus other non-RAID partitions to test against.
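
For completeness, the sort of sanity check I'd run to confirm the extensions are visible and the kvm modules loaded is just (output will obviously vary by machine):

    grep -c -E 'vmx|svm' /proc/cpuinfo   # non-zero means VT-x/AMD-V is exposed to the OS
    lsmod | grep kvm                     # expect kvm_intel and kvm on an Intel box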

I used Fedora 32 as the KVM host and CentOS 8 as the guest.

On the host I got 60 MB/s write and 143 MB/s read on RAID1/LVM/ext3. I wrote and read 10GB files using dd; 10GB so as to overflow any memory-based caching. Without LVM that changed to 80 MB/s write and 149 MB/s read.
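
For anyone wanting to reproduce it, the tests amount to something like the following (the file path is just an example, and the conv=fdatasync and cache drop are there to keep the page cache out of the figures):

    # 10GB sequential write, with the final flush to disk included in the timing
    dd if=/dev/zero of=/mnt/test/ddtest.img bs=1M count=10240 conv=fdatasync
    # drop the page cache, then read the same file back
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/mnt/test/ddtest.img of=/dev/null bs=1M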

I tried all kinds of VM setups: normal QCOW2, and passthrough of block devices, both RAID/LVM and non-RAID/LVM. I consistently got around 14.5 MB/s write and 16.5 MB/s read, and similar figures with iSCSI targets backed by both files and block devices on the same host. The best I got by tweaking the performance settings in KVM was a modest improvement to 15 MB/s write and 17 MB/s read.
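
For reference, the settings I mean are the virtio disk bus plus the cache and io modes. Checking what a guest is actually using looks roughly like this (domain name and LV path are placeholders, and the -drive line is just the QEMU spelling of the same options, normally set via virsh edit or virt-manager):

    # see what bus, cache and io mode the guest disk is actually using
    virsh dumpxml centos8 | grep -A4 '<disk'
    # QEMU spelling of a raw LV on virtio with host caching off and native AIO:
    #   -drive file=/dev/vg0/c8root,format=raw,if=virtio,cache=none,aio=native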

As a reference point I ran the same test on a configuration that has CentOS 6 on Hyper-V on an HP ML350 with 7200RPM SATA disks. I appreciate that's much more capable hardware, although SATA rather than SAS, but I measured 176 MB/s write and 331 MB/s read. That system uses a file on the underlying NTFS file system to provide a block device to the CentOS 6 VM.

I also tried booting the C8 guest from an iSCSI target on a CentOS 6 laptop, which worked fine over a 1Gb network. I measured 16.8 MB/s write and 23.1 MB/s read that way.
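
In case it helps anyone picture the setup, the initiator side of that is just the standard open-iscsi discovery and login, along these lines (the IP and IQN below are placeholders):

    # find targets exported by the laptop, then log in to the one backing the guest
    iscsiadm -m discovery -t sendtargets -p 192.168.1.50
    iscsiadm -m node -T iqn.2020-05.uk.example:c8root -p 192.168.1.50 --login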

I noticed an increase in processor load while running my dd tests, although I didn't take any actual measurements.

What to conclude? Is the hardware just not fast enough? Are newer processors better at abstracting the VM guests with less performance impact? What am I missing??

Any thoughts from virtualisation experts here most welcome.

Thanks

Ken


