Hi, I've been using KVM for a bunch of VMs on Hardy and now Lucid, and with CPU and memory performing quite well it's been no problem. I'm now looking at our ageing DB server and want to move it into a VM, but the disk performance is dismal. Or am I doing it wrong? I'm quite comfortable losing 25% or even 33% to virtualization, as the benefits are worth it.
Here are the numbers I have so far (using dbench, Ubuntu Lucid and ext4):
Bare metal, using a slice (partition) for the host OS:
Throughput 2586.65 MB/sec 10 clients 10 procs
max_latency=18.029 ms
Throughput 3631.62 MB/sec 50 clients 50 procs
max_latency=239.773 ms
Throughput 3635.12 MB/sec 100 clients 100 procs
max_latency=458.094 ms
KVM guest, using a block device:
Throughput 1130.52 MB/sec 10 clients 10 procs
max_latency=262.047 ms
Throughput 513.972 MB/sec 50 clients 50 procs
max_latency=6561.761 ms
Throughput 465.593 MB/sec 100 clients 100 procs
max_latency=2520.585 ms
I tried VMware just as a comparison, using a VMDK file (not even a block device):
Throughput 1482.44 MB/sec 10 clients 10 procs
max_latency=53.682 ms
Throughput 2049.45 MB/sec 50 clients 50 procs
max_latency=492.187 ms
Throughput 2098.71 MB/sec 100 clients 100 procs
max_latency=681.216 ms
Using LVM was worse, and qcow2 was worse still, as expected.
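(For what it's worth, if a file-backed image is unavoidable, I understand qcow2's lazy metadata allocation is a known cost, and preallocating it at creation time narrows the gap. A hedged sketch only; the image path and size are hypothetical, and the preallocation option needs a reasonably recent qemu-img:

```shell
# Sketch: preallocate the qcow2 metadata tables at creation time, so
# first writes don't also pay the table-update cost. Path and size
# are hypothetical examples.
qemu-img create -f qcow2 -o preallocation=metadata \
    /var/lib/libvirt/images/db.qcow2 50G
```

Raw images or raw block devices should still beat qcow2 either way.)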
That's a big pill to swallow for KVM.
Any ideas on the best way to get good disk performance with KVM?
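For reference, the setup I understand is usually recommended for this workload is a raw block device, the virtio bus instead of emulated IDE, and cache='none' so the host page cache isn't double-buffering the guest's I/O. A sketch of the libvirt disk stanza, written from memory, so treat the details (and the /dev/vg0/db path) as assumptions:

```xml
<!-- Sketch of a libvirt <disk> stanza: raw block device backing,
     virtio bus, host caching disabled. /dev/vg0/db is hypothetical. -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/vg0/db'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

The guest also needs the virtio_blk driver (standard in Lucid kernels) and will see the disk as /dev/vda rather than /dev/sda.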
Thanks
--
David Peall
Domain Name Services
--
ubuntu-server mailing list
[email protected]
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server
More info: https://wiki.ubuntu.com/ServerTeam
