Hi!

I hope this is the right place to ask. :)

On a rather recent x86_64 server I'm facing very bad write performance.
The server is an 8-core Xeon E5 with 64 GiB RAM.

Storage is an ext4 filesystem on top of LVM, which is backed by DRBD.
On the host side, dd can easily write at 100 MiB/s to the ext4 filesystem.
The OS is CentOS 6 with kernel 3.12.x.
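For reference, the host number comes from a plain sequential dd write; the
exact command isn't shown above, so the path, size, and flags below are my
assumptions:

```shell
# Hypothetical host-side sequential write benchmark (path and size are
# assumptions, not the original command).  conv=fdatasync makes dd flush
# to disk before reporting throughput, so the page cache does not inflate
# the result; increase count (e.g. 1024 for 1 GiB) for a steadier number.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync
rm -f /tmp/ddtest
```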

Within a KVM Linux guest, sequential write throughput is always only
between 20 and 30 MiB/s.
The guest OS is CentOS 6; it uses virtio-blk with cache=none, io=native,
and the deadline I/O scheduler.
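The guest disk settings above would correspond to a qemu command line
roughly like the following sketch (the LV path is a placeholder, and all
other guest options are omitted; this is not my actual invocation):

```shell
# Sketch of the qemu-kvm disk options matching virtio-blk, cache=none,
# io=native.  The LV path is a placeholder.  Note that aio=native needs
# O_DIRECT, i.e. cache=none, which matches the setup described here.
qemu-system-x86_64 \
    -drive file=/dev/vg0/guest1-disk,if=virtio,format=raw,cache=none,aio=native
```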

The worst thing is that the total I/O bandwidth of KVM seems to be capped
at about 30 MiB/s: if I run the same write benchmark within 5 guests
simultaneously, each one achieves only 6 or 7 MiB/s.
I see the same values when a guest writes directly to a disk such as vdb,
and putting the guest disk directly on LVM instead of in an ext4-backed
file also didn't help.
It really looks like 30 MiB/s is the upper bound for KVM disk I/O here.

Are these values expected for my setup?
I'm also interested in where the bottleneck is in the kernel. perf top has
not given me any clue so far, and even with all guests running the
benchmark the load stays below 1.

-- 
Thanks,
//richard