On Tue, Apr 10, 2012 at 6:25 PM, Michael Baysek <mbay...@liquidweb.com> wrote:
> Well, I'm trying to determine which I/O method currently has the least
> overhead and gives the best performance for both reads and writes.
>
> I am doing my testing by putting the entire guest onto a ramdisk.  I'm
> working on an i5-760 with 16GB RAM and VT-d enabled.  I am running the
> standard CentOS 6 kernel with the 0.12.1.2 release of qemu-kvm that comes
> stock on CentOS 6.  The guest is configured with 512 MB RAM and 4 CPU
> cores, with its /dev/vda being the ramdisk on the host.

Results collected on a ramdisk usually do not reflect the performance
you get with a real disk or SSD.  I suggest benchmarking the host/guest
configuration you actually want to deploy.

> I've been using iozone 3.98 with -O -l32 -i0 -i1 -i2 -e -+n -r4K -s250M to 
> measure performance.

I haven't looked up all the options, but I think you need -I to use
O_DIRECT and bypass the guest page cache; otherwise you are not
benchmarking I/O performance but overall file system/page cache
performance.
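
If I remember the iozone flags correctly (worth double-checking the man
page for 3.98, since -I maps to O_DIRECT only where the platform
supports it), the run above with direct I/O would look something like:

  iozone -I -O -l32 -i0 -i1 -i2 -e -+n -r4K -s250M

With -I the guest page cache no longer absorbs the 4K requests, so the
numbers should reflect the virtio path to your ramdisk-backed /dev/vda
rather than cached file system performance.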

Stefan