[prior attempts from elsewhere kept bouncing, apologies for any replication]

Gordan Bobic wrote:
> The test is building the Linux kernel (only taking the second run to give the 
> test the benefit of local cache):
>
> make clean; make -j8 all; make clean; sync; time make -j8 all
>
> This takes about 10 minutes with IDE disk emulation and about 13 minutes with 
> virtio. I ran the tests multiple times with most non-essential services on the 
> host switched off (including cron/atd), and the guest in single-user mode to 
> reduce the "noise" in the test to the minimum, and the results are pretty 
> consistent, with virtio being about 30% behind.

For an observed 30% wall clock difference on an operation as
complex as a kernel build, I'd expect the underlying i/o
throughput disparity to be substantially greater.  Did you
try a simpler, more regular load, e.g. a streaming dd read
of various block sizes from the guest's raw disk devices?
That is also considerably easier to debug than the complex
i/o load generated by a build.
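
Something along these lines would do (the /dev/vdb device name
here is just a placeholder; substitute whichever devices your
guest actually sees for the virtio and ide disks):

  # O_DIRECT streaming reads of ~1GB at a few block sizes,
  # bypassing the guest page cache; dd prints throughput on exit
  dd if=/dev/vdb of=/dev/null bs=4k  count=262144 iflag=direct
  dd if=/dev/vdb of=/dev/null bs=64k count=16384  iflag=direct
  dd if=/dev/vdb of=/dev/null bs=1M  count=1024   iflag=direct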

One way to chop up the problem space is to use blktrace
on the host to observe both the i/o patterns coming out
of qemu and the host's response to them in terms of
turnaround time.  I expect you'll see requests of a
somewhat different nature generated by qemu w/r/t blocking
and the number of threads serving virtio_blk requests
relative to ide, but the host response should be
essentially the same in terms of data returned per unit
time.
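
For example, assuming the guest's image sits on /dev/sda on
the host (substitute your actual backing device):

  blktrace -d /dev/sda -w 60     # capture 60s of trace data
  blkparse -i sda -d sda.bin     # merge the per-cpu traces
  btt -i sda.bin                 # Q2C turnaround statistics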

If the host appears to be turning around i/o requests with
similar latency in both cases, the problem would be a lower
frequency of requests generated by qemu in the virtio_blk
case.  Here it would be useful to know the host load
generated by the guest in both cases.
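
Something as simple as sampling the qemu process on the host
during the build would do; the pidof argument below is a guess,
as your qemu binary may be named differently:

  # per-second cpu usage of the qemu process serving the guest
  pidstat -u -p $(pidof qemu-kvm) 1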

-john


-- 
john.coo...@redhat.com