On Thu, Jun 26, 2014 at 11:29 PM, Paolo Bonzini <pbonz...@redhat.com> wrote:
> On 26/06/2014 17:14, Ming Lei wrote:
>>
>> Hi Stefan,
>>
>> I found that VM block I/O throughput is decreased by more than 40%
>> on my laptop, and it looks much worse in my server environment.
>> It is caused by your commit 580b6b2aa2:
>>
>>     dataplane: use the QEMU block layer for I/O
>>
>> I run fio with the config below to test random read:
>>
>> [global]
>> direct=1
>> size=4G
>> bsrange=4k-4k
>> timeout=20
>> numjobs=4
>> ioengine=libaio
>> iodepth=64
>> filename=/dev/vdc
>> group_reporting=1
>>
>> [f]
>> rw=randread
>>
>> Along with the throughput drop, the latency is improved a little.
>>
>> With this commit, the block I/O submitted to the fs becomes much smaller
>> than before, and more io_submit() calls need to be made to the kernel,
>> which means the effective iodepth may become much lower.
>>
>> I am not surprised by the result, since I compared VM I/O
>> performance between qemu and lkvm before. lkvm has no big qemu
>> lock problem and handles I/O in a dedicated thread, but lkvm's block
>> I/O is still much worse than qemu's in terms of throughput, because
>> lkvm doesn't submit block I/O in batches the way the previous
>> dataplane did, IMO.
>
> What is your elevator setting in both the host and the guest? Usually
> deadline gives the best performance.
The test is based on cfq, but I just ran a quick test with deadline, and
there is no obvious difference.

Thanks,
--
Ming Lei