On 07/17/2012 04:52 PM, Paolo Bonzini wrote:
> On 17/07/2012 10:29, Asias He wrote:
>> So, vhost-blk at least saves ~6 syscalls for us in each request.
> Are they really 6? If I/O is coalesced by a factor of 3, for example
> (i.e. each exit processes 3 requests), it's really 2 syscalls per request.
Well, I am counting the number of syscalls in one notify-and-response
cycle. Sure, the I/O can be coalesced.
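
For reference, a minimal sketch of the submission half of such a cycle,
assuming the usual ioeventfd + Linux AIO (libaio) combination in a
userspace virtio-blk backend. All names are illustrative, not QEMU's or
kvm tool's actual code; the remaining syscalls appear in the
completion-side sketch further down. With ~6 syscalls per cycle and a
coalescing factor of c requests per exit, the cost is ~6/c syscalls per
request (Paolo's example: 6/3 = 2).

#include <stdint.h>
#include <unistd.h>
#include <libaio.h>

/* Guest kicked the virtqueue: the ioeventfd became readable. */
static void handle_guest_kick(int ioeventfd, io_context_t ctx,
                              struct iocb *iocbs[], long nr)
{
    uint64_t cnt;

    read(ioeventfd, &cnt, sizeof(cnt)); /* syscall 1: clear the notification */
    io_submit(ctx, nr, iocbs);          /* syscall 2: submit queued requests */
}
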
> Also, is there anything we can improve? Perhaps we can modify epoll and
> ask it to clear the eventfd for us (would save 2 reads)? Or
> io_getevents (would save 1)?
I guess you mean qemu here. Yes, in theory, qemu's block layer can be
improved to achieve performance similar to vhost-blk or kvm tool's
userspace virtio-blk. But I think it makes no sense to block one solution
just because another, so far theoretical, solution exists, namely: we
could do something similar in qemu.
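
To make the proposed savings concrete, here is the matching
completion-side sketch under the same assumptions (epoll + an eventfd
attached to the AIO context + an irqfd; again illustrative, not the
actual implementation). Paolo's two suggestions above map onto it
directly: having epoll clear eventfds would drop this read() plus the one
in the kick path above, and folding completion retrieval into the wakeup
would drop the io_getevents().

#include <stdint.h>
#include <unistd.h>
#include <sys/epoll.h>
#include <libaio.h>

/* AIO completions arrived: the aio eventfd became readable. */
static void handle_completions(int epfd, int aio_efd, io_context_t ctx,
                               int irqfd)
{
    struct epoll_event ev;
    struct io_event done[16];
    uint64_t cnt, one = 1;
    int i, n;

    epoll_wait(epfd, &ev, 1, -1);             /* syscall 3: wait for events   */
    read(aio_efd, &cnt, sizeof(cnt));         /* syscall 4: clear the eventfd */
    n = io_getevents(ctx, 1, 16, done, NULL); /* syscall 5: fetch completions */
    for (i = 0; i < n; i++) {
        /* fill in the used ring for done[i] (omitted) */
    }
    write(irqfd, &one, sizeof(one));          /* syscall 6: inject interrupt  */
}
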
> It depends. Like vhost-scsi, vhost-blk has the problem of a crippled
> feature set: no support for block device formats, non-raw protocols,
> etc. This makes it different from vhost-net.
Data-plane qemu also has this crippled feature set problem, no? Do users
always choose block device formats like qcow2? What if they prefer a raw
image or a raw block device?
> So it begs the question, is it going to be used in production, or just a
> useful reference tool?
This should be decided by the user; I cannot speak for them. What is
wrong with adding one option that users can decide on for themselves?
--
Asias