OK, thank you very much for your detailed explanation.

But I have another question, about a big performance change from qemu-1.5.3
to qemu-1.6.0-rc0.

When I run I/O performance tests against a ramdisk, the results are as follows.

[fio-test]       rw    bs  iodepth  jobs  bw       iops
-------------------------------------------------------
qemu-1.5.3       read  4k  32       1     285MB/s  73208
qemu-1.6.0-rc0   read  4k  32       1     253MB/s  64967

virtio-blk shows the same regression.

I know there are many differences between qemu-1.5 and qemu-1.6, but I am
confused about which new feature could impact performance so much. Do you
know what it might be?


On 2015-2-3 16:49, Paolo Bonzini wrote:
> On 03/02/2015 03:56, Wangting (Kathy) wrote:
>> Sorry, I find that the patch "virtio-scsi: Optimize virtio_scsi_init_req"
>> can solve this problem.
> 
> Great that you could confirm that. :)
> 
>> By the way, can you tell me the reason of the change about cdb and sense?
> 
> cdb and sense are variable-size items.  ANY_LAYOUT support changed
> VirtIOSCSIReq: instead of having a pointer to the request, it copies the
> request from guest memory into VirtIOSCSIReq.  This is required because
> the request might not be contiguous in guest memory.  And because the
> request and response headers (e.g. VirtIOSCSICmdReq and
> VirtIOSCSICmdResp) are included by value in VirtIOSCSIReq, the
> variable-sized fields have to be treated specially.
> 
> Only one of them can remain in VirtIOSCSIReq, because you cannot have a
> flexible array member (e.g. "uint8_t sense[];") in the middle of a struct.
> 
> cdb is always used, so it is chosen for the variable-sized part of
> VirtIOSCSIReq: cdb was simply moved from VirtIOSCSICmdReq to VirtIOSCSIReq.
> 
> Requests that complete with sense data, on the other hand, are not a
> fast path.  Hence sense is retrieved from the SCSIRequest, and
> virtio_scsi_command_complete copies it into the guest buffer via
> scsi_req_get_sense + qemu_iovec_from_buf.
> 
> Paolo
> 
> 