On 18/06/2014 06:04, Ming Lei wrote:
> For virtio-blk, I don't think more queues are always better; we need to
> take the following into account on the host side:
> 
> - the host storage's peak performance: generally it is reached with more
> than one job using libaio (say N jobs, so we can basically use N
> iothreads per device in qemu to try to reach peak performance)
> 
> - the iothreads' load (if the iothreads are already fully loaded,
> adding queues doesn't help at all)
> 
> In my test, I use only the current per-device iothread (x-dataplane)
> in qemu to handle the two vqs' notifications and process all I/O from
> both vqs, and it looks like it can improve IOPS by ~30%.
> 
> For virtio-scsi, the current usage doesn't make full use of blk-mq's
> advantage either, because only one vq is active at a time, so I
> suspect the benefit of multiple vqs won't be very large. I'd like to
> post patches to support that first, then provide test data with
> more queues (8, 16).
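As a concrete illustration of the "more than one job with libaio" probe described in the quote, a fio job along these lines can be swept over `numjobs` to find the N at which host IOPS saturates. The device path and the particular numbers here are placeholders, not values from the original measurement:

```ini
; Hypothetical fio job to find the host storage's saturation point.
; Sweep numjobs (N) upward until IOPS stops improving.
[global]
ioengine=libaio
direct=1
rw=randread
bs=4k
iodepth=32
runtime=30
filename=/dev/sdX     ; assumption: the backing host device

[probe]
numjobs=4             ; the N to sweep
```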

Hi Ming Lei,

Would you like to repost these patches now that MQ support is in the kernel?

Also, I changed my mind about moving linux-aio to AioContext.  I now
think it's a good idea, because it limits the number of io_getevents
syscalls. O:-)  So I would be happy to review your patches for that as well.

Paolo
--
To unsubscribe from this list: send the line "unsubscribe linux-api" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html