Hi,

I am seeing a large discrepancy between host and guest disk bandwidth when
using virtio-blk.
I would love some insight into the best ways to tune virtio-blk for maximum
bandwidth.

For context, the hypervisor connects to a remote NVMe-oF TCP target over a
200 Gbps NIC. On the host I measure up to 15 GB/s of read bandwidth with fio
(close to NIC line rate). After passing the block device to a QEMU guest via
virtio-blk, I can't get more than 4 GB/s.
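
For reference, the host-side number comes from a plain fio read job along
these lines (the device path and job parameters here are representative, not
the exact values I used):

  fio --name=seqread --filename=/dev/nvme0n1 --rw=read --bs=128k \
      --iodepth=64 --numjobs=8 --ioengine=libaio --direct=1 \
      --runtime=60 --time_based --group_reporting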

The guest has 96 vCPUs. I have tried tuning the queue depth and queue count,
but neither has made a difference. The biggest bandwidth improvement I have
found so far has come from enabling IOThreads.
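
Concretely, the virtio-blk device is configured roughly like this (node
names, queue counts, and the iothread id below are illustrative rather than
my exact command line):

  qemu-system-x86_64 ... \
    -object iothread,id=iothread0 \
    -blockdev driver=host_device,node-name=disk0,filename=/dev/nvme0n1,cache.direct=on \
    -device virtio-blk-pci,drive=disk0,num-queues=8,queue-size=256,iothread=iothread0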

What am I missing here?
