v1:
 * make seg_max size dependent on virtqueue size
 * don't expose seg_max as a property
 * add new machine types with increased queue size
 * add a test to check the new machine types
 * check queue size for non-modern virtio devices
---

From: "Denis V. Lunev" <d...@openvz.org>
Linux guests submit IO requests no longer than PAGE_SIZE * max_seg, where max_seg is the field reported by the SCSI controller. Thus a typical sequential read of 1 MB size results in the following IO pattern from the guest:

  8,16   1    15754     2.766095122  2071  D   R 2095104 + 1008 [dd]
  8,16   1    15755     2.766108785  2071  D   R 2096112 + 1008 [dd]
  8,16   1    15756     2.766113486  2071  D   R 2097120 + 32 [dd]
  8,16   1    15757     2.767668961     0  C   R 2095104 + 1008 [0]
  8,16   1    15758     2.768534315     0  C   R 2096112 + 1008 [0]
  8,16   1    15759     2.768539782     0  C   R 2097120 + 32 [0]

The IO was generated by
  dd if=/dev/sda of=/dev/null bs=1024 iflag=direct

This effectively means that on rotational disks we observe 3 IOPS for every 2 MB processed, which definitely affects both guest and host IO performance negatively.

The cure is relatively simple: we should report the lengthy scatter-gather ability of the SCSI controller. Fortunately the situation here is very good. The VirtIO transport layer can accommodate 1024 items in one request, while we are using only 128. This situation has been present since almost the very beginning. 2 items are dedicated to request metadata, thus we should publish VIRTQUEUE_MAX_SIZE - 2 as max_seg.

The following pattern is observed after the patch:

  8,16   1     9921     2.662721340  2063  D   R 2095104 + 1024 [dd]
  8,16   1     9922     2.662737585  2063  D   R 2096128 + 1024 [dd]
  8,16   1     9923     2.665188167     0  C   R 2095104 + 1024 [0]
  8,16   1     9924     2.665198777     0  C   R 2096128 + 1024 [0]

which is much better.

The dark side of this patch is that we are tweaking a guest-visible parameter, though this should be relatively safe, as the transport layer support described above has been present in QEMU and host Linux for a very long time. The patch adds a configurable property for VirtIO SCSI with a new default, and a hardcoded value for VirtIO Block, which does not provide a good configuration framework.

Unfortunately the commit cannot be applied as is. For the real cure we need the guest to be fixed to accommodate that queue length, which is done only in the latest 4.14 kernel.
Thus we are going to expose the property and tweak it at the machine type level.

The problem with old kernels is that they have a max_segments <= virtqueue_size restriction, the violation of which causes the guest to crash. To fix the case described above on old kernels we can increase virtqueue_size to 256 and max_segments to 254. The pitfall here is that SeaBIOS allows only virtqueue sizes < 128; however, the SeaBIOS patch extending that value to 256 is pending.

Denis Plotnikov (4):
  virtio: protect non-modern devices from too big virtqueue size setting
  virtio: make seg_max virtqueue size dependent
  virtio: increase virtqueue sizes in new machine types
  iotests: add test for virtio-scsi and virtio-blk machine type settings

 hw/block/virtio-blk.c       |   2 +-
 hw/core/machine.c           |  14 ++++
 hw/i386/pc_piix.c           |  16 +++-
 hw/i386/pc_q35.c            |  14 +++-
 hw/scsi/virtio-scsi.c       |   2 +-
 hw/virtio/virtio-blk-pci.c  |   9 +++
 hw/virtio/virtio-scsi-pci.c |  10 +++
 include/hw/boards.h         |   6 ++
 tests/qemu-iotests/267      | 154 ++++++++++++++++++++++++++++++++++++
 tests/qemu-iotests/267.out  |   1 +
 tests/qemu-iotests/group    |   1 +
 11 files changed, 222 insertions(+), 7 deletions(-)
 create mode 100755 tests/qemu-iotests/267
 create mode 100644 tests/qemu-iotests/267.out

-- 
2.17.0