Hi Jorge
How do you actually enable io_uring via CloudStack? My KVM host does meet the necessary requirements.
I enabled the io.policy setting in the global settings, on the local storage, and in the VM settings via the UI, yet the XML dump of the VM still doesn't include io_uring under the driver element for some reason.
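For what it's worth, on the KVM host itself I would expect something like the following to show whether the setting actually landed (just a sketch; i-2-345-VM is a made-up instance name, and as far as I understand the VM needs a stop/start in CloudStack after changing the setting so that libvirt regenerates the domain XML):

virsh list --all                          # find the internal instance name, e.g. i-2-345-VM
virsh dumpxml i-2-345-VM | grep -i "io="  # the <driver> line should show io='io_uring' if it is active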
On 10 Jul 2023, at 5:27 PM, Granwille Strauss <granwi...@namhost.com.invalid> wrote:
Hi Jorge
Thank you so much for this. I used your FIO config and
surprisingly it seems fine:
write-test: (g=0): rw=randrw, bs=(R) 1300MiB-1300MiB, (W) 1300MiB-1300MiB, (T) 1300MiB-1300MiB, ioengine=libaio, iodepth=1
fio-3.19
Run status group 0 (all jobs):
   READ: bw=962MiB/s (1009MB/s), 962MiB/s-962MiB/s (1009MB/s-1009MB/s), io=3900MiB (4089MB), run=4052-4052msec
  WRITE: bw=321MiB/s (336MB/s), 321MiB/s-321MiB/s (336MB/s-336MB/s), io=1300MiB (1363MB), run=4052-4052msec
This is without enabling io_uring. I see I can enable it per VM using the UI by setting io.policy = io_uring. I will enable this on a few VMs and see if it works better.
On 7/10/23 15:41, Jorge Luiz Correa wrote:
Hi Granwille! About the READ/WRITE performance, as Levin suggested, check the XML of the virtual machines, looking at the disk/device section. Look for io='io_uring'.
As stated here:
https://github.com/apache/cloudstack/issues/4883
CloudStack can use io_uring with Qemu >= 5.0 and Libvirt >= 6.3.0.
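For reference, one quick way to confirm a host meets those thresholds (just a sketch, run directly on the KVM host; nothing CloudStack-specific is assumed):

virsh version                           # prints the libvirt library/daemon version and the running QEMU hypervisor version
/usr/bin/qemu-system-x86_64 --version   # should report QEMU >= 5.0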
I tried to run some tests in setups similar to your environment.
######################################
VM in NFS Primary Storage (Hybrid NAS)
Default disk offering, thin (no restriction)
<devices>
  <emulator>/usr/bin/qemu-system-x86_64</emulator>
  <disk type='file' device='disk'>
    <driver name='qemu' type='qcow2' cache='none' io='io_uring'/>
    <source file='/mnt/74267a3b-46c5-3f6c-8637-a9f721852954/fb46fd2c-59bd-4127-851b-693a957bd5be' index='2'/>
    <backingStore/>
    <target dev='vda' bus='virtio'/>
    <serial>fb46fd2c59bd4127851b</serial>
    <alias name='virtio-disk0'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
  </disk>
fio --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test \
    --filename=/tmp/random_read_write.fio --bs=1300m --iodepth=8 --size=6G \
    --readwrite=randrw
READ: 569MiB/s
WRITE: 195MiB/s
######################################
VM in Local Primary Storage (local SSD host)
Default disk offering, thin (no restriction)
<devices>
  <emulator>/usr/bin/qemu-system-x86_64</emulator>
  <disk type='file' device='disk'>
    <driver name='qemu' type='qcow2' cache='none' io='io_uring'/>
    <source file='/var/lib/libvirt/images/d100c55d-8ff2-45e5-8452-6fa56c0725e5' index='2'/>
    <backingStore/>
    <target dev='vda' bus='virtio'/>
    <serial>fb46fd2c59bd4127851b</serial>
    <alias name='virtio-disk0'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
  </disk>
fio --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test \
    --filename=/tmp/random_read_write.fio --bs=1300m --iodepth=8 --size=6G \
    --readwrite=randrw
First run (a little slow when using "thin" provisioning, because space in the qcow2 still needs to be allocated):
READ: bw=796MiB/s
WRITE: bw=265MiB/s
Second run:
READ: bw=952MiB/s
WRITE: bw=317MiB/s
##############################
Directly on the local SSD of the host:
fio --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test \
    --filename=/tmp/random_read_write.fio --bs=1300m --iodepth=8 --size=6G \
    --readwrite=randrw
READ: bw=931MiB/s
WRITE: bw=310MiB/s
Note: the fio test parameters need to be adjusted for your environment, since results depend on the number of CPUs, memory, --bs, --iodepth, etc.
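For example, a more random-IO-oriented variation might look something like this (just an illustrative sketch based on the command above; the 4k block size, deeper queue and multiple jobs are my own example values, not ones used in the tests here):

fio --ioengine=libaio --direct=1 --gtod_reduce=1 --name=randrw-4k \
    --filename=/tmp/random_read_write.fio --bs=4k --iodepth=32 --numjobs=4 \
    --size=4G --runtime=60 --time_based --readwrite=randrw --group_reporting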
The host is running a 5.15.0-43 kernel, QEMU 6.2 and libvirt 8; CloudStack is 4.17.2. So a VM on the host's local SSD can get disk performance very close to that of the host itself.
I hope this helps!
Thanks.