[Bug 1856335] Re: Cache Layout wrong on many Zen Arch CPUs
Yep, I read the Reddit thread; I had no idea this was possible. Still, both solutions are ugly workarounds and it would be nice to fix this properly. But at least I don't have to patch and compile QEMU on my own anymore.

-- 
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1856335

Title: Cache Layout wrong on many Zen Arch CPUs

Status in QEMU: New

Bug description:
AMD CPUs have L3 cache per 2, 3 or 4 cores. Currently, TOPOEXT seems to always map cache as if it were a 4-core-per-CCX CPU, which is incorrect and costs upwards of 30% performance (more realistically 10%) in L3-cache-layout-aware applications.

Example on a 4-CCX CPU (1950X with 8 cores and no SMT), EPYC-IBPB AMD. In Windows, coreinfo reports correctly:

Unified Cache 1, Level 3, 8 MB, Assoc 16, LineSize 64
Unified Cache 6, Level 3, 8 MB, Assoc 16, LineSize 64

On a 3-CCX CPU (3960X with 6 cores and no SMT), EPYC-IBPB AMD, in Windows, coreinfo reports incorrectly:

-- Unified Cache 1, Level 3, 8 MB, Assoc 16, LineSize 64
** Unified Cache 6, Level 3, 8 MB, Assoc 16, LineSize 64

Validated against 3.0, 3.1, 4.1 and 4.2 versions of qemu-kvm.

With newer QEMU there is a fix (that does behave correctly) in using the dies parameter. The problem is that the dies are exposed differently than how AMD does it natively: they are exposed to Windows as sockets, which means that if you are not a business user, you can't ever have a machine with more than two CCXs (6 cores), as consumer versions of Windows only support two sockets. (Should this be reported as a separate bug?)

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1856335/+subscriptions
[Bug 1856335] Re: Cache Layout wrong on many Zen Arch CPUs
The problem is caused by the fact that on Ryzen CPUs with disabled cores, the APIC IDs are not sequential on the host - in order for the cache topology to be configured properly, there is a 'hole' in the APIC ID and core ID numbering (I have attached the full cpuid output for my 3900X). Unfortunately, adding holes to the numbering is the only way to achieve what is needed for 3 cores per CCX, as the CPUID Fn8000_001D_EAX NumSharingCache parameter rounds to powers of two (for the Ryzen 3100 with 2 cores per CCX, lowering NumSharingCache should also work, correctly setting the L3 cache cores with their IDs still being sequential).

A small hack in x86_apicid_from_topo_ids() in include/hw/i386/topology.h can introduce a correct numbering (at least if you do not have epyc set as your CPU; otherwise the _epyc variants of the functions are used). But fixing this properly will probably require some thought - maybe introduce the ability to assign APIC IDs directly somehow? Or the ability to specify the 'holes' in the -smp param, or maybe -cpu host,topoext=on should do this automatically? I don't know. For example:
For CPUs with 3 cores per CCX, at include/hw/i386/topology.h:220 change:

    (topo_ids->core_id << apicid_core_offset(topo_info)) |

to:

    ((topo_ids->core_id + (topo_ids->core_id / 3)) << apicid_core_offset(topo_info)) |

The cache topology is now correct (-cpu host,topoext=on,hv-time,hv-relaxed,hv-vapic,hv-spinlocks=0x1fff,host-cache-info=on -smp 18,sockets=1,dies=1,cores=9,threads=2), even in Windows:

Logical Processor to Cache Map:
**      Data Cache 0, Level 1, 32 KB, Assoc 8, LineSize 64
**      Instruction Cache 0, Level 1, 32 KB, Assoc 8, LineSize 64
**      Unified Cache 0, Level 2, 512 KB, Assoc 8, LineSize 64
**      Unified Cache 1, Level 3, 16 MB, Assoc 16, LineSize 64
--**--  Data Cache 1, Level 1, 32 KB, Assoc 8, LineSize 64
--**--  Instruction Cache 1, Level 1, 32 KB, Assoc 8, LineSize 64
--**--  Unified Cache 2, Level 2, 512 KB, Assoc 8, LineSize 64
**      Data Cache 2, Level 1, 32 KB, Assoc 8, LineSize 64
**      Instruction Cache 2, Level 1, 32 KB, Assoc 8, LineSize 64
**      Unified Cache 3, Level 2, 512 KB, Assoc 8, LineSize 64
--**--  Data Cache 3, Level 1, 32 KB, Assoc 8, LineSize 64
--**--  Instruction Cache 3, Level 1, 32 KB, Assoc 8, LineSize 64
--**--  Unified Cache 4, Level 2, 512 KB, Assoc 8, LineSize 64
--**--  Unified Cache 5, Level 3, 16 MB, Assoc 16, LineSize 64
**      Data Cache 4, Level 1, 32 KB, Assoc 8, LineSize 64
**      Instruction Cache 4, Level 1, 32 KB, Assoc 8, LineSize 64
**      Unified Cache 6, Level 2, 512 KB, Assoc 8, LineSize 64
--**--  Data Cache 5, Level 1, 32 KB, Assoc 8, LineSize 64
--**--  Instruction Cache 5, Level 1, 32 KB, Assoc 8, LineSize 64
--**--  Unified Cache 7, Level 2, 512 KB, Assoc 8, LineSize 64
**      Data Cache 6, Level 1, 32 KB, Assoc 8, LineSize 64
**      Instruction Cache 6, Level 1, 32 KB, Assoc 8, LineSize 64
**      Unified Cache 8, Level 2, 512 KB, Assoc 8, LineSize 64
**      Unified Cache 9, Level 3, 16 MB, Assoc 16, LineSize 64

** Attachment added: "cpuid.txt"
   https://bugs.launchpad.net/qemu/+bug/1856335/+attachment/5383184/+files/cpuid.txt
[Bug 1856335] Re: Cache Layout wrong on many Zen Arch CPUs
h-sieger, that is a misunderstanding; read my comment carefully again:

"A workaround for Linux VMs is to disable CPUs (and set their number/pinnings accordingly, e.g. every 4th (and 3rd for the 3100) core is going to be 'dummy' and disabled system-wide) by e.g.

    echo 0 > /sys/devices/system/cpu/cpu3/online

No good workaround for Windows VMs exists, as far as I know - the best you can do is set affinity to specific process(es) and avoid the 'dummy' CPUs, but I am not aware of any possibility to disable specific CPUs (only limiting the overall number)."

I do NOT have a fix - only a very ugly workaround, and for Linux guests only. I cannot fix the cache layout, but on Linux I can get around it by adding dummy CPUs that I then disable in the guest during startup so they are not used - effectively making sure that only the correct 6 vCPUs / 3 cores are used. On Windows, you cannot do that, AFAIK.
[Bug 1856335] Re: Cache Layout wrong on many Zen Arch CPUs
> adds "host-cache-info=on,l3-cache=off" to the qemu -cpu args

I believe l3-cache=off is useless with host-cache-info=on, so host-cache-info=on alone should do what you want.
[Bug 1856335] Re: Cache Layout wrong on many Zen Arch CPUs
Damir: Hm, must be some misconfiguration, then. This is my config for Linux VMs to utilize 3 out of the 4 CCXs. Important parts of the libvirt domain XML:

24 1 hvm /usr/share/ovmf/x64/OVMF_CODE.fd /var/lib/libvirt/qemu/nvram/ccxtest-clone_VARS.fd . . .

The CPUs with cpuset="0,12" are disabled once booted. The host-cache-info=on is the part that makes sure that the cache config is passed to the VM (but it unfortunately does not take disabled cores into account, which results in an incorrect config). The qemu:commandline section is added because I need to add -amd-stibp, otherwise I wouldn't be able to boot. It overrides most parts of the XML.
[Bug 1856335] Re: Cache Layout wrong on many Zen Arch CPUs
No, creating artificial NUMA nodes is, simply put, never a good solution for CPUs that operate as a single NUMA node - which is the case for all Zen2 CPUs (except maybe EPYCs? not sure about those). You may work around the L3 issue that way, but you will hit many new bugs/problems by introducing multiple NUMA nodes, _especially_ on Windows VMs, because that OS has crappy NUMA handling and a multitude of bugs related to it - which was one of the major reasons why even Zen2 Threadrippers are now a single NUMA node (e.g. https://www.servethehome.com/wp-content/uploads/2019/11/AMD-Ryzen-Threadripper-3960X-Topology.png ).

The host CPU architecture should be replicated as closely as possible in the VM, and for Zen2 CPUs with 4 cores per CCX _this already works perfectly_ - there are no problems on 3300X/3700(X)/3800X/3950X/3970X/3990X. There is, unfortunately, no way to customize/specify the 'disabled' CPU cores in QEMU, and therefore no way to emulate 1 NUMA node + L3 cache per 2/3 cores - only to pass through the cache config from the host, which is unfortunately not done correctly for CPUs with disabled cores (but again, works perfectly for CPUs with all 4 cores enabled per CCX).

lscpu:

Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   43 bits physical, 48 bits virtual
CPU(s):                          24
On-line CPU(s) list:             0-23
Thread(s) per core:              2
Core(s) per socket:              12
Socket(s):                       1
NUMA node(s):                    1
Vendor ID:                       AuthenticAMD
CPU family:                      23
Model:                           113
Model name:                      AMD Ryzen 9 3900X 12-Core Processor
Stepping:                        0
Frequency boost:                 enabled
CPU MHz:                         2972.127
CPU max MHz:                     3800.
CPU min MHz:                     2200.
BogoMIPS:                        7602.55
Virtualization:                  AMD-V
L1d cache:                       384 KiB
L1i cache:                       384 KiB
L2 cache:                        6 MiB
L3 cache:                        64 MiB
NUMA node0 CPU(s):               0-23
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; Full AMD retpoline, IBPB conditional, STIBP conditional, RSB filling
Vulnerability Tsx async abort:   Not affected
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate sme ssbd mba sev ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca

But the important thing has already been posted in the previous comments - notice the skipped core ids belonging to the disabled cores in the output of:

    virsh capabilities | grep "cpu id"
[Bug 1856335] Re: Cache Layout wrong on many Zen Arch CPUs
A workaround for Linux VMs is to disable CPUs (and set their number/pinnings accordingly, e.g. every 4th (and 3rd for the 3100) core is going to be 'dummy' and disabled system-wide) by e.g.

    echo 0 > /sys/devices/system/cpu/cpu3/online

No good workaround for Windows VMs exists, as far as I know - the best you can do is set affinity to specific process(es) and avoid the 'dummy' CPUs, but I am not aware of any possibility to disable specific CPUs (only limiting the overall number).
[Bug 1856335] Re: Cache Layout wrong on many Zen Arch CPUs
The problem is that disabled cores are not taken into account. ALL Zen2 CPUs have an L3 cache group per CCX, and every CCX has 4 cores; the problem is that some cores in each CCX (1 for the 6- and 12-core CPUs, 2 for the 3100) are disabled for some models, but they still use their core ids (as can be seen in the virsh capabilities | grep "cpu id" output in the comments above). Looking at target/i386/cpu.c:5529, this is not taken into account. Maybe the cleanest way to fix this is to emulate the host topology by also skipping the disabled core ids in the VM? That way, the die offset will actually match the real host CPU topology...
[Bug 1856335] Re: Cache Layout wrong on many Zen Arch CPUs
Same problem here on 5.0 and a 3900X (3 cores per CCX). And as stated before - declaring NUMA nodes is definitely not the right solution if the aim is to emulate the host CPU as closely as possible.