Hi Xiaoyao,
Sure, I can share how I reproduce this issue. 
1. First, I modified hmp_info_registers to pass CPU_DUMP_CODE instead of 
CPU_DUMP_FPU:
diff --git a/monitor/hmp-cmds-target.c b/monitor/hmp-cmds-target.c
index 8eaf70d9c9..a4bb3d715b 100644
--- a/monitor/hmp-cmds-target.c
+++ b/monitor/hmp-cmds-target.c
@@ -102,7 +102,7 @@ void hmp_info_registers(Monitor *mon, const QDict *qdict)
     if (all_cpus) {
         CPU_FOREACH(cs) {
             monitor_printf(mon, "\nCPU#%d\n", cs->cpu_index);
-            cpu_dump_state(cs, NULL, CPU_DUMP_FPU);
+            cpu_dump_state(cs, NULL, CPU_DUMP_CODE);
         }
     } else {
         cs = vcpu >= 0 ? qemu_get_cpu(vcpu) : mon_get_cpu(mon);
@@ -117,7 +117,7 @@ void hmp_info_registers(Monitor *mon, const QDict *qdict)
         }
 
         monitor_printf(mon, "\nCPU#%d\n", cs->cpu_index);
-        cpu_dump_state(cs, NULL, CPU_DUMP_FPU);
+        cpu_dump_state(cs, NULL, CPU_DUMP_CODE);
     }
 }
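
For context, why this flag matters (my understanding, as far as I can tell 
from the code): with CPU_DUMP_CODE, x86_cpu_dump_state additionally reads the 
instruction bytes at CS:RIP via cpu_memory_rw_debug(). While the vCPU is in 
SMM, the x86 asidx-from-attrs hook selects the SMM address space index, but 
under KVM the CPU registers only one address space, so the bounds assertion 
trips. Below is a toy standalone model of that path; only the X86ASIdx_* 
names and the shape of the assert mirror QEMU, everything else is made up 
for illustration:

/* asidx-toy.c -- standalone sketch, not actual QEMU code.
 * Models: KVM registers one address space per vCPU, but the x86
 * asidx-from-attrs hook returns index 1 (SMM) while in SMM.
 * Build & run: cc -o asidx-toy asidx-toy.c && ./asidx-toy
 */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

enum { X86ASIdx_MEM = 0, X86ASIdx_SMM = 1 };

struct toy_cpu {
    int num_ases;   /* 1 under KVM before the fix; 2 under TCG */
    bool smm;       /* vCPU currently in System Management Mode */
};

static int toy_asidx_from_attrs(const struct toy_cpu *cpu)
{
    int ret = cpu->smm ? X86ASIdx_SMM : X86ASIdx_MEM;
    assert(ret >= 0 && ret < cpu->num_ases);   /* this is what fires */
    return ret;
}

int main(void)
{
    struct toy_cpu tcg = { .num_ases = 2, .smm = true };
    printf("TCG-like vCPU in SMM -> asidx %d (fine)\n",
           toy_asidx_from_attrs(&tcg));

    struct toy_cpu kvm = { .num_ases = 1, .smm = true };
    printf("KVM-like vCPU in SMM -> asidx %d\n",
           toy_asidx_from_attrs(&kvm));        /* aborts before printing */
    return 0;
}

As I understand it, Xiaoyao's patch effectively moves the KVM case to the 
TCG-like row above, i.e. two address spaces per vCPU.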

2. Run this from the command line (the yes pipe keeps feeding "info registers" 
to the HMP monitor on stdin, so the dump path runs in a tight loop):
# yes "info registers" | sudo ./qemu-system-x86_64 -accel kvm -monitor stdio 
-global driver=cfi.pflash01,property=secure,value=on -blockdev "{'driver': 
'file', 'filename': '/usr/share/OVMF/OVMF_CODE_4M.secboot.fd', 'node-name': 
'ovmf-code', 'read-only': true}" -blockdev "{'driver': 'file', 'filename': 
'/usr/share/OVMF/OVMF_VARS_4M.fd', 'node-name': 'ovmf-vars', 'read-only': 
true}" -machine q35,smm=on,pflash0=ovmf-code,pflash1=ovmf-vars -m 2G -nodefaults


The assert should reproduce within 10-15 seconds.
Not sure whether this is an important detail, but I run this QEMU command 
inside a QEMU-based virtual machine with nested virtualization enabled.


> On 29 Jul 2025, at 09:01, Xiaoyao Li <xiaoyao...@intel.com> wrote:
> 
> On 7/29/2025 12:19 AM, Zhao Liu wrote:
>> Hi Kirill,
>> On Mon, Jul 28, 2025 at 05:44:25PM +0300, Kirill Martynov wrote:
>>> 
>>> Hi Xiaoyao!
>>> Hi Zhao!
>>> 
>>> Xiaoyao,
>>> I tested the patch you provided; it works smoothly and is easy to apply.
>>> Nothing to complain about.
>>> 
>>> Zhao,
>>> I also tried your approach (extending cpu_address_space_init with an
>>> AddressSpace parameter). First, it crashed in malloc with the error:
>>> malloc(): unaligned tcache chunk detected
>>> After a little investigation I resized the cpu->cpu_ases array so it can
>>> fit a second element, and it started working. However, it looks like
>>> cpu_address_space_destroy needs some adjustment: it now treats the
>>> cpu->cpu_ases elements as dynamically allocated and destroys them with
>>> g_free(), so passing &smram_address_space to cpu_address_space_init() in
>>> register_smram_listener() could lead to a problem, since that object is
>>> statically allocated in the binary.
>> Thanks for testing. Yes, the resize-related details are needed, which I
>> missed. These 2 patches are essentially about adding an SMM CPU address
>> space for KVM, like TCG does.
>>> So, my question now, what should I do?
> 
> I just sent the formal version [*], could you please help verify whether it 
> resolves your problem?
> 
> (If you can share the steps to reproduce the original problem, I can test it 
> myself.)
> 
> [*] https://lore.kernel.org/all/20250729054023.1668443-2-xiaoyao...@intel.com/
> 
>> I still believe we should update cpu_address_space_init() and remove its
>> outdated assumptions about KVM first.
>> Moreover, users should have control over the added address spaces (I
>> think this is why num_ases should be set before
>> cpu_address_space_init()), and quietly updating num_ases is not a good
>> idea.
>> The question of whether to reuse smram_address_space for the CPU is
>> flexible. At least TCG doesn't reuse the same SMM space, and there's
>> already cpu_as_root (and cpu_as_mem!) in X86CPU. 
> 
> For i386 TCG, it allocates each CPU 3 MemoryRegions: cpu_as_root, cpu_as_mem, 
> and smram for SMM. For i386 KVM, it allocates global MemoryRegions, 
> smram_as_root and smram_as_mem, and gets smram by resolving "/machine/smram".
> 
> Yeah, this seems like something we could clean up, if there is no specific 
> reason for TCG to have a separate MemoryRegion per CPU. I don't have the 
> bandwidth to investigate it further.
> 
>> There are also some cleanup things worth considering, such as how to better
>> handle the TCG memory listener in cpu_address_space_init() - KVM also has
>> similar logic. If possible, I can help you further refine this fix and
>> clean up other related stuff in one go as well.
>> Thanks,
>> Zhao
