On Wed, 18 Sep 2024 20:22:26 +0800 Chuang Xu <xuchuangxc...@bytedance.com> wrote:
> Hi, Igor:
> 
> On 2024/9/18 8:02 PM, Igor Mammedov wrote:
> > On Sat, 14 Sep 2024 19:01:27 +0800
> > Chuang Xu <xuchuangxc...@bytedance.com> wrote:
> >
> >> When QEMU is started with:
> >> -cpu host,migratable=on,host-cache-info=on,l3-cache=off
> >> -smp 180,sockets=2,dies=1,cores=45,threads=2
> >>
> >> Execute "cpuid -1 -l 1 -r" in guest, we'll get:
> >> eax=0x000806f8 ebx=0x465a0800 ecx=0xfffaba1f edx=0x3fa9fbff
> >> CPUID.01H.EBX[23:16] is 90, while the expected value is 128.
> >>
> >> Execute "cpuid -1 -l 4 -r" in guest, we'll get:
> >> eax=0xfc004121 ebx=0x02c0003f ecx=0x0000003f edx=0x00000000
> >> CPUID.04H.EAX[31:26] is 63, which is as expected.
> >>
> >> As (1 + CPUID.04H.EAX[31:26]) is rounded up to the nearest power-of-2
> >> integer, we'd better round up CPUID.01H.EBX[23:16] to the nearest
> >> power-of-2 integer too. Otherwise we may encounter unexpected results
> >> in the guest.
> >>
> >> For example, when QEMU is started with the CLI above and xtopology is
> >> disabled, guest kernel 5.15.120 uses
> >> CPUID.01H.EBX[23:16] / (1 + CPUID.04H.EAX[31:26]) to calculate
> >> threads-per-core in detect_ht(). The guest then gets "90/(1+63)=1"
> >> as the result, even though threads-per-core should actually be 2.
> >>
> >> So let us round up CPUID.01H.EBX[23:16] to the nearest power-of-2
> >> integer to fix the unexpected result.
> >>
> >> Signed-off-by: Guixiong Wei <weiguixi...@bytedance.com>
> >> Signed-off-by: Yipeng Yin <yinyip...@bytedance.com>
> >> Signed-off-by: Chuang Xu <xuchuangxc...@bytedance.com>
> >> ---
> >>  target/i386/cpu.c | 8 +++++++-
> >>  1 file changed, 7 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/target/i386/cpu.c b/target/i386/cpu.c
> >> index 4c2e6f3a71..24d60ead9e 100644
> >> --- a/target/i386/cpu.c
> >> +++ b/target/i386/cpu.c
> >> @@ -261,6 +261,12 @@ static uint32_t max_thread_ids_for_cache(X86CPUTopoInfo *topo_info,
> >>      return num_ids - 1;
> >>  }
> >>
> >> +static uint32_t max_thread_number_in_package(X86CPUTopoInfo *topo_info)
> >> +{
> >> +    uint32_t num_threads = 1 << apicid_pkg_offset(topo_info);
> >> +    return num_threads;
> >> +}
> >> +
> >>  static uint32_t max_core_ids_in_package(X86CPUTopoInfo *topo_info)
> >>  {
> >>      uint32_t num_cores = 1 << (apicid_pkg_offset(topo_info) -
> >> @@ -6417,7 +6423,7 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
> >>          }
> >>          *edx = env->features[FEAT_1_EDX];
> >>          if (threads_per_pkg > 1) {
> >> -            *ebx |= threads_per_pkg << 16;
> >> +            *ebx |= max_thread_number_in_package(&topo_info) << 16;
> > 
> > why not use pow2ceil(threads_per_pkg) instead?
> 
> I saw in the latest code that calculations of cpuids involving CPU
> topology all use topo_info, so to keep the code style consistent I also
> used topo_info for the calculation.

And we end up with a zoo of ways different topo stuff is calculated.
Given we already have threads_per_pkg calculated within the function,
it is cleaner/more self-documenting to reuse it with pow2ceil() instead
of adding yet another helper with the less than obvious
'1 << apicid_pkg_offset(topo_info)' math.

> >
> >>              *edx |= CPUID_HT;
> >>          }
> >>          if (!cpu->enable_pmu)