On Sun, May 16, 2021 at 06:28:59PM +0800, Yanan Wang wrote:
> From: Andrew Jones <drjo...@redhat.com>
>
> Add the Processor Properties Topology Table (PPTT) to expose
> CPU topology information defined by users to ACPI guests.
>
> Note, a DT-boot Linux guest with a non-flat CPU topology will
> see socket and core IDs being sequential integers starting
> from zero, which is different from an ACPI-boot Linux guest,
> e.g. with -smp 4,sockets=2,cores=2,threads=1
>
> a DT boot produces:
>
>   cpu: 0 package_id: 0 core_id: 0
>   cpu: 1 package_id: 0 core_id: 1
>   cpu: 2 package_id: 1 core_id: 0
>   cpu: 3 package_id: 1 core_id: 1
>
> an ACPI boot produces:
>
>   cpu: 0 package_id: 36 core_id: 0
>   cpu: 1 package_id: 36 core_id: 1
>   cpu: 2 package_id: 96 core_id: 2
>   cpu: 3 package_id: 96 core_id: 3
>
> This is due to several reasons:
>
> 1) DT cpu nodes do not have an equivalent field to what the PPTT
>    ACPI Processor ID must be, i.e. something equal to the MADT CPU
>    UID or equal to the UID of an ACPI processor container. In both
>    ACPI cases those are platform-dependent IDs assigned by the
>    vendor.
>
> 2) While QEMU is the vendor for a guest, if the topology specifies
>    SMT (> 1 thread), then, with ACPI, it is impossible to assign a
>    core-id the same value as a package-id, thus it is not possible
>    to have package-id=0 and core-id=0. This is because package and
>    core containers must be in the same ACPI namespace and therefore
>    must have unique UIDs.
>
> 3) ACPI processor containers are not mandatory for PPTT tables to
>    be used and, due to the limitations on which IDs can be selected
>    described above in (2), they are not helpful for QEMU, so we
>    don't build them with this patch. In their absence, Linux
>    assigns its own unique IDs. The maintainers have chosen not to
>    use counters starting from zero, but rather ACPI table offsets,
>    which explains why the numbers are so much larger than with DT.
>
> 4) When there is no SMT (threads=1), the core IDs for ACPI-boot
>    guests match the logical CPU IDs, because these IDs must be
>    equal to the MADT CPU UID (as no processor containers are
>    present), and QEMU uses the logical CPU ID for these MADT IDs.
>
> So, in summary, with QEMU as the vendor for the guest, we use
> sequential integers starting from zero for non-leaf nodes, without
> the valid-ID flag set, so that the guest will ignore them and use
> table offsets as the unique IDs. And we also use logical CPU IDs
> for leaf nodes, to be consistent with the MADT.
>
> Signed-off-by: Andrew Jones <drjo...@redhat.com>
> Co-developed-by: Yanan Wang <wangyana...@huawei.com>
> Signed-off-by: Yanan Wang <wangyana...@huawei.com>
> ---
>  hw/arm/virt-acpi-build.c | 58 +++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 57 insertions(+), 1 deletion(-)

Why aren't we adding build_pptt() to aml-build.c, like my original patch
does? I don't see anything Arm specific below, at least not if you pass
a MachineState instead of a VirtMachineState, as my original patch did.
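Untested, and assuming we simply pass the OEM strings in as parameters
(I've also already adjusted the threads check; more on that further
down), roughly what I have in mind is

/* ACPI 6.2: 5.2.29 Processor Properties Topology Table (PPTT) */
void build_pptt(GArray *table_data, BIOSLinker *linker, MachineState *ms,
                const char *oem_id, const char *oem_table_id)
{
    int pptt_start = table_data->len;
    int uid = 0, socket;

    acpi_data_push(table_data, sizeof(AcpiTableHeader));

    for (socket = 0; socket < ms->smp.sockets; socket++) {
        uint32_t socket_offset = table_data->len - pptt_start;
        int core;

        /*
         * One physical-package node per socket. The ID is not marked
         * valid, so the guest generates its own (table-offset based)
         * package IDs.
         */
        build_processor_hierarchy_node(
            table_data,
            (1 << 0), /* ACPI 6.2 - Physical package */
            0, socket, NULL, 0);

        for (core = 0; core < ms->smp.cores; core++) {
            uint32_t core_offset = table_data->len - pptt_start;
            int thread;

            if (ms->smp.threads == 1) {
                /*
                 * No SMT: cores are the leaves, and their UIDs must
                 * match the MADT CPU UIDs, i.e. the logical CPU IDs.
                 */
                build_processor_hierarchy_node(
                    table_data,
                    (1 << 1) | /* ACPI 6.2 - ACPI Processor ID valid */
                    (1 << 3),  /* ACPI 6.3 - Node is a Leaf */
                    socket_offset, uid++, NULL, 0);
            } else {
                /*
                 * SMT: cores are intermediate nodes; the threads are
                 * the leaves carrying the MADT-matching UIDs.
                 */
                build_processor_hierarchy_node(table_data, 0, socket_offset,
                                               core, NULL, 0);

                for (thread = 0; thread < ms->smp.threads; thread++) {
                    build_processor_hierarchy_node(
                        table_data,
                        (1 << 1) | /* ACPI 6.2 - ACPI Processor ID valid */
                        (1 << 2) | /* ACPI 6.3 - Processor is a Thread */
                        (1 << 3), /* ACPI 6.3 - Node is a Leaf */
                        core_offset, uid++, NULL, 0);
                }
            }
        }
    }

    build_header(linker, table_data,
                 (void *)(table_data->data + pptt_start), "PPTT",
                 table_data->len - pptt_start, 2, oem_id, oem_table_id);
}

The call site here then reduces to

    build_pptt(tables_blob, tables->linker, MACHINE(vms),
               vms->oem_id, vms->oem_table_id);

and other machine types could reuse the same function later.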
>
> diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
> index 4d64aeb865..b03d57745a 100644
> --- a/hw/arm/virt-acpi-build.c
> +++ b/hw/arm/virt-acpi-build.c
> @@ -435,6 +435,57 @@ build_srat(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
>                    vms->oem_table_id);
>  }
>
> +/* ACPI 6.2: 5.2.29 Processor Properties Topology Table (PPTT) */
> +static void build_pptt(GArray *table_data, BIOSLinker *linker,
> +                       VirtMachineState *vms)
> +{
> +    MachineState *ms = MACHINE(vms);
> +    int pptt_start = table_data->len;
> +    int uid = 0, socket;
> +
> +    acpi_data_push(table_data, sizeof(AcpiTableHeader));
> +
> +    for (socket = 0; socket < ms->smp.sockets; socket++) {
> +        uint32_t socket_offset = table_data->len - pptt_start;
> +        int core;
> +
> +        build_processor_hierarchy_node(
> +            table_data,
> +            (1 << 0), /* ACPI 6.2 - Physical package */
> +            0, socket, NULL, 0);
> +
> +        for (core = 0; core < ms->smp.cores; core++) {
> +            uint32_t core_offset = table_data->len - pptt_start;
> +            int thread;
> +
> +            if (ms->smp.threads <= 1) {

We can't have threads < 1, so this condition should be == 1.

> +                build_processor_hierarchy_node(
> +                    table_data,
> +                    (1 << 1) | /* ACPI 6.2 - ACPI Processor ID valid */
> +                    (1 << 3),  /* ACPI 6.3 - Node is a Leaf */
> +                    socket_offset, uid++, NULL, 0);
> +            } else {
> +                build_processor_hierarchy_node(table_data, 0, socket_offset,
> +                                               core, NULL, 0);
> +
> +                for (thread = 0; thread < ms->smp.threads; thread++) {
> +                    build_processor_hierarchy_node(
> +                        table_data,
> +                        (1 << 1) | /* ACPI 6.2 - ACPI Processor ID valid */
> +                        (1 << 2) | /* ACPI 6.3 - Processor is a Thread */
> +                        (1 << 3), /* ACPI 6.3 - Node is a Leaf */
> +                        core_offset, uid++, NULL, 0);
> +                }
> +            }
> +        }
> +    }
> +
> +    build_header(linker, table_data,
> +                 (void *)(table_data->data + pptt_start), "PPTT",
> +                 table_data->len - pptt_start, 2,
> +                 vms->oem_id, vms->oem_table_id);
> +}
> +
>  /* GTDT */
>  static void
>  build_gtdt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
> @@ -719,13 +770,18 @@ void virt_acpi_build(VirtMachineState *vms, AcpiBuildTables *tables)
>      dsdt = tables_blob->len;
>      build_dsdt(tables_blob, tables->linker, vms);
>
> -    /* FADT MADT GTDT MCFG SPCR pointed to by RSDT */
> +    /* FADT MADT PPTT GTDT MCFG SPCR pointed to by RSDT */
>      acpi_add_table(table_offsets, tables_blob);
>      build_fadt_rev5(tables_blob, tables->linker, vms, dsdt);
>
>      acpi_add_table(table_offsets, tables_blob);
>      build_madt(tables_blob, tables->linker, vms);
>
> +    if (!vmc->no_cpu_topology) {
> +        acpi_add_table(table_offsets, tables_blob);
> +        build_pptt(tables_blob, tables->linker, vms);
> +    }
> +
>      acpi_add_table(table_offsets, tables_blob);
>      build_gtdt(tables_blob, tables->linker, vms);
>
> --
> 2.19.1
>

Thanks,
drew
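P.S. For anyone following this thread without the earlier patches
applied: build_processor_hierarchy_node() is added to hw/acpi/aml-build.c
by an earlier patch in this series. From memory it looks roughly like the
below (ACPI 6.3, 5.2.29.1 Processor hierarchy node structure, Type 0);
check that patch for the authoritative version:

void build_processor_hierarchy_node(GArray *tbl, uint32_t flags,
                                    uint32_t parent_offset, uint32_t uid,
                                    uint32_t *priv_rsrc, uint32_t priv_num)
{
    int i;

    build_append_byte(tbl, 0);                 /* Type 0 - processor */
    build_append_byte(tbl, 20 + priv_num * 4); /* Length */
    build_append_int_noprefix(tbl, 0, 2);      /* Reserved */
    build_append_int_noprefix(tbl, flags, 4);  /* Flags */
    build_append_int_noprefix(tbl, parent_offset, 4); /* Parent */
    build_append_int_noprefix(tbl, uid, 4);    /* ACPI Processor ID */

    /* Number of private resources */
    build_append_int_noprefix(tbl, priv_num, 4);

    /* Private resources[N] */
    for (i = 0; i < priv_num; i++) {
        build_append_int_noprefix(tbl, priv_rsrc[i], 4);
    }
}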