Hello Jirka

I don't think there's a bug here.

physical_package_id values don't have to be between 0 and N-1; they just
have to be distinct so that packages (and cores across packages) can be
told apart. Other values are uncommon on x86 but quite common on POWER,
at least.

core_id is even worse. Fortunately, these values are basically not used
at all. They are often the same in both sockets, and often
discontiguous within a socket (maybe because CPU vendors disable
specific cores in the middle of the die when your CPU doesn't have the
maximum number of cores). On a dual-socket 20-core Xeon (Cascade Lake),
both sockets have these core_ids:
0,4,1,3,2,12,8,11,9,10,16,20,17,19,18,28,24,27,25,26 (5-7, 13-15 and
21-23 are missing).
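A minimal sketch (not hwloc's actual code) of how a consumer can cope with
this: treat the raw IDs as opaque, key cores by the (package_id, core_id)
pair, and remap whatever distinct values the kernel reports to dense
logical indices. The ID values below are taken from this thread; the
dense_index helper is hypothetical.

```python
def dense_index(raw_ids):
    """Map each distinct raw ID to a logical index 0..N-1, in first-seen order."""
    mapping = {}
    for rid in raw_ids:
        mapping.setdefault(rid, len(mapping))
    return mapping

# Package IDs as reported on the ThunderX2 box discussed below:
packages = dense_index([36, 3180])
print(packages)  # {36: 0, 3180: 1}

# core_ids repeat across sockets and are discontiguous within one
# (the Cascade Lake values quoted above), so cores must be keyed by
# (package_id, core_id) rather than by core_id alone:
cascade_core_ids = [0, 4, 1, 3, 2, 12, 8, 11, 9, 10,
                    16, 20, 17, 19, 18, 28, 24, 27, 25, 26]
cores = dense_index((pkg, c) for pkg in (0, 1) for c in cascade_core_ids)
print(len(cores))  # 40 distinct cores across both sockets
```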

PUs and NUMA nodes often have contiguous OS indexes, but not necessarily
in order either.

FWIW, I get the same values as yours on a Gigabyte platform with 2x
ThunderX2 running RHEL7 4.14 kernel.

Brice



On 06/09/2019 at 15:29, Jiri Hladky wrote:
> Hi all! 
>
> We are seeing strange CPU topology/numbering on a dual-socket ARM
> server with 2×ThunderX2 CN9975 CPU [0].
>
> Package IDs:
> 36 and 3180
> cd /sys/devices/system/cpu
> $ cat cpu0/topology/physical_package_id
> 36
> Expected values: 0 and 1
>
> Core IDs on the second socket:
> 256-283
> $ cat cpu112/topology/core_id
> 256
> $ cat cpu223/topology/core_id
> 283
>
> Expected values for the second socket:
> 28 - 55
>
> (On the first socket, the core numbering is OK - 0-27)
>
> I assume this is a Linux kernel bug. Have you seen anything like this
> in the past? What might be the root cause, a Linux kernel bug or
> perhaps a BIOS issue?
>
> We see it on 5.3.0-0.rc7 and 4.18 kernels. I'm attaching lstopo and
> gather-topology output. I would appreciate any feedback on that. 
>
> Thank you!
> Jirka
>
>
> [0]
> https://en.wikichip.org/wiki/cavium/thunderx2
>
>
> _______________________________________________
> hwloc-devel mailing list
> hwloc-devel@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/hwloc-devel