Correction: I have an E5-2620 v4, which is an 8-core Broadwell part; a
two-socket node with hyperthreading would then have 32 PUs, which would
fit the 8-15,24-31 numbering reported below. Please excuse my earlier error.
On Wed, Aug 02, 2023 at 01:23:18PM, Max R. Dechantsreiter wrote:
> Hi Brice,
>
> Well, the VPS gives me a 4-core slice of an Intel(R) Xeon(R)
> CPU E5-2620 node, which is Sandy Bridge EP, with 6 physical
> cores [...]
Hi Brice,
Setting HWLOC_ALLOW=all made hwloc usable on my oddly-configured VPS:
./lstopo
Machine (4096MB total) + Package L#0
  NUMANode L#0 (P#0 4096MB)
  L3 L#0 (20MB)
    L2 L#0 (256KB) + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0 + PU L#0 (P#0)
    L2 L#1 (256KB) + L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1 + PU L#1 (P#1)
    L2 L#2 (256KB) + L1d L#2 (32KB) + L1i L#2 (32KB) + Core L#2 + PU L#2 (P#2)
    L2 L#3 (256KB) + L1d L#3 (32KB) + L1i L#3 (32KB) + Core L#3 + PU L#3 (P#3)
The cgroup information under /sys/fs/cgroup/ should be fixed.
cpuset.cpus should contain 0-3 and cpuset.mems should contain 0. In the
meantime, hwloc may ignore this cgroup info if you set HWLOC_ALLOW=all
in the environment.
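For completeness, the same workaround can be applied from code as well; a
minimal untested sketch (the variable must be set before
hwloc_topology_load() reads it):

#include <stdio.h>
#include <stdlib.h>
#include <hwloc.h>

int main(void)
{
    hwloc_topology_t topo;

    /* Equivalent of running with HWLOC_ALLOW=all in the shell:
     * it has to be in the environment before the topology is loaded. */
    setenv("HWLOC_ALLOW", "all", 1);

    hwloc_topology_init(&topo);
    hwloc_topology_load(topo);

    printf("%d PUs visible\n",
           hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_PU));

    hwloc_topology_destroy(topo);
    return 0;
}

(Build with something like: gcc test.c -lhwloc)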
The x86 CPUID information is also wrong on this machine. All 4 cores [...]
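One way to see what the cores actually report is to bind to each PU in
turn and read the initial APIC id from CPUID leaf 1 (EBX bits 31:24). An
untested sketch, assuming GCC or Clang on x86; note that a hypervisor may
intercept CPUID, so the values can differ from bare metal:

#include <stdio.h>
#include <cpuid.h>
#include <hwloc.h>

int main(void)
{
    hwloc_topology_t topo;
    int i, n;

    hwloc_topology_init(&topo);
    hwloc_topology_load(topo);

    n = hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_PU);
    for (i = 0; i < n; i++) {
        hwloc_obj_t pu = hwloc_get_obj_by_type(topo, HWLOC_OBJ_PU, i);
        unsigned eax, ebx, ecx, edx;

        /* Run the CPUID instruction on this particular PU. */
        hwloc_set_cpubind(topo, pu->cpuset, HWLOC_CPUBIND_THREAD);
        if (__get_cpuid(1, &eax, &ebx, &ecx, &edx))
            printf("PU P#%u: initial APIC id %u\n",
                   pu->os_index, ebx >> 24);
    }

    hwloc_topology_destroy(topo);
    return 0;
}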
Hi Brice,
Well, the VPS gives me a 4-core slice of an Intel(R) Xeon(R)
CPU E5-2620 node, which is Sandy Bridge EP, with 6 physical
cores, so probably 12 cores on the node. The numbering does look
wacky, though: it seems to describe a node with 2 8-core CPUs.
This is the VPS on which I host my Web site; [...]
Hello
There's something wrong in this machine. It exposes 4 cores (numbered 0
to 3) and no NUMA node, but says the only allowed resources are cores
8-15,24-31 and NUMA node 1. That's why hwloc says the topology is empty
(running lstopo --disallowed shows NUMA 0 and cores 0-3 in red, which
means they exist but are not allowed).
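Both views can also be compared programmatically; a minimal untested
sketch that uses hwloc 2.x's HWLOC_TOPOLOGY_FLAG_INCLUDE_DISALLOWED flag
to print the complete and allowed cpusets side by side:

#include <stdio.h>
#include <stdlib.h>
#include <hwloc.h>

int main(void)
{
    hwloc_topology_t topo;
    char *complete, *allowed;

    hwloc_topology_init(&topo);
    /* Keep disallowed objects in the topology instead of dropping them. */
    hwloc_topology_set_flags(topo, HWLOC_TOPOLOGY_FLAG_INCLUDE_DISALLOWED);
    hwloc_topology_load(topo);

    hwloc_bitmap_asprintf(&complete, hwloc_topology_get_complete_cpuset(topo));
    hwloc_bitmap_asprintf(&allowed, hwloc_topology_get_allowed_cpuset(topo));
    printf("complete cpuset: %s\nallowed cpuset:  %s\n", complete, allowed);

    free(complete);
    free(allowed);
    hwloc_topology_destroy(topo);
    return 0;
}

On this machine the two sets should not intersect, which is exactly the
condition that makes the default topology empty.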
Hello,
On my VPS I tested my build of hwloc-2.9.2 by running lstopo:
./lstopo
hwloc: Topology became empty, aborting!
Segmentation fault
On a GCP n1-standard-2, a similar build (GCC 12.2 instead of 13.2) seemed to work:
./lstopo
hwloc/nvml: Failed to initialize with nvmlInit(): Driver Not Loaded