Here is the output of lstopo:

$ lstopo -p
Machine (63GB)
  Package P#0 + L3 (16MB)
    L2 (4096KB) + L1d (32KB) + L1i (32KB) + Core P#0 + PU P#0
    L2 (4096KB) + L1d (32KB) + L1i (32KB) + Core P#1 + PU P#1
    L2 (4096KB) + L1d (32KB) + L1i (32KB) + Core P#2 + PU P#2
    L2 (4096KB) + L1d (32KB) + L1i (32KB) + Core P#3 + PU P#3
  Package P#1 + L3 (16MB)
    L2 (4096KB) + L1d (32KB) + L1i (32KB) + Core P#0 + PU P#4
    L2 (4096KB) + L1d (32KB) + L1i (32KB) + Core P#1 + PU P#5
    L2 (4096KB) + L1d (32KB) + L1i (32KB) + Core P#2 + PU P#6
    L2 (4096KB) + L1d (32KB) + L1i (32KB) + Core P#3 + PU P#7
  Package P#2 + L3 (16MB)
    L2 (4096KB) + L1d (32KB) + L1i (32KB) + Core P#0 + PU P#8
    L2 (4096KB) + L1d (32KB) + L1i (32KB) + Core P#1 + PU P#9
    L2 (4096KB) + L1d (32KB) + L1i (32KB) + Core P#2 + PU P#10
    L2 (4096KB) + L1d (32KB) + L1i (32KB) + Core P#3 + PU P#11
  Package P#3 + L3 (16MB)
    L2 (4096KB) + L1d (32KB) + L1i (32KB) + Core P#0 + PU P#12
    L2 (4096KB) + L1d (32KB) + L1i (32KB) + Core P#1 + PU P#13
    L2 (4096KB) + L1d (32KB) + L1i (32KB) + Core P#2 + PU P#14
    L2 (4096KB) + L1d (32KB) + L1i (32KB) + Core P#3 + PU P#15
  HostBridge P#0
    PCI 8086:7010
      Block(Removable Media Device) "sr0"
    PCI 1234:1111
      GPU "card0"
      GPU "controlD64"
    PCI 1af4:1004
    PCI 1af4:1000

Michael Tie
Technical Director
Mathematics, Statistics, and Computer Science
One North College Street    phn: 507-222-4067
Northfield, MN 55057        cel: 952-212-8933
m...@carleton.edu           fax: 507-222-4312

On Tue, Mar 10, 2020 at 12:21 AM Chris Samuel <ch...@csamuel.org> wrote:

> On 9/3/20 7:44 am, mike tie wrote:
>
> > Specifically, how is slurmd -C getting that info? Maybe this is a
> > kernel issue, but other than lscpu and /proc/cpuinfo, I don't know where
> > to look. Maybe I should be looking at the slurmd source?
>
> It would be worth looking at what something like "lstopo" from the hwloc
> package says about your VM.
>
> All the best,
> Chris
> --
> Chris Samuel : http://www.csamuel.org/ : Berkeley, CA, USA
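As a quick sanity check on what the VM presents, the topology objects in that lstopo output can be tallied the same way an hwloc-based probe (such as slurmd's) would see them. This is just a sketch that parses the text pasted above (I/O devices omitted), not slurmd's actual detection code:

```python
import re

# lstopo -p output from the VM, pasted from above (I/O devices omitted)
LSTOPO = """\
Machine (63GB)
  Package P#0 + L3 (16MB)
    L2 (4096KB) + L1d (32KB) + L1i (32KB) + Core P#0 + PU P#0
    L2 (4096KB) + L1d (32KB) + L1i (32KB) + Core P#1 + PU P#1
    L2 (4096KB) + L1d (32KB) + L1i (32KB) + Core P#2 + PU P#2
    L2 (4096KB) + L1d (32KB) + L1i (32KB) + Core P#3 + PU P#3
  Package P#1 + L3 (16MB)
    L2 (4096KB) + L1d (32KB) + L1i (32KB) + Core P#0 + PU P#4
    L2 (4096KB) + L1d (32KB) + L1i (32KB) + Core P#1 + PU P#5
    L2 (4096KB) + L1d (32KB) + L1i (32KB) + Core P#2 + PU P#6
    L2 (4096KB) + L1d (32KB) + L1i (32KB) + Core P#3 + PU P#7
  Package P#2 + L3 (16MB)
    L2 (4096KB) + L1d (32KB) + L1i (32KB) + Core P#0 + PU P#8
    L2 (4096KB) + L1d (32KB) + L1i (32KB) + Core P#1 + PU P#9
    L2 (4096KB) + L1d (32KB) + L1i (32KB) + Core P#2 + PU P#10
    L2 (4096KB) + L1d (32KB) + L1i (32KB) + Core P#3 + PU P#11
  Package P#3 + L3 (16MB)
    L2 (4096KB) + L1d (32KB) + L1i (32KB) + Core P#0 + PU P#12
    L2 (4096KB) + L1d (32KB) + L1i (32KB) + Core P#1 + PU P#13
    L2 (4096KB) + L1d (32KB) + L1i (32KB) + Core P#2 + PU P#14
    L2 (4096KB) + L1d (32KB) + L1i (32KB) + Core P#3 + PU P#15
"""

# Count the topology objects (sockets, cores, logical processors)
packages = len(re.findall(r"Package P#\d+", LSTOPO))
cores = len(re.findall(r"\bCore P#\d+", LSTOPO))
pus = len(re.findall(r"\bPU P#\d+", LSTOPO))
print(f"{packages} packages, {cores // packages} cores/package, {pus} PUs total")
# → 4 packages, 4 cores/package, 16 PUs total
```

So the VM is exposing 4 sockets x 4 cores with 1 thread per core; whatever slurmd -C reports should be compared against those numbers.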