Your message dated Thu, 18 Mar 2021 09:56:59 +0530
with message-id <87im5pccj0.fsf@bhrigu>
and subject line Re: debian-installer: Script executed in preseed/late_command
on dual CPU socket system sees only Single CPU socket
has caused the Debian Bug report #965263,
regarding debian-installer: Script executed in preseed/late_command on dual CPU
socket system sees only Single CPU socket
to be marked as done.
This means that you claim that the problem has been dealt with.
If this is not the case it is now your responsibility to reopen the
Bug report if necessary, and/or fix the problem forthwith.
(NB: If you are a system administrator and have no idea what this
message is talking about, this may indicate a serious mail system
misconfiguration somewhere. Please contact ow...@bugs.debian.org
immediately.)
--
965263: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=965263
Debian Bug Tracking System
Contact ow...@bugs.debian.org with problems
--- Begin Message ---
Package: debian-installer
Severity: normal
Dear Maintainer,
I recently moved some code into a preseed/late_command script to do the CPU
pinning for the host machine.
The script snippet looks something like this:
NODE_CPUS=2
for node in /sys/devices/system/node/node[0-9]; do
    cat $node/cpu[0-9]*/topology/thread_siblings_list \
        | sort -nu | sed 's/,/\n/' | head -n$NODE_CPUS
done
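For reference, the pipeline above can be wrapped in a small POSIX-shell function so it can be exercised against an arbitrary sysfs-like tree (a sketch; `list_node_cpus` and its root-directory parameter are my own names, not part of the original preseed script):

```shell
#!/bin/sh
# list_node_cpus ROOT N — for each NUMA node directory under ROOT, print the
# first N CPU ids taken from the sorted thread_siblings_list files, exactly
# as the late_command snippet above does against /sys/devices/system/node.
list_node_cpus() {
    root=$1
    n=$2
    for node in "$root"/node[0-9]*; do
        [ -d "$node" ] || continue   # glob may match nothing
        # Each file holds a pair like "0,48"; sort -nu collapses the
        # duplicate entry from the sibling thread, sed splits the pair.
        cat "$node"/cpu[0-9]*/topology/thread_siblings_list \
            | sort -nu | sed 's/,/\n/' | head -n "$n"
    done
}
```

Note that when the kernel exposes only a single node0 directory (as turned out to be the case here), the loop runs once, so all selected CPUs come from node0.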
It captured sibling cores from only node0 of the system. Out of curiosity I
checked the logs we capture on the rsyslog server and found the following:
Jul 14 09:45:18 10.33.97.110 in-target: ++ cat
/sys/devices/system/node/node0/cpu0/topology/thread_siblings_list
/sys/devices/system/node/node0/cpu10/topology/thread_siblings_list
[... identical thread_siblings_list paths for every CPU up through cpu95,
all of them under node0; the long lines were split and truncated mid-path
by rsyslog ...]
/sys/devices/system/node/node0/cpu9/topology/thread_siblings_list
As you can see, every core appears to be in node0. But once the system is
installed and booted, lscpu and numactl show the following:
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 48
49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
node 1 cpus: 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45
46 47 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95
I'm a bit surprised by this behavior: the installed systems all came up pinned
to a single core (2 threads) instead of the intended 2 cores (4 threads).
Is this expected behavior? Do I need any specific configuration to make sure
CPUs are visible across both nodes?
Please let me know if you need more information.
Warm Regards,
Vasudev
-- System Information:
Debian Release: bullseye/sid
APT prefers unstable-debug
APT policy: (500, 'unstable-debug'), (500, 'unstable'), (1,
'experimental-debug'), (1, 'experimental')
Architecture: amd64 (x86_64)
Foreign Architectures: armhf, arm64
Kernel: Linux 5.6.0-2-amd64 (SMP w/8 CPU cores)
Kernel taint flags: TAINT_WARN, TAINT_OOT_MODULE, TAINT_UNSIGNED_MODULE
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8), LANGUAGE=
(charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled
--- End Message ---
--- Begin Message ---
Hi again,
I noticed at a later point that this is not a Debian Installer issue; it is
caused by the kernel command line parameter numa=off passed in the PXE file,
which I had not noticed earlier.
Closing the ticket as it is not really an issue.
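For anyone else hitting the same symptom, a quick check for this culprit is to look for numa=off on the kernel command line. A minimal sketch (numa_disabled is a hypothetical helper name, not an existing tool):

```shell
#!/bin/sh
# numa_disabled CMDLINE — succeed if the given kernel command line contains
# the numa=off parameter, which makes the kernel expose all CPUs under node0.
numa_disabled() {
    printf '%s\n' "$1" | grep -qw 'numa=off'
}

# Typical use against the running kernel:
# numa_disabled "$(cat /proc/cmdline)" && echo "NUMA is disabled"
```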
Cheers,
Vasudev
--- End Message ---