Hi,
I've just noticed that our main SLURM partition seems to be twice as full
as I expected, and then I noticed the following message in slurmd.log:
[2013-04-24T10:50:44+03:00] scaling CPU count by factor of 2
I've checked all the currently running jobs and verified they're all using
NumCPUs=1, but this log message suggests the CPUs are being counted twice.
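In case it helps anyone reproduce the check: this is roughly how I tallied the NumCPUs values (a sketch only; it is fed two made-up job records here, but in practice you would pipe in `scontrol show jobs`, whose field format this assumes):

```shell
# Tally how many running jobs request each NumCPUs value.
# Real usage: scontrol show jobs | tally_numcpus
tally_numcpus() {
  grep -o 'NumCPUs=[0-9]*' | sort | uniq -c
}

# Demonstration with two hypothetical job records:
tally_numcpus <<'EOF'
JobId=101 Name=a NumCPUs=1 Partition=main
JobId=102 Name=b NumCPUs=1 Partition=main
EOF
```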
This is my scontrol show node:
NodeName=pez015 Arch=x86_64 CoresPerSocket=6
CPUAlloc=12 CPUErr=0 CPUTot=12 Features=(null)
Gres=(null)
NodeAddr=pez015 NodeHostName=pez015
OS=Linux RealMemory=48128 Sockets=2
State=ALLOCATED ThreadsPerCore=1 TmpDisk=61440 Weight=1
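The CPUTot above is at least self-consistent with the board layout, since Sockets x CoresPerSocket x ThreadsPerCore = 2 x 6 x 1 = 12. A quick way to make that check explicit (a sketch; it parses the field names exactly as scontrol prints them above, and a mismatch here is the sort of thing I'd expect the "scaling CPU count" message to point at):

```shell
# Recompute the expected CPU count from the scontrol show node fields
# and print it next to CPUTot for comparison.
check_cpus() {
  awk '
    { for (i = 1; i <= NF; i++) { n = split($i, kv, "="); if (n == 2) a[kv[1]] = kv[2] } }
    END {
      expected = a["Sockets"] * a["CoresPerSocket"] * a["ThreadsPerCore"]
      printf "expected=%d CPUTot=%d\n", expected, a["CPUTot"]
    }'
}

# Fed the node record from this mail:
check_cpus <<'EOF'
NodeName=pez015 Arch=x86_64 CoresPerSocket=6
CPUAlloc=12 CPUErr=0 CPUTot=12 Features=(null)
OS=Linux RealMemory=48128 Sockets=2
State=ALLOCATED ThreadsPerCore=1 TmpDisk=61440 Weight=1
EOF
```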
This was probably added in a later version; I know it is present in 2.5.6.
Felip Moll lip...@gmail.com wrote:
> This is my scontrol show node:
> NodeName=pez015 Arch=x86_64 CoresPerSocket=6
> CPUAlloc=12 CPUErr=0 CPUTot=12 Features=(null)
> Gres=(null)
> NodeAddr=pez015 NodeHostName=pez015
The thread has somewhat branched off into the RAM requirements. Any useful
comments on points #2-#4? I could probably summarize this by iterating over
all compute nodes with scontrol show node, but that may not be very
efficient...
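For what it's worth, a cluster-wide summary doesn't need a loop per node: something like the following works for me (a sketch; it assumes the one-record-per-line output of scontrol -o show nodes, and is demonstrated here with two made-up node records):

```shell
# Sum CPUAlloc and CPUTot over all node records.
# Real usage: scontrol -o show nodes | summarize_cpus
summarize_cpus() {
  grep -oE 'CPU(Alloc|Tot)=[0-9]+' |
  awk -F= '{ sum[$1] += $2 }
           END { printf "CPUAlloc=%d CPUTot=%d\n", sum["CPUAlloc"], sum["CPUTot"] }'
}

# Demonstration with two hypothetical nodes:
summarize_cpus <<'EOF'
NodeName=pez015 CPUAlloc=12 CPUTot=12 State=ALLOCATED
NodeName=pez016 CPUAlloc=6 CPUTot=12 State=MIXED
EOF
```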
On 25.04.2013, at 17:28, Mario Kadastik mario.kadas...@cern.ch wrote: