I am trying to configure the latest Slurm (14.03) and am running into a
problem: I cannot prevent Slurm from running jobs on the control node.

sinfo shows the 3 nodes configured in slurm.conf:
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
active       up    2:00:00      1  down* hpc-0-5
active       up    2:00:00      1    mix hpc-0-4
active       up    2:00:00      1   idle hpc-0-6


But when I use salloc, I end up on the head node:


$ salloc -N 1 -p active sh
salloc: Granted job allocation 16
sh-4.1$ hostname
hpcdev-005.sdsc.edu


That node is not part of the "active" partition, but Slurm still uses it.
How? By the way, the allocation is for NodeList=hpc-0-4, and the user can
log in to that node without a problem, but Slurm does not run the sh on
that node for the user.

Also, how can a user find out which nodes are allocated without having to
run scontrol? Is there an option in salloc to return the host names?
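As a sketch of the kind of thing I am after (assuming the SLURM_NODELIST
environment variable that the salloc man page says is set inside the
allocation; the value below is made up for illustration, since this runs
outside a real allocation):

```shell
#!/bin/sh
# Inside a `salloc` shell, Slurm exports the allocated nodes in
# SLURM_NODELIST (compressed range syntax). Sample value for illustration:
SLURM_NODELIST="hpc-0-[4-6]"
echo "$SLURM_NODELIST"
```

Something like this would let the user see the hosts directly from the
shell salloc starts, instead of querying scontrol.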

Thanks
Eva
