Hello again,
Angel de Vicente via slurm-users writes:
> [...] I don't understand is why the first three submissions
> below do get stopped by sbatch while the last one happily goes through?
>
>>> ,
>>> | $ sbatch -N 1 -n 1 -c 76 -p short --mem-per-cpu=4000
Hello,
Brian Andrus via slurm-users writes:
> Unless you are using cgroups and constraints, there is no limit
> imposed.
[...]
> So your request did not exceed what Slurm sees as available (1 CPU
> using 4 GB), so it is happy to let your script run. I suspect if you
> look at the usage, you will [...]
Hello,
we found an issue with Slurm 24.05.1 and the MaxMemPerNode
setting. Slurm is installed on a single workstation, so there is
only one node.
The relevant sections in slurm.conf read:
,
| EnforcePartLimits=ALL
| PartitionName=short Nodes=. State=UP Default=YES Max
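A quick way to double-check which limits the controller actually
holds (standard scontrol queries, assuming nothing beyond a running
slurmctld):

,
| $ scontrol show config | grep -i EnforcePartLimits
| $ scontrol show partition short
`

Comparing that output against slurm.conf helps rule out a stale or
unreloaded configuration before digging further.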