[slurm-users] Re: Bug? sbatch not respecting MaxMemPerNode setting

2024-09-05 Thread Angel de Vicente via slurm-users
Hello again, Angel de Vicente via slurm-users writes:
> [...] I don't understand is why the first three submissions
> below do get stopped by sbatch while the last one happily goes through?
>
> | $ sbatch -N 1 -n 1 -c 76 -p short --mem-per-cpu=4000
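As a rough sketch of the arithmetic in question: for a `--mem-per-cpu` request, the memory charged against the node is cpus-per-task multiplied by mem-per-cpu. The `MaxMemPerNode` value below is assumed for illustration, not taken from this thread.

```shell
# How the quoted sbatch line translates into a per-node memory request.
# max_mem_per_node is a hypothetical limit, not the poster's actual setting.
cpus_per_task=76
mem_per_cpu=4000          # MB, from the quoted sbatch line
max_mem_per_node=128000   # MB, assumed MaxMemPerNode for illustration

requested=$((cpus_per_task * mem_per_cpu))
echo "requested=${requested} MB"
if [ "$requested" -gt "$max_mem_per_node" ]; then
  echo "exceeds MaxMemPerNode: with EnforcePartLimits=ALL, sbatch should reject this"
fi
```

With these numbers the request is 304000 MB, so a partition limit anywhere below that should cause sbatch to refuse the job at submission time.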

[slurm-users] Re: Bug? sbatch not respecting MaxMemPerNode setting

2024-09-05 Thread Angel de Vicente via slurm-users
Hello, Brian Andrus via slurm-users writes:
> Unless you are using cgroups and constraints, there is no limit
> imposed. [...]
> So your request did not exceed what slurm sees as available (1 cpu
> using 4GB), so it is happy to let your script run. I suspect if you
> look at the usage, you wil
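For reference, the cgroup-based enforcement Brian alludes to is usually enabled with configuration along these lines (a minimal sketch of the relevant Slurm parameters, not the poster's actual files):

```
# slurm.conf
TaskPlugin=task/cgroup

# cgroup.conf
ConstrainRAMSpace=yes
ConstrainCores=yes
```

Without `ConstrainRAMSpace=yes`, a job that requests a small amount of memory can still consume more at runtime, which matches the behaviour described in the quoted reply.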

[slurm-users] Bug? sbatch not respecting MaxMemPerNode setting

2024-09-04 Thread Angel de Vicente via slurm-users
Hello, we found an issue with Slurm 24.05.1 and the MaxMemPerNode setting. Slurm is installed on a single workstation, and thus the number of nodes is just 1. The relevant sections in slurm.conf read:

| EnforcePartLimits=ALL
| PartitionName=short Nodes=. State=UP Default=YES Max
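The quoted config is truncated in the archive preview; for context, a single-node partition with a per-node memory cap would look something like this (node name and limit invented for illustration, not the poster's actual values):

```
# Hypothetical slurm.conf fragment illustrating MaxMemPerNode on one partition
EnforcePartLimits=ALL
PartitionName=short Nodes=ws01 State=UP Default=YES MaxMemPerNode=128000
```

With `EnforcePartLimits=ALL`, submissions whose computed per-node memory request exceeds `MaxMemPerNode` are expected to be rejected by sbatch at submission time rather than at scheduling time.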