You need to enable job accounting, which samples each job's memory use periodically; --mem is enforced from those samples, so a job is only killed if a sample happens to catch it over its allocation. A short-lived or fast allocation can slip through. The task/cgroup plugin will also enforce memory limits directly in an upcoming release.
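A minimal sketch of the settings involved (parameter values are illustrative; check your slurm.conf and cgroup.conf against your Slurm version's documentation):

```
# slurm.conf -- sample memory via the accounting plugin
JobAcctGatherType=jobacct_gather/linux
JobAcctGatherFrequency=30        # sample interval in seconds; a job can
                                 # exceed --mem between samples undetected

# slurm.conf -- cgroup-based enforcement (when available)
TaskPlugin=task/cgroup

# cgroup.conf -- have the kernel enforce the memory limit directly
ConstrainRAMSpace=yes
```

Lowering JobAcctGatherFrequency narrows the window in which an over-limit job can go unnoticed, at the cost of more polling overhead; cgroup enforcement removes the window entirely.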
Quoting Mike Schachter <[email protected]>:

> Hi there,
>
> I'm running the following sbatch job:
>
> #!/bin/sh
> #SBATCH -p all -c 1 --mem=400
>
> python -c "import time; import numpy as np; a = np.ones([11000,
> 11000]); time.sleep(50);"
>
> This job should fail - it allocates roughly 935MB of memory. However,
> it only fails when --mem=300. Am I misinterpreting how --mem computes
> maximum job memory?
>
> Thanks!
>
> mike
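For reference, the footprint of the quoted array can be computed directly (np.ones defaults to float64, 8 bytes per element), which confirms it is far above the 400 MB request either way:

```python
# Footprint of np.ones([11000, 11000]) with the default float64 dtype.
rows = cols = 11000
bytes_per_element = 8          # float64
nbytes = rows * cols * bytes_per_element

print(nbytes)                  # 968000000 bytes
print(round(nbytes / 1e6))     # 968 (decimal MB)
print(round(nbytes / 2**20))   # 923 (binary MiB)
```

Whether Slurm reads --mem in MB or MiB, the job is more than twice its 400 MB allocation, so the fact that it survives is down to sampling, not arithmetic.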
