Hi Carlos,

On 12.12.2013 12:36, Carlos Fenoy wrote:
You can try to define the parameter DefMemPerCPU=400 in your slurm.conf file. This will set a default memory limit for each job of 400 MB per requested core. This should be enough to fulfil your requirement.
Yes and no... we already have such a setting in slurm.conf. But it does not prevent Slurm from placing a job requesting 1 core + 30 GB on a node with 16 cores + 30.3 GB. In that case the remaining 15 cores are unusable for other jobs, because almost no memory is left. That's why I do not want this job on the node.
I run into a similar situation if 16 GB + 8 cores are already occupied and a job with 1 core + 14 GB is scheduled on the node. So it is really a question of the ratio of the remaining usable resources (e.g. 400 MB per remaining core).
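To make the desired check concrete, here is a minimal sketch (not an existing Slurm feature; the function name, threshold, and node figures are my assumptions, with the 400 MB/core threshold mirroring DefMemPerCPU=400): a node should only accept a job if, after placement, the leftover memory per leftover free core stays above the threshold.

```python
# Hypothetical admission check: reject a job if placing it would strand
# free cores with too little memory per core to be usable.

MIN_MB_PER_CORE = 400  # assumed threshold, mirroring DefMemPerCPU=400

def leaves_usable_node(free_cores, free_mem_mb, job_cores, job_mem_mb,
                       min_mb_per_core=MIN_MB_PER_CORE):
    """Return True if the job fits AND the node stays usable afterwards."""
    if job_cores > free_cores or job_mem_mb > free_mem_mb:
        return False  # job does not fit at all
    rem_cores = free_cores - job_cores
    rem_mem_mb = free_mem_mb - job_mem_mb
    if rem_cores == 0:
        return True  # node fully allocated; no cores stranded
    return rem_mem_mb / rem_cores >= min_mb_per_core

# Scenario from above: 16 cores + 30.3 GB free, job asks for 1 core + 30 GB.
# 15 cores would be left with only ~300 MB total -> reject.
print(leaves_usable_node(16, 30300, 1, 30000))  # False
```

A filter like this would have to run at scheduling time per candidate node; the sketch only illustrates the ratio test itself.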
Regards,
Ulf
--
___________________________________________________________________
Dr. Ulf Markwardt
Technische Universität Dresden
Center for Information Services and High Performance Computing (ZIH)
01062 Dresden, Germany
Phone: (+49) 351/463-33640
WWW: http://www.tu-dresden.de/zih
