We’re using cgroups to limit the memory of jobs, but in our slurm.conf the nodes 
are currently configured with their total physical memory capacity.

Configured this way, there could be times when physical memory is oversubscribed 
(the per-job allocations plus the kernel’s own memory requirements) and swapping 
will occur.

Is there a recommended “kernel overhead” memory (either a percentage or an 
absolute value) that we should deduct from the total physical memory?
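For context, this is roughly the sort of configuration we have in mind (node 
names and sizes are illustrative, not our actual values) — a 256 GB node with 
some memory set aside for system use via MemSpecLimit, and cgroup enforcement 
of the per-job limits:

```
# slurm.conf (sketch): report full physical memory, but reserve some of it
# for the kernel and system daemons with MemSpecLimit (both values in MB).
NodeName=node[001-064] CPUs=64 RealMemory=262144 MemSpecLimit=16384
SelectType=select/cons_tres
SelectTypeParameters=CR_Core_Memory
TaskPlugin=task/cgroup

# cgroup.conf (sketch): enforce the job memory limits via cgroups.
ConstrainRAMSpace=yes
ConstrainSwapSpace=yes
AllowedSwapSpace=0
```

The alternative we’ve considered is simply lowering RealMemory below the 
physical total, which raises the same question of how much headroom to leave.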

thanks,

   -greg

--
Dr. Greg Wickham
Advanced Computing Infrastructure Team Lead
Advanced Computing Core Laboratory
King Abdullah University of Science and Technology
Building #1, Office #0124
greg.wick...@kaust.edu.sa +966 544 700 330
--


