Re: [slurm-users] Slurm configuration, Weight Parameter

2019-11-23 Thread Chris Samuel
On 23/11/19 9:14 am, Chris Samuel wrote: My gut instinct (and I've never tried this) is to make the 3GB nodes a separate partition that is guarded by AllowQos=3GB, and have a QOS called "3GB" that uses MinTRESPerJob to require jobs to ask for more than 2GB of RAM to be allowed into the QOS.
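
A minimal sketch of that idea, assuming hypothetical node names (node[01-04]) and a partition called "bigmem" (names and memory figures would need adjusting to the actual site):

    # slurm.conf: put the 3GB nodes into their own partition, gated by the QOS
    NodeName=node[01-04] RealMemory=3072
    PartitionName=bigmem Nodes=node[01-04] AllowQos=3gb State=UP

    # sacctmgr: create the QOS and require jobs to request more than 2GB
    sacctmgr add qos 3gb
    sacctmgr modify qos 3gb set MinTRESPerJob=mem=2049

Jobs would then be submitted with something like "sbatch --partition=bigmem --qos=3gb --mem=2500M job.sh"; anything asking for 2GB or less should be blocked by the QOS limit (assuming AccountingStorageEnforce includes limits).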

Re: [slurm-users] Slurm configuration, Weight Parameter

2019-11-23 Thread Chris Samuel
On 21/11/19 7:25 am, Sistemas NLHPC wrote: Currently we have two types of nodes, one with 3GB and another with 2GB of RAM. On the 3GB nodes we need to prevent jobs that request less than 2GB from running, to avoid underutilization of resources. My gut instinct (and I've never tried this) is to guard those nodes with a separate partition and QOS, as described in the follow-up above.

Re: [slurm-users] Force a user job to a node with state=drain/maint

2019-11-23 Thread Chris Samuel
On 23/11/19 8:54 am, René Neumaier wrote: In general, is it possible to move a pending job (forcing it as root) to a specific node which is marked as DRAIN for troubleshooting? I don't believe so. Put a reservation on the node first, restricted to this user, then add the reservation to the job.
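
A rough sketch of that reservation approach, with hypothetical node, user and job IDs:

    # Create a user-specific reservation on the node being debugged
    scontrol create reservation reservationname=debug_res users=alice \
        nodes=node01 starttime=now duration=04:00:00 flags=ignore_jobs

    # Point the pending job at the reservation
    scontrol update jobid=12345 reservationname=debug_res

    # A drained node still won't start jobs, so presumably it has to be
    # resumed once the reservation is in place
    scontrol update nodename=node01 state=resume

Because the reservation is limited to that one user, no other work should land on the node once it is resumed.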

[slurm-users] Force a user job to a node with state=drain/maint

2019-11-23 Thread René Neumaier
Hello everyone! In general, is it possible to move a pending job (forcing it as root) to a specific node which is marked as DRAIN for troubleshooting? I know that's not what "DRAIN" normally means. Maybe a reservation is the way to go, but how can I force/forward a specific user's pending job onto that node?

Re: [slurm-users] Environment modules

2019-11-23 Thread William Brown
Agreed, I have just been setting up Lmod on a national compute cluster where I am a non-privileged user, and on an internal cluster where I have full rights. It works very well, and Lmod can read the Tcl module files too. The most recent version has some extra features specifically for Slurm.
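
As a quick illustration of why that compatibility matters, an existing Slurm batch script written against the Tcl environment modules keeps working unchanged under Lmod (the module name and program below are hypothetical):

    #!/bin/bash
    #SBATCH --job-name=lmod_test
    #SBATCH --ntasks=1

    # The "module" command is provided by Lmod just as it was by the Tcl
    # implementation, and Lmod will also parse existing Tcl modulefiles
    module load gcc/9.2.0
    module list
    srun ./my_program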