Wouldn't fairshare with a 90/10 split achieve this?


This will require that accounting be set up in your cluster, with the following parameters:


In slurm.conf, set:

AccountingStorageEnforce=associations # And possibly '...,limits,qos,safe' as required - so perhaps just use '=all'

PriorityType=priority/multifactor # Required by other parameters

PriorityDecayHalfLife=14-0 # Usage decays with a two-week half-life (note this is a decay, not a hard reset; see PriorityUsageResetPeriod for periodic resets)

PriorityWeightFairshare=1 # With all other weights defaulting to 0, ensures only fairshare influences priority.

TRESBillingWeights="Node=1" # According to docs, "Node" should be a TRES. I've never tested this.


And from the command line, add the fairshare split via:


sacctmgr create account name=A fairshare=10

sacctmgr create account name=B fairshare=90
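You can then check that the 90/10 split took effect, and later watch the effective fairshare factors as usage accrues (this is just a sketch; it assumes accounting is already running):

```shell
# Verify the shares that were just set
sacctmgr show account withassoc format=Account,Share

# Watch normalized shares and fairshare factors over time
sshare -a --format=Account,User,RawShares,NormShares,FairShare
```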


Then simply associate users with each account, and use something like 'sbatch --account=A ...' to charge jobs to the right account.
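The association step might look like the following (usernames 'alice' and 'bob' are placeholders):

```shell
# Associate each user with their project's account (hypothetical usernames)
sacctmgr add user alice account=A
sacctmgr add user bob account=B

# Charge a job to account A explicitly at submission time
sbatch --account=A job.sh
```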


This won't do exactly what you want: it might allow 'A' to use more than 10% if the cluster is underutilized.


I'm not aware of a scheme where 'A' might be preempted only if it has been awarded more than its fair share due to underutilization.

If the 10% hard limit is a concern, it might be worth investigating reservations: allocate to 'A' only from a 10% reservation, while somehow allowing 'B' to utilize that reservation too if required.
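As a rough sketch of the reservation idea (the reservation name and node count are assumptions, sized here for a 100-node cluster; check scontrol's reservation flags for ways to let 'B' borrow idle reserved nodes):

```shell
# Reserve ~10% of the nodes for account A, indefinitely (hypothetical sizes)
scontrol create reservation ReservationName=proj_a \
    StartTime=now Duration=infinite NodeCnt=10 Accounts=A

# Jobs from A must then request the reservation explicitly
sbatch --account=A --reservation=proj_a job.sh
```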



On 30/08/2019 14:14:16, Stefan Staeglich wrote:
Hi,

we have some compute nodes paid by different project owners. 10% are owned by 
project A and 90% are owned by project B.

We want to implement the following policy such that every certain time period 
(e.g. two weeks):
- Project A doesn't use more than 10% of the cluster in this time period
- But project B is allowed to use more than 90%

What's the best way to enforce this?

Best,
Stefan
-- 
HTH,

--Dani_L.
