Hi Mike,
We just ended up using two reservations: one for Monday, one for Tuesday.
Glenn G Amspaugh
On Sep 4, 2019, at 11:38 AM, Hanby, Mike <mha...@uab.edu> wrote:
Howdy,
Running Slurm 18.08.8
We have a request to create a 2 node reservation for a class that will meet
every Tues and Thurs this semester from 8AM to 9:15AM.
Thanks Brian! I'll take a look at weights.
I want others to be able to use them and take advantage of the large
memory when free. We have a preemptable partition below that works great.
PartitionName=scavenge
AllowGroups=ALL AllowAccounts=ALL AllowQos=scavenge,abc
AllocNodes=ALL Default=NO Q
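For reference, the preemption side is roughly this (the partition names,
PriorityTier values, and PreemptMode here are illustrative, not our exact
config):

PreemptType=preempt/partition_prio
# lower PriorityTier = preemptable by jobs in higher-tier partitions
PartitionName=scavenge Nodes=ALL PriorityTier=1 PreemptMode=REQUEUE
PartitionName=members Nodes=ALL PriorityTier=10 PreemptMode=OFF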
Howdy,
Running Slurm 18.08.8
We have a request to create a 2 node reservation for a class that will meet
every Tues and Thurs this semester from 8AM to 9:15AM.
Is there a way to create a reservation matching that, or is the closest we
can get a weekday reservation covering that timeframe, e.g. something like
the command below?
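(Sketch only; the start date and user are invented:)

scontrol create reservation ReservationName=class \
    StartTime=2019-09-03T08:00:00 Duration=01:15:00 \
    Flags=WEEKDAY NodeCnt=2 Users=instructor1

Or would we need two separate Flags=WEEKLY reservations, one anchored on a
Tuesday and one on a Thursday?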
(Added a subject)
Tina,
If you want group xxx to be the only ones to access them, you need to
either put them in their own partition or add info to the node
definitions to only allow certain users/groups.
If you want them to be used last, so they are available until all the
other nodes are busy, you can raise the Weight on those node definitions,
roughly as sketched below.
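(Node names and weights invented; Slurm allocates the lowest-Weight nodes
first, so the big-memory pair only gets used once the rest are busy:)

# standard nodes: allocated first
NodeName=node[03-16] RealMemory=192000 Weight=10
# 1TB nodes: allocated last
NodeName=node[01-02] RealMemory=1024000 Weight=100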
Hi,
I'm adding a bunch of memory on two of our nodes that are part of a blade
chassis. So two computes will be upgraded to 1TB RAM and the rest have
192GB. All of the nodes belong to several partitions and can be used by our
paid members given the partition below. I'm looking for ways to figure out
Hi Tina,
I think you could just have a qos called "override" that has no limits, or
maybe just high limits. Then, just modify the job's qos to be "override" with
scontrol. Based on your setup, you may also have to update the job's account to
an "override" type account with no limits.
We do this
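The mechanics are roughly (the QOS name and job id are just examples):

# create a QOS with no limits, then move the job onto it
sacctmgr add qos override
scontrol update JobId=12345 QOS=override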
Just to add to the conversation: we also wrote our own GRES plugin for this.
Similarly, the GRES enables the user to select the number of GB units that they
require. The plugin part invokes LVM to create a logical volume on an SSD
device for the requested size. The volume is then made available to the job.
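From the user's side it looks something like this ("localscratch" is our
GRES name; yours would differ):

# request a 50 GB scratch volume alongside the job's other resources
sbatch --gres=localscratch:50 job.sh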
I am trying to figure out if the following is possible:
I submit a job asking for 60GB of memory, the job starts running, and I
realize that I only need 20GB. Can I rescale the job to reflect this new
(lesser) memory requirement?
One of the Slurm admins pointed me to
SchedulerParameters=permit_job_expansion
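For what it's worth, I can lower the request while the job is still pending
(job id invented; as far as I can tell a running job's memory allocation
can't be shrunk in place):

# reduce a *pending* job's per-node memory request to 20GB (value in MB)
scontrol update JobId=12345 MinMemoryNode=20000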