Make tmpfs a TRES, and have NHC update that as in:

  scontrol update nodename=... gres=tmpfree:$(stat -f /tmp -c "%f*%S" | bc)

Replace /tmp with your tmpfs mount. You'll have to define that TRES in
slurm.conf and gres.conf as usual (start with count=1 and
Hi Michael,
Thanks for the suggestion! We have user requests for certain types of
jobs (quantum chemistry) that require fairly large local scratch space.
Our jobs normally do not have this requirement. So unfortunately the
per-node NHC check doesn't seem to do the trick. (We already have an
On 9/4/19 9:40 AM, Sam Gallop (NBI) wrote:
I did play around with XFS quotas on our large systems (SGI UV300, HPE MC990-X
and Superdome Flex) but I couldn't get it working how I wanted (or how I
thought it should work). I'll re-visit it knowing that other people have got
XFS quotas working.
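For reference, XFS project quotas are the usual mechanism for capping a
per-job scratch directory; a hedged sketch (mount point, job directory, and
project id are all illustrative, and the real commands need root plus a
filesystem mounted with `prjquota`, so they are only printed here):

```shell
# Print the two xfs_quota steps: tag a per-job directory with a project id,
# then set a hard block limit on that project. Illustrative values only.
cat <<'EOF'
xfs_quota -x -c 'project -s -p /scratch/job_12345 1001' /scratch
xfs_quota -x -c 'limit -p bhard=100g 1001' /scratch
EOF
```

A prolog/epilog pair could create the directory, apply the limit from the
job's requested Gres, and tear it down at job end.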
On Monday, 02 September 2019, at 20:02:57 (+0200),
Ole Holm Nielsen wrote:
> We have some users requesting that a certain minimum size of the
> *Available* (i.e., free) TmpFS disk space should be present on nodes
> before a job should be considered by the scheduler for a set of
> nodes.
>
> I believe that the "sbatch --tmp=size" option merely refers to the
> TmpFS file system *Size*
iS Support & Development
-----Original Message-----
From: slurm-users On Behalf Of Chris Samuel
Sent: 04 September 2019 07:50
To: slurm-users@lists.schedmd.com
Subject: Re: [slurm-users] How can jobs request a minimum available (free)
TmpFS disk space?
On Monday, 2 September 2019 11:02:57 AM PDT Ole Holm Nielsen wrote:
> We have some users requesting that a certain minimum size of the
> *Available* (i.e., free) TmpFS disk space should be present on nodes
> before a job should be considered by the scheduler for a set of nodes.
At Swinburne I did
* Ole Holm Nielsen [190903 11:14]:
> How do you dynamically update your gres=localtmp resource according to the
> current disk free space? I mean, there is already a TmpFS disk space size
> defined in slurm.conf, so how does your gres=localtmp differ from TmpFS?
Dear Ole,
I think (but please c
Juergen Salk writes:
> We are also going to implement disk quotas for the amount of local
> scratch space that has been allocated for the job by means of generic
> resources (e.g. `--gres=scratch:100` for 100GB). This is especially
> important when several users share a node.
Indeed.
> This lea
Ole Holm Nielsen writes:
> I figured that other sites need the free disk space feature as well
> :-)
:)
> How do you dynamically update your gres=localtmp resource according to
> the current disk free space? I mean, there is already a TmpFS disk
> space size defined in slurm.conf, so how does your gres=localtmp differ
> from TmpFS?
Dear Bjørn-Helge,
this is unfortunately no answer to the question but I'd be glad to
hear some more thoughts on that, too.
We are also going to implement disk quotas for the amount of local
scratch space that has been allocated for the job by means of generic
resources (e.g. `--gres=scratch:100` for 100GB). This is especially
important when several users share a node.
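For completeness, the generic-resource plumbing described here would look
roughly like this (the Gres name `scratch` is from the message; node names
and counts are made-up illustrations):

```
# slurm.conf (illustrative)
GresTypes=scratch
NodeName=node[01-16] Gres=scratch:800   # ~800 GB local scratch per node

# gres.conf (on each node)
Name=scratch Count=800

# Job submission: reserve 100 GB of local scratch
#   sbatch --gres=scratch:100 job.sh
```

Slurm then treats the scratch units as a countable resource per node, so
two jobs each asking for scratch:500 cannot land on the same 800-unit node.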
Hi Bjørn-Helge,
I figured that other sites need the free disk space feature as well :-)
How do you dynamically update your gres=localtmp resource according to
the current disk free space? I mean, there is already a TmpFS disk
space size defined in slurm.conf, so how does your gres=localtmp differ
from TmpFS?
We are facing more or less the same problem. We have historically
defined a Gres "localtmp" with the number of GB initially available
on local disk, and then jobs ask for --gres=localtmp:50 or similar.
That prevents slurm from allocating jobs on the cluster if they ask for
more disk than is currently available.
We have some users requesting that a certain minimum size of the
*Available* (i.e., free) TmpFS disk space should be present on nodes
before a job should be considered by the scheduler for a set of nodes.
I believe that the "sbatch --tmp=size" option merely refers to the TmpFS
file system *Size* defined in slurm.conf, not to the currently
*Available* space.
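The Size-vs-Available distinction follows from how these values are
declared: TmpFS and TmpDisk in slurm.conf are static, so `--tmp` is checked
against the declared capacity, not against what is free right now (values
below are illustrative):

```
# slurm.conf (illustrative)
TmpFS=/scratch                        # filesystem that TmpDisk describes
NodeName=node[01-16] TmpDisk=900000   # in MB; static, never refreshed

# "sbatch --tmp=500G ..." is satisfied whenever 500 GB <= TmpDisk,
# even if the filesystem is nearly full.
```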