Hi,
what we do is, roughly, a combination of your options #2 and #3. To start with,
however, I'd like to point out that we're using Lmod instead of the old Tcl
environment-modules. I'd really recommend you do the same.
So basically, we have our modules available on NFS, both the module
Hi again :)
Or https://github.com/SchedMD/slurm/blob/master/NEWS in case what you were
interested in was 17.02.
--
Janne Blomqvist
From: Blomqvist Janne
Sent: Friday, October 28, 2016 17:46
To: slurm-dev
Subject: RE: [slurm-dev] Re: Slurm versions 16.05.6
Hi,
try https://raw.githubusercontent.com/SchedMD/slurm/slurm-16.05/NEWS
--
Janne Blomqvist
From: Alexandre Strube [su...@surak.eti.br]
Sent: Friday, October 28, 2016 16:45
To: slurm-dev
Subject: [slurm-dev] Re: Slurm versions 16.05.6 and 17.02.0-pre3 are now
Hi,
AFAIU the major optimization wrt. array job scheduling is that if the scheduler
finds that it cannot schedule a job in a job array, it skips over all the rest
of the jobs in the array. There are also some memory benefits, e.g. a pending job
array is stored as a single object in the job
Hi,
much as I'd be sad to see my baby go, I agree with you that fair-tree is a
better choice. So, +1.
Furthermore, could the fair-tree algorithm be made the default when
priority/multifactor is used? Is there any case where the current default is
objectively better than fair-tree?
--
Janne
Hi,
as was already explained, the users/group DB must be available on all nodes.
As for your other question, why munge: munge provides a mechanism to solve the
problem of a slurm daemon receiving a message claiming to be from UID=12345;
how can it verify that this is true and not a
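The underlying idea can be sketched in a few lines. This is not munge's actual credential format or protocol, just a toy illustration of the principle it relies on: every node holds the same secret key (analogous to munge.key), so a daemon can verify that a claimed UID was vouched for by a trusted local service rather than asserted by an arbitrary client.

```python
import hashlib
import hmac

# Assumption for illustration: a cluster-wide shared secret, the same on
# every node (analogous to /etc/munge/munge.key).
SHARED_KEY = b"cluster-wide secret"

def create_credential(uid: int) -> bytes:
    # A trusted local service signs the claimed UID with the shared key.
    payload = str(uid).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest().encode()
    return payload + b":" + tag

def verify_credential(cred: bytes) -> int:
    # Any daemon holding the same key can check that the UID claim is genuine.
    payload, tag = cred.rsplit(b":", 1)
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("forged or corrupted credential")
    return int(payload)

cred = create_credential(12345)
print(verify_credential(cred))  # 12345
```

A message with a tampered UID fails verification, since the forger does not hold the key needed to recompute a matching tag.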
Hi,
Yes, that seems to be the problem. Can someone point me in the right direction
to change the compile scripts? I'm willing to put some effort into this, but I
don't really know where to start.
- Jan
On Tue, Sep 30, 2014 at 10:59 PM, Blomqvist Janne
janne.blomqv
Hi,
xcgroup.c (src/slurmd/common/xcgroup.c) handles the Linux-specific cgroups
(https://en.wikipedia.org/wiki/Cgroups) stuff. Judging from your error message
it appears that the Solaris mount() function is different from the Linux one.
But it doesn't really matter anyway, since cgroups don't
Hi,
if I understand it correctly, this is actually very close to Dominant Resource
Fairness (DRF) which I mentioned previously, with the difference that in DRF
the charge rates are determined automatically from the available resources (in
a partition) rather than being specified explicitly by
Hi,
FWIW we're hitting this bug as well with 14.03.5. 14.03.4 was fine, so this
seems to be a recent regression. Luckily, per Bugzilla, the bug has already been
fixed.
--
Janne Blomqvist
From: Markus Blank-Burian [bur...@muenster.de]
Sent: Tuesday,
Hi,
see also http://bugzilla.schedmd.com/show_bug.cgi?id=443
--
Janne Blomqvist
From: David Bigagli [da...@schedmd.com]
Sent: Monday, October 28, 2013 20:54
To: slurm-dev
Subject: [slurm-dev] Re: Job Dependencies on Arrays
Hi, this feature is currently
Hi,
we had a slightly similar setup on an older cluster;
- The frontend and the home directories were the same system
- Home dirs were physically under /export/home
- Compute nodes mounted the home dirs over NFS using the automounter at
/home/$USER
This sounds a bit like the situation you
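For reference, an automounter setup like the one described above might look roughly like this (a sketch only; "fileserver" and the mount options are assumptions, not details from the original post):

```
# /etc/auto.master: hand /home over to the automounter
/home   /etc/auto.home

# /etc/auto.home: wildcard map mounting each user's home over NFS,
# so /home/$USER maps to fileserver:/export/home/$USER on demand
*   -fstype=nfs,rw   fileserver:/export/home/&
```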
Hi,
if you're on slurm 2.5 or newer, I recommend taking a look at the
priority_multifactor2 plugin. It differs from the original priority_multifactor
plugin by using a different algorithm for calculating fair-share priorities
which makes a difference particularly when using account