Hi Kevin,
We fixed the issue on GitHub. Thanks!
Best,
Chris
—
Christopher Coffey
High-Performance Computing
Northern Arizona University
928-523-1167
On 6/17/19, 8:56 AM, "slurm-users on behalf of Christopher Benjamin Coffey"
wrote:
Thanks Kevin, we'll put a fix in for that.
Also look for the presence of the Slurm MPI plugins: mpi_none.so,
mpi_openmpi.so, mpi_pmi2.so, mpi_pmix.so, mpi_pmix_v3.so. They are
typically installed to /usr/lib64/slurm/. Those plugins provide the
various MPI capabilities and are good "markers" for how your configure detected
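A quick way to make that check from the shell (the plugin directory is an assumption; RPM builds typically install to /usr/lib64/slurm/, so adjust PLUGDIR for your site):

```shell
# Count the MPI plugins the Slurm build installed. PLUGDIR is an
# assumption -- point it at wherever your build puts its plugins.
PLUGDIR=${PLUGDIR:-/usr/lib64/slurm}
found=$(ls "$PLUGDIR"/mpi_*.so 2>/dev/null | wc -l)
echo "MPI plugins found in $PLUGDIR: $found"
```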
Hi Palle,
You should probably get the latest stable SLURM version from
www.schedmd.com and use the build/install instructions found there. Note
that you should check for WARNING messages in the config.log produced by
SLURM's configure, as they're the best place to find missing
packages.
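For example, a sketch of that check (run from the Slurm build directory, where configure writes config.log):

```shell
# Pull the WARNING lines out of configure's log; each one usually
# names a package (munge, pam, hwloc, ...) that configure failed to find.
LOG=${LOG:-config.log}
grep 'WARNING' "$LOG" 2>/dev/null || echo "no WARNING lines found in $LOG"
```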
We don't do anything. In our environment it is the user's
responsibility to optimize their code appropriately. Since we have a
great variety of hardware, any modules we build (we have several thousand
of them) are all built generically. If people want processor-specific
optimizations then
...ah, got it. I was confused by "PI/Lab nodes" in your partition list.
Our QoS/account pair for each investigator condo is our approximate
equivalent of what you're doing with owned partitions.
Since we have everything in one partition we segregate processor types via
topology.conf. We break up
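A hypothetical sketch of that kind of topology.conf (the switch and node names are invented for illustration, not the site's real layout):

```
SwitchName=haswell Nodes=hw[001-064]
SwitchName=skylake Nodes=sky[001-064]
SwitchName=root Switches=haswell,skylake
```

Grouping each processor type under its own leaf switch keeps jobs from spanning mixed hardware.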
I don't know offhand. You can sort of construct a similar system in
Slurm, but I've never seen it as a native option.
-Paul Edmon-
On 6/20/19 10:32 AM, John Hearns wrote:
Paul, you refer to banking resources. Which leads me to ask are
schemes such as Gold used these days in Slurm?
Gold was
Paul, you refer to banking resources. Which leads me to ask are schemes
such as Gold used these days in Slurm?
Gold was a utility where groups could top up with a virtual amount of money
which would be spent as they consumed resources.
Altair also wrote a similar system for PBS, which they offered
People will specify which partition they need or, if they want multiple
partitions, use this:
#SBATCH -p general,shared,serial_requeue
The scheduler will then select whichever partition can run the job
first. Naturally there is a risk that you will end up running in a more
expensive partition.
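Put together, a minimal job script using that multi-partition form might look like this (the partition names are the ones from this thread; the resources and the application command are placeholders):

```
#!/bin/bash
#SBATCH -p general,shared,serial_requeue   # scheduler starts the job wherever it can first
#SBATCH -n 1
#SBATCH -t 00:10:00
srun ./my_app   # placeholder application
```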
Dear all,
I have been following this mailinglist for some time, and as a complete
newbie using Slurm I have learned some lessons from you.
I have an issue with building and configuring Slurm to use OpenMPI.
When running srun for some task I get the error stating that Slurm has
not been
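When srun complains that Slurm was not built with the needed MPI support, a common first check (guarded here so the snippet is safe to paste on any machine) is:

```shell
# List the MPI plugin types srun knows about; if the type you need
# (e.g. pmix) is missing, the build likely lacked the matching headers.
if command -v srun >/dev/null 2>&1; then
    srun --mpi=list
else
    echo "srun not on PATH"
fi
```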
On 20/6/19 3:24 am, Brian Andrus wrote:
Can you give the exact command/output you have from this?
I suspect a typo in your slurm.conf for nodenames or what you are typing.
Brian Andrus
Hi Brian,
I am pretty sure there is no error in my typing of the commands, but
just in case, find below
Janne, thank you. That FGCI benchmark in a container is pretty smart.
I always say that real application benchmarks beat synthetic benchmarks.
Taking a small mix of applications like that and taking the geometric mean
is great.
Note: *"a reference result run on a Dell PowerEdge C4130"*
In the old
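The geometric-mean combination mentioned above can be sketched like this (the scores are invented relative-performance numbers, not FGCI results):

```shell
# Combine per-application benchmark ratios into one figure of merit.
echo "1.20 0.95 1.10" |
  awk '{ p = 1; for (i = 1; i <= NF; i++) p *= $i; printf "geometric mean: %.4f\n", p^(1/NF) }'
```

Unlike an arithmetic mean, the geometric mean keeps one outlier application from dominating the overall score.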
On 19/06/2019 22.30, Fulcomer, Samuel wrote:
>
> (...and yes, the name is inspired by a certain OEM's software licensing
> schemes...)
>
> At Brown we run a ~400 node cluster containing nodes of multiple
> architectures (Sandy/Ivy, Haswell/Broadwell, and Sky/Cascade) purchased
> in some cases by