Several people asked me to bump EPEL's slurm up from 20.11.2, mostly due to
mpi-related issues with that release, so I've got 20.11.5 on deck with 4 days
left to stable. Please protect your private slurm installations so there are no
surprises when this release hits the EPEL repos in 4 days.
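One common way to do that protection is a repository-level exclude. A sketch of the relevant stanza (the [epel] section name is the usual default and may differ on your system):

```ini
# /etc/yum.repos.d/epel.repo (sketch): keep EPEL's slurm packages from
# ever replacing a locally built slurm.
[epel]
exclude=slurm*
```

dnf honors the same option; with dnf-plugins-core installed, `dnf config-manager --save --setopt=epel.exclude='slurm*'` achieves the equivalent.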
Phil
> On Feb 3, 2021, at 1:06 PM, Philip Kovacs wrote:
>
> I am familiar with the package rename process and it would not have the
> effect you might think it would.
> If I provide an upgrade path to a new package name, e.g. slurm-xxx, the net
> effect would be to tell
Would it make sense, in the long run, to follow the
Fedora packaging guidelines for renaming existing packages?
https://docs.fedoraproject.org/en-US/packaging-guidelines/#renaming-or-replacing-existing-packages
Best regards
Jürgen
On 03.02.21 01:58, Philip Kovacs wrote:
Lots of mixed reactions here, many in favor of (and grateful for) the addition
to EPEL, many much less enthusiastic.
I cannot rename an EPEL package that is now in the wild without providing an
upgrade path to the new name. Such an upgrade path would defeat the purpose of
the rename and won't help at all.
I can assure you it was easier for you to filter slurm from your repos than it
was for me to make them available to both epel7 and epel8.
No good deed goes unpunished, I guess.
On Saturday, January 23, 2021, 07:03:08 AM EST, Ole Holm Nielsen wrote:
We use the EPEL yum repository on our
Make sure the .so symlink for the pmix lib is available -- not just the
versioned .so, e.g. .so.2. Slurm requires that .so symlink. Some distros
split packages into base/devel, so you may need to install a pmix-devel
package, if available, in order to add the .so symlink (which is typically
where the symlink ships).
There's a typo in there. It's lazy, not -lazy. Try adding exactly this line
just before the %configure -- this should fix it:

# use -z lazy to allow dlopen with unresolved symbols
export LDFLAGS="%{build_ldflags} -Wl,-z,lazy"
%configure \
On Sunday, December 8, 2019,
I answered this question on Oct 28. Simply use lazy binding as required by
slurm. See a copy below of my Oct 28 response to your original thread. Just
adjust the %build section of the rpm spec to ensure that -Wl,-z,-lazy appears
at the end of LDFLAGS. Problem solved.
> You probably built
>On Monday, October 28, 2019, 03:18:06 PM EDT, Brian Andrus wrote:
>I spoke too soon.
>While I can successfully build/run slurmctld, slurmd is failing because ALL of
>the SelectType libraries are missing symbols.
>Example from select_cons_tres.so:
># slurmd
>slurmd: error:
>For our next cluster we will switch from Moab/Torque to Slurm and have
>to adapt the documentation and example batch scripts for the users.
>Therefore, I wonder if and why we should recommend (or maybe even urge)
>our users to use srun instead of mpirun/mpiexec in their batch scripts
>for MPI
>according to https://slurm.schedmd.com/mpi_guide.html I have built
>Slurm 19.05 with PMIx support enabled and it seems to work for both
>OpenMPI and Intel MPI. (I've also set MpiDefault=pmix in slurm.conf.)
>But I still don't get the point. Why should I favour `srun ./my_mpi_program`
>over `mpirun ./my_mpi_program`?
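For reference, the srun form drops mpirun entirely. A minimal batch-script sketch (node/task counts and the program name are placeholders):

```shell
#!/bin/bash
# Sketch: launch MPI ranks with srun instead of mpirun. With
# MpiDefault=pmix set in slurm.conf, srun speaks PMIx to the MPI library
# directly, and slurm itself tracks and accounts for every task it
# launches. Counts and the program name below are placeholders.
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
srun ./my_mpi_program
```
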
>I have tried running ldconfig manually as suggested with slurm-19.05.1-2 and
>it fails the same way:
>error: Failed dependencies:
>libnvidia-ml.so.1()(64bit) is needed by slurm-19.05.1-2.el7.centos.x86_64
Lou, that's a packaging mistake on the part of the person who created that
package.
Looks like you need to install the hdf5 development headers and libraries.
On Tuesday, July 23, 2019, 08:52:06 PM EDT, Weiguang Chen
wrote:
Hi,
I'm installing slurm in my Archlinux server.
At the beginning, I used AUR helper yaourt to install it.
yaourt -S slurm-llnl
But an
Well, it looks like it does fail as often as it works.
srun --mpi=pmix -n1 -wporthos : -n1 -wathos ./hello
srun: job 681 queued and waiting for resources
srun: job 681 has been allocated resources
slurmstepd: error: athos [0] pmixp_coll_ring.c:613 [pmixp_coll_ring_check] mpi/pmix: ERROR:
Works here on slurm 18.08.8, pmix 3.1.2. The mpi world ranks are unified as
they should be.
$ srun --mpi=pmix -n2 -wathos ./hello : -n8 -wporthos ./hello
srun: job 586 queued and waiting for resources
srun: job 586 has been allocated resources
Hello world from processor athos, rank 1 out of 10
Also look for the presence of the slurm mpi plugins: mpi_none.so,
mpi_openmpi.so, mpi_pmi2.so, mpi_pmix.so, mpi_pmix_v3.so. They will be
installed typically to /usr/lib64/slurm/. Those plugins are used for the
various mpi capabilities and are good "markers" for how your configure detected
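A quick way to check those markers is to scan the plugin directory (the path below is the common default; on a working install, `srun --mpi=list` also reports the available MPI plugin types):

```shell
#!/bin/sh
# Sketch: report which slurm MPI plugins a build produced. The default
# plugin directory is an assumption; override PLUGIN_DIR as needed.
PLUGIN_DIR="${PLUGIN_DIR:-/usr/lib64/slurm}"
for p in mpi_none mpi_openmpi mpi_pmi2 mpi_pmix mpi_pmix_v3; do
    if [ -e "$PLUGIN_DIR/$p.so" ]; then
        echo "$p: present"
    else
        echo "$p: missing"
    fi
done
```

A missing mpi_pmix.so usually means configure did not find a usable pmix at build time, which circles back to the .so symlink issue above.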
As one of the downstream distro packagers, I follow both the tarball and rpm
revisions carefully. Please be aware that changes like the one proposed impact
us and ought not be made without some announcement, so we can know what is
going on and adjust our packaging code accordingly. Right now
libpmi libraries we export (they are
nothing more than symlinks to libpmix), and (b) specify --mpi=pmix on the srun
cmd line.
On Dec 21, 2017, at 11:44 AM, Philip Kovacs <pkde...@yahoo.com> wrote:
OK, so slurm's libpmi2 is a functional superset of the libpmi2 provided by pmix
2.0+. That's go
into the plugin.
On Wednesday, December 20, 2017 10:47 PM, "r...@open-mpi.org"
<r...@open-mpi.org> wrote:
On Dec 20, 2017, at 6:21 PM, Philip Kovacs <pkde...@yahoo.com> wrote:
> -- slurm.spec: move libpmi to a separate package to solve a conflict with the
> version provided by PMIx. This will require a separate change to PMIx as
> well.
I see the intention behind this change since the pmix 2.0+ package provides
libpmi/libpmi2 and there is a possible conflict.