_MAC, et al.,
Things are looking up. By specifying --with-verbs=no, I can run
helloworld. But in a new-for-me wrinkle, I can only run on *more* than
one node. Not sure I've ever seen that. Using 40-core nodes, this:
mpirun -np 41 ./helloWorld.mpi3.SLES12.OMPI400.exe
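For reference, a quick way to see where those 41 ranks actually land is
mpirun's standard binding report (nothing here is specific to this build):

  # print the host and cores each rank is bound to as it launches
  mpirun -np 41 --report-bindings ./helloWorld.mpi3.SLES12.OMPI400.exe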
Hi Matt,
There seem to be two different issues here:
a) The warning message comes from the openib btl. Given that Omni-Path has
a verbs API and you have the necessary libraries on your system, the openib
btl finds itself as a potential transport and prints the warning during its init (openib
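A common way to keep the openib btl quiet at run time is the standard MCA
exclusion syntax (the caret means "everything except"):

  # run without ever initializing the openib component
  mpirun --mca btl ^openib -np 41 ./helloWorld.mpi3.SLES12.OMPI400.exe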
Well,
By turning off UCX compilation per Howard, things get a bit better in that
something happens! It's not a good something, as it seems to die with an
InfiniBand error. As this is an Omni-Path system, is Open MPI perhaps seeing
libverbs somewhere and compiling it in? To wit:
(1006)(master) $
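One way to answer that question from the installed build itself, using stock
ompi_info queries:

  # list the byte transfer layer (btl) components that were compiled in
  ompi_info | grep btl
  # show the configure line the build was made with
  ompi_info --all | grep -i 'configure command'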
Hi Matt
Definitely do not include the ucx option for an Omni-Path cluster. Actually,
if you accidentally installed UCX in its default location on the system,
switch to this config option:
--with-ucx=no
Otherwise you will hit
https://github.com/openucx/ucx/issues/750
Howard
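Folded into the configure line quoted later in this thread, that would look
roughly like this (a sketch; paths and compilers are as Matt reported them):

  ./configure --with-psm2 --with-slurm --with-ucx=no \
      --with-pmix=/usr/nlocal/pmix/2.1 --with-libevent=/usr \
      CC=icc CXX=icpc FC=ifort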
Gilles
Matt,
There are two ways of using PMIx (see the sketch after this list):
- if you use mpirun, then the MPI app (i.e. the PMIx client) will talk
to mpirun and the orted daemons (i.e. the PMIx server)
- if you use SLURM srun, then the MPI app will directly talk to the
PMIx server provided by SLURM. (note you might have to srun
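A sketch of that second path, assuming this Slurm was indeed built with PMIx
support (the exact plugin name varies by Slurm version):

  # list the PMI plugins this Slurm installation offers
  srun --mpi=list
  # launch the MPI app directly under Slurm's PMIx server
  srun --mpi=pmix -n 41 ./helloWorld.mpi3.SLES12.OMPI400.exe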
Hi Matt,
A few comments/questions:
- If your cluster has Omni-Path, you won't need UCX. Instead you can
run using PSM2, or alternatively OFI (a.k.a. Libfabric); see the sketch
after this list.
- With the command you shared below (4 ranks on the local node), I
think a shared-memory transport is being selected
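Selecting those transports explicitly looks roughly like this (standard MCA
parameters; check what ompi_info lists for your build first):

  # Omni-Path via PSM2: the cm PML with the psm2 MTL
  mpirun --mca pml cm --mca mtl psm2 -np 4 ./helloWorld.mpi3.SLES12.OMPI400.exe
  # or via Libfabric/OFI
  mpirun --mca pml cm --mca mtl ofi -np 4 ./helloWorld.mpi3.SLES12.OMPI400.exe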
On Fri, Jan 18, 2019 at 1:13 PM Jeff Squyres (jsquyres) via users
<users@lists.open-mpi.org> wrote:
> On Jan 18, 2019, at 12:43 PM, Matt Thompson wrote:
> >
> > With some help, I managed to build an Open MPI 4.0.0 with:
>
> We can discuss each of these params to let you know what they are.
On Jan 18, 2019, at 12:43 PM, Matt Thompson wrote:
>
> With some help, I managed to build an Open MPI 4.0.0 with:
We can discuss each of these params to let you know what they are.
> ./configure --disable-wrapper-rpath --disable-wrapper-runpath
Did you have a reason for disabling these?
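For anyone curious what those two flags change, the link line the wrapper
compiler would emit (rpath entries included) can be inspected with the stock
wrapper query:

  # show the flags mpicc passes to the underlying linker
  mpicc --showme:link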
All,
With some help, I managed to build an Open MPI 4.0.0 with:
./configure --disable-wrapper-rpath --disable-wrapper-runpath --with-psm2
--with-slurm --enable-mpi1-compatibility --with-ucx
--with-pmix=/usr/nlocal/pmix/2.1 --with-libevent=/usr CC=icc CXX=icpc
FC=ifort
The MPI 1 is because I
Dear Open MPI Gurus,
A cluster I use recently updated their SLURM to have support for UCX and
PMIx. These are names I've seen and heard often at SC BoFs and posters, but
this is my first time playing with them.
So, my first question is how exactly should I build Open MPI to try these
features