Carlos,
Open MPI 3.0.2 has been released, and it contains several bug fixes, so I do
encourage you to upgrade and try again.
If it still does not work, can you please run
mpirun --mca oob_base_verbose 10 ...
and then compress and post the output?
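A minimal capture-and-compress pipeline for that run might look like the following sketch (the application name and process count are placeholders, not from the original mail):

```shell
# Capture the OOB verbose output (stdout and stderr) and compress it
# for posting to the list. "./my_app" and "-np 2" are placeholders.
mpirun --mca oob_base_verbose 10 -np 2 ./my_app 2>&1 | gzip > oob_debug.log.gz
```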
Out of curiosity, would
mpirun --mca ro
Nathan,
Same issue with OpenMPI 3.1.1 on POWER9 with GCC 7.2.0 and CUDA 9.2.
S.
--
Si Hammond
Scalable Computer Architectures
Sandia National Laboratories, NM, USA
[Sent from remote connection, excuse typos]
On 6/16/18, 10:10 PM, "Nathan Hjelm" wrote:
Try the latest nightly tarball for
Just realized my email wasn't sent to the archive.
On Sat, Jun 23, 2018 at 5:34 PM, carlos aguni wrote:
> Hi!
>
> Thank you all for your reply Jeff, Gilles and rhc.
>
> Thank you Jeff and rhc for clarifying some of Open MPI's internals
> to me.
>
> >> FWIW: we never send interface names to ot
Hi!
Thank you all for your reply Jeff, Gilles and rhc.
Thank you Jeff and rhc for clarifying some of Open MPI's internals to me.
>> FWIW: we never send interface names to other hosts - just dot addresses
> Should have clarified - when you specify an interface name for the MCA
param, then it i
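As an illustration of the point above, Open MPI's TCP BTL selection parameter accepts either an interface name or a CIDR address range; the name is only resolved on the local host, and peers see the resulting dot addresses:

```shell
# Two equivalent ways to restrict the TCP BTL to one network.
# Interface names are resolved locally; only addresses reach peers.
mpirun --mca btl_tcp_if_include eth0 ...         # by interface name
mpirun --mca btl_tcp_if_include 10.0.0.0/24 ...  # by CIDR address range
```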
Just wanted to follow up on my own post.
Turns out there was a missing symlink (much embarrassment) on my build host.
That’s why you don’t see “pmix_v1” in the “srun --mpi=list” output (previous
post).
Once I fixed that and rebuilt SLURM, I was able to launch existing OpenMPI 3.x
apps with,
There is a name for my pain and it is “OpenMPI + PMIx”. :)
I’m looking at upgrading SLURM from 16.05.11 to 17.11.05 (bear with me, this is
not a SLURM question).
After building SLURM 17.11.05 with
‘--with-pmix=/opt/pmix/1.1.5:/opt/pmix/2.1/1’ and installing a test instance, I
see
$ srun --mp
Sorry for the late response. But I just wanted to inform you that I found
another workaround, unrelated to the method we discussed here.
On 19/06/18 15:26, r...@open-mpi.org wrote:
The OMPI cmd line converts "--mca ptl_tcp_remote_connections 1” to
OMPI_MCA_ptl_tcp_remote_connections, which is not
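The CLI-to-environment-variable mapping described above can be sketched as a tiny helper (OMPI_MCA_ is the prefix Open MPI actually uses; the function name here is mine, for illustration only):

```python
def mca_to_env(param: str, value: str) -> tuple[str, str]:
    """Map an Open MPI '--mca <param> <value>' pair to the
    equivalent OMPI_MCA_* environment variable name and value."""
    return (f"OMPI_MCA_{param}", value)

name, val = mca_to_env("ptl_tcp_remote_connections", "1")
print(f"{name}={val}")  # OMPI_MCA_ptl_tcp_remote_connections=1
```

As the mail notes, this prefix is the source of the problem for PMIx-owned parameters, which expect a different prefix than the one mpirun generates.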
Hi,
I have successfully built Open MPI version 2.1.3 from scratch on Ubuntu 14.04
64-bit using GCC 4.9. The result was the following shared libraries (needed
for a program to use Open MPI):
dummy@machine:~/$ ldd /home/dummy/openmpi/build/lib/libmpi.so /home/dummy/openmpi/build/lib