Re: [OMPI users] Building PMIx and Slurm support

2019-03-03 Thread Daniel Letai
Hello, I have built the following stack: CentOS 7.5 (gcc 4.8.5-28, libevent 2.0.21-4), MLNX_OFED_LINUX-4.5-1.0.1.0-rhel7.5-x86_64.tgz built with --all --without-32bit (this includes UCX 1.5.0), hwloc from CentOS 7.5: 1.11.8-4.el7 …
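A configure line for a stack like this might look as follows; this is only a sketch, and the prefix and /opt/ucx-1.5.0 path are placeholders rather than anything from the original message:

    # hypothetical Open MPI 4.0.0 configure against this stack
    ./configure --prefix=/opt/openmpi-4.0.0 \
        --with-ucx=/opt/ucx-1.5.0 \
        --with-hwloc=/usr \
        --with-slurm
    make -j"$(nproc)" && make install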

Re: [OMPI users] Building PMIx and Slurm support

2019-03-03 Thread Andy Riebs
Daniel, I think you need to have "--with-pmix=" point to a specific directory: either "/usr" if you installed it in /usr/lib and /usr/include, or the specific install prefix, like "--with-pmix=/usr/local/pmix-3.0.2". Andy …
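The two forms Andy describes would look like this (the pmix-3.0.2 prefix is his example; adjust to wherever PMIx actually landed on your system):

    # PMIx installed into the system prefix (/usr/lib, /usr/include):
    ./configure --with-pmix=/usr
    # PMIx installed into its own prefix:
    ./configure --with-pmix=/usr/local/pmix-3.0.2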

Re: [OMPI users] Building PMIx and Slurm support

2019-03-03 Thread Bennet Fauber
Dani, We have had to specify the path to the external PMIx explicitly when compiling both Slurm and Open MPI, e.g., --with-pmix=/opt/pmix/3.1.2. That ensures that both are referring to the same version. -- bennet On Sun, Mar 3, 2019 at 8:56 AM Daniel Letai wrote: > Hello, I have bui…
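As a sketch, the same --with-pmix path is passed to both builds (the /opt/pmix/3.1.2 path is Bennet's example; Slurm's configure accepts the same flag name, but verify against your Slurm release):

    # Slurm build:
    ./configure --with-pmix=/opt/pmix/3.1.2
    # Open MPI build, pointing at the same PMIx install:
    ./configure --with-pmix=/opt/pmix/3.1.2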

Re: [OMPI users] Building PMIx and Slurm support

2019-03-03 Thread Gilles Gouaillardet
Daniel, PMIX_MODEX and PMIX_INFO_ARRAY have been removed from PMIx 3.1.2, and Open MPI 4.0.0 was not ready for this. You can either use the internal PMIx (3.0.2), or try 4.0.1rc1 (published a few days ago) with the external PMIx 3.1.2. FWIW, you are right to use --with-pmix=external (and …
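Concretely, the two options would be configured roughly as follows (a sketch; --with-pmix=internal selects the PMIx 3.0.2 bundled with Open MPI 4.0.0, and the external path is a placeholder):

    # Option 1: Open MPI 4.0.0 with its bundled PMIx 3.0.2:
    ./configure --with-pmix=internal
    # Option 2: Open MPI 4.0.1rc1 against an external PMIx 3.1.2:
    ./configure --with-pmix=/opt/pmix/3.1.2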

[OMPI users] OpenMPI behavior with Ialltoall and GPUs

2019-03-03 Thread Adam Sylvester
I'm running Open MPI 4.0.0 built with gdrcopy 1.3 and UCX 1.4 per the instructions at https://www.open-mpi.org/faq/?category=buildcuda, built against CUDA 10.0 on RHEL 7. I'm running on a p2.xlarge instance in AWS (single NVIDIA K80 GPU). Open MPI reports CUDA support: $ ompi_info --parsable --all …
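The usual way to confirm a CUDA-aware build (per the Open MPI FAQ) is:

    # "true" indicates the library was built with CUDA support:
    ompi_info --parsable --all | grep mpi_built_with_cuda_support:value
    # expected: mca:mpi:base:param:mpi_built_with_cuda_support:value:true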

Re: [OMPI users] Building PMIx and Slurm support

2019-03-03 Thread Daniel Letai
Sent from my iPhone. On 3 Mar 2019, at 16:31, Gilles Gouaillardet wrote: > Daniel, PMIX_MODEX and PMIX_INFO_ARRAY have been removed from PMIx 3.1.2, and Open MPI 4.0.0 was not ready for this. You can either use the internal PMIx (3.0.2), or try 4.0.1rc1 (with the external P…

Re: [OMPI users] Building PMIx and Slurm support

2019-03-03 Thread Gilles Gouaillardet
Daniel, keep in mind that PMIx was designed for cross-version compatibility, so a PMIx 3.0.2 client (read: an Open MPI 4.0.0 app with the internal PMIx 3.0.2) should be able to interact with a PMIx 3.1.2 server (read: the Slurm pmix plugin built on top of PMIx 3.1.2). So unless you have a spec…
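On the Slurm side, the client/server pairing can be checked with standard srun options (my_mpi_app is a placeholder binary):

    # list the MPI/PMI plugins this Slurm was built with:
    srun --mpi=list
    # launch through the PMIx plugin:
    srun --mpi=pmix -n 4 ./my_mpi_app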

Re: [OMPI users] Building PMIx and Slurm support

2019-03-03 Thread Daniel Letai
Gilles, On 04/03/2019 01:59:28, Gilles Gouaillardet wrote: > Daniel, keep in mind that PMIx was designed for cross-version compatibility, so a PMIx 3.0.2 client (read: an Open MPI 4.0.0 app with the internal…

Re: [OMPI users] Building PMIx and Slurm support

2019-03-03 Thread Gilles Gouaillardet
Daniel, On 3/4/2019 3:18 PM, Daniel Letai wrote: > So unless you have a specific reason not to mix both, you might also give the internal PMIx a try. Does this hold true for libevent too? Configure complains if the libevent used for Open MPI is different from the one used for the other tools. I am n…
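For libevent, Open MPI's configure exposes the same internal/external choice as for PMIx; a sketch, assuming PMIx installed under /opt/pmix/3.1.2 and the system libevent under /usr (both paths are placeholders):

    # PMIx built against the system libevent:
    ./configure --prefix=/opt/pmix/3.1.2 --with-libevent=/usr
    # Open MPI pointed at the same PMIx and the same (external) libevent:
    ./configure --with-pmix=/opt/pmix/3.1.2 --with-libevent=external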