Re: [OMPI users] Openmpi 2.0.1 build --with-psm2 failed on CentOS 7.2

2016-10-11 Thread Limin Gu
Thank you very much, MAC! Limin

On Tue, Oct 11, 2016 at 10:15 PM, Cabral, Matias A <matias.a.cab...@intel.com> wrote:
> Building psm2 should not be complicated (in case you cannot find a newer binary):
> https://github.com/01org/opa-psm2
> Note that newer RPMs are named hfi1-psm

Re: [OMPI users] Openmpi 2.0.1 build --with-psm2 failed on CentOS 7.2

2016-10-11 Thread Cabral, Matias A
Building psm2 should not be complicated (in case you cannot find a newer binary): https://github.com/01org/opa-psm2

Note that newer RPMs are named hfi1-psm*

_MAC

From: users [mailto:users-boun...@lists.open-mpi.org] On Behalf Of Limin Gu
Sent: Tuesday, October 11, 2016 6:44 PM
To: Open MPI Us
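For anyone who needs to build libpsm2 from that repository, a rough sketch of the steps (the make targets and install prefix are assumptions based on a typical Makefile layout; check the repo's README for the real ones):

    $ git clone https://github.com/01org/opa-psm2
    $ cd opa-psm2
    $ make                 # build the library
    $ sudo make install    # verify the install prefix in the README first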

Re: [OMPI users] Openmpi 2.0.1 build --with-psm2 failed on CentOS 7.2

2016-10-11 Thread Limin Gu
Thanks Gilles! Limin

On Tue, Oct 11, 2016 at 9:33 PM, Gilles Gouaillardet wrote:
> Limin,
>
> It seems the libpsm2 provided by CentOS 7 is a bit too old:
> all symbols are prefixed with psm_, but Open MPI expects them to be prefixed with psm2_.
> I am afraid your only option is to manually inst

Re: [OMPI users] Openmpi 2.0.1 build --with-psm2 failed on CentOS 7.2

2016-10-11 Thread Gilles Gouaillardet
Limin,

It seems the libpsm2 provided by CentOS 7 is a bit too old: all symbols are prefixed with psm_, but Open MPI expects them to be prefixed with psm2_. I am afraid your only option is to manually install the latest libpsm2 and then configure again with your psm2 install dir.

Cheers,

Gilles
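A quick way to confirm the symbol-prefix problem, and the shape of the reconfigure step once a newer libpsm2 is in place (/opt/psm2 is just a placeholder prefix):

    $ nm -D /usr/lib64/libpsm2.so.2 | grep -c ' psm2_'   # 0 means only the old psm_ symbols
    $ ./configure --with-psm2=/opt/psm2 ...              # point Open MPI at the new install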

Re: [OMPI users] Openmpi 2.0.1 build --with-psm2 failed on CentOS 7.2

2016-10-11 Thread Limin Gu
Hi MAC,

It seems /usr/lib64/libpsm2.so.2 has no symbols. Can configure check some other way?

[root@uranus ~]# rpm -qi libpsm2-0.7-4.el7.x86_64
Name        : libpsm2
Version     : 0.7
Release     : 4.el7
Architecture: x86_64
Install Date: Tue 11 Oct 2016 05:45:59 PM PDT
Group       : Syste
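One likely explanation for "no symbols": distribution libraries are stripped, so plain nm prints nothing, but the dynamic symbol table survives stripping:

    $ nm /usr/lib64/libpsm2.so.2                      # "no symbols" on a stripped library
    $ nm -D /usr/lib64/libpsm2.so.2 | grep psm        # dynamic symbols are still visible
    $ objdump -T /usr/lib64/libpsm2.so.2 | grep psm   # equivalent view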

Re: [OMPI users] Using Open MPI with multiple versions of GCC and G++

2016-10-11 Thread Gilles Gouaillardet
FWIW, mpicxx does two things:
1) it uses the C++ compiler (e.g. g++);
2) if Open MPI was configured with the (deprecated) C++ bindings (e.g. --enable-mpi-cxx), it links with the Open MPI C++ library that contains the bindings.
IIRC, Open MPI v1.10 does build the C++ bindings by default, but v2.0 does n
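A handy way to see exactly what a given mpicxx will do is the wrapper's show option:

    $ mpicxx --showme        # print the underlying compiler and flags without compiling
    $ mpicxx --showme:link   # just the link line (shows whether libmpi_cxx is linked in)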

Re: [OMPI users] Openmpi 2.0.1 build --with-psm2 failed on CentOS 7.2

2016-10-11 Thread Jeff Squyres (jsquyres)
Limin --

Can you send the items listed here: https://www.open-mpi.org/community/help/

> On Oct 11, 2016, at 4:00 PM, Cabral, Matias A wrote:
>
> Hi Limin,
>
> psm2_mq_irecv2 should be in libpsm2.so. I'm not quite sure how CentOS packs
> it, so I would like a little more info about
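For reference, that help page asks for things like the config.log from the failed build and the output of ompi_info; on a machine with Open MPI already installed:

    $ ompi_info --all > ompi_info.txt   # full dump of build and runtime configuration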

Re: [OMPI users] Openmpi 2.0.1 build --with-psm2 failed on CentOS 7.2

2016-10-11 Thread Cabral, Matias A
Hi Limin,

psm2_mq_irecv2 should be in libpsm2.so. I'm not quite sure how CentOS packs it, so I would like a little more info about the version being used. Some things to share:

> rpm -qi libpsm2-0.7-4.el7.x86_64
> objdump -p /usr/lib64/libpsm2.so | grep SONAME
> nm /usr/lib64/libpsm2.so | grep psm

[OMPI users] Openmpi 2.0.1 build --with-psm2 failed on CentOS 7.2

2016-10-11 Thread Limin Gu
Hi All,

I am trying to build openmpi 2.0.1 on a CentOS 7.2 system, and I have the following libpsm2 packages installed:

libpsm2-0.7-4.el7.x86_64
libpsm2-compat-0.7-4.el7.x86_64
libpsm2-compat-devel-0.7-4.el7.x86_64
libpsm2-devel-0.7-4.el7.x86_64

I added --with-psm2 to my configure, but it failed: -
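When a --with-psm2 configure fails, the exact test that failed is recorded in config.log (the standard autoconf log in the build directory); a quick way to locate it:

    $ grep -n psm2 config.log | head    # find the failing PSM2 checks and compiler errors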

Re: [OMPI users] Crash during MPI_Finalize

2016-10-11 Thread Jeff Squyres (jsquyres)
On Oct 11, 2016, at 8:58 AM, George Reeke wrote:
>
> George B. et al,
> -- Is it normal to top-post on this list? I am following your
> example, but other lists I am on prefer bottom-posting.

Stylistic note: we do both on this list. Specifically: there's no religious hate if you top-post. --

Re: [OMPI users] Crash during MPI_Finalize

2016-10-11 Thread George Reeke
George B. et al,
-- Is it normal to top-post on this list? I am following your example, but other lists I am on prefer bottom-posting.
-- I attach the complete code of the andmsg program, as it is quite short (some bits removed for brevity, and I have omitted my headers and startup function anin

Re: [OMPI users] centos 7.2 openmpi from repo, stdout issue

2016-10-11 Thread Emre Brookes
FYI - we upgraded to Open MPI 2.0.1 and this resolved the issue. Of course, it was not so simple to get there, as the CentOS 7.2 default gcc (4.8.4) produced an "internal compiler error" when recompiling NAMD with OMPI 2.0.1 and 1.10.4, so we had to install a newer compiler. One interesting re
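For others hitting the same compiler wall on CentOS 7, one common route to a newer GCC is Software Collections; the devtoolset version below is only a suggestion, not necessarily what was used here:

    $ sudo yum install centos-release-scl
    $ sudo yum install devtoolset-4-gcc devtoolset-4-gcc-c++
    $ scl enable devtoolset-4 bash   # shell with the newer toolchain first in PATH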

Re: [OMPI users] Using Open MPI with multiple versions of GCC and G++

2016-10-11 Thread Dave Love
"Jeff Squyres (jsquyres)" writes: > Especially with C++, the Open MPI team strongly recommends you > building Open MPI with the target versions of the compilers that you > want to use. Unexpected things can happen when you start mixing > versions of compilers (particularly across major versions

Re: [OMPI users] Launching hybrid MPI/OpenMP jobs on a cluster: correct OpenMPI flags?

2016-10-11 Thread Dave Love
Wirawan Purwanto writes:

> Instead of the scenario above, I was trying to get the MPI processes
> side by side (more like the "fill_up" policy in the SGE scheduler), i.e. fill
> node 0 first, then fill node 1, and so on. How do I do this properly?
>
> I tried a few attempts that failed:
>
> $ export OMP_NU
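For reference, the Open MPI (1.8 and later) mpirun options that control this kind of placement; the rank and thread counts below are illustrative:

    # fill each node's slots before moving to the next (fill_up style):
    $ mpirun --map-by slot -np 16 ./a.out
    # hybrid MPI/OpenMP, e.g. 2 ranks per node with 8 cores each:
    $ mpirun --map-by ppr:2:node:pe=8 --bind-to core -x OMP_NUM_THREADS=8 ./a.out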

Re: [OMPI users] what was the rationale behind rank mapping by socket?

2016-10-11 Thread Dave Love
Gilles Gouaillardet writes:

> Bennet,
>
> my guess is that mapping/binding to sockets was deemed the best compromise
> from an "out of the box" performance point of view.
>
> IIRC, we did fix some bugs that occurred when running under asymmetric
> cpusets/cgroups.
>
> if you still have some iss

Re: [OMPI users] MPI Behaviour Question

2016-10-11 Thread Reuti
Hi,

> On 11.10.2016 at 14:56, Mark Potter wrote:
>
> This question is related to OpenMPI 2.0.1 compiled with GCC 4.8.2 on
> RHEL 6.8 using Torque 6.0.2 with Moab 9.0.2. To be clear, I am an
> administrator and not a coder, and I suspect this is expected behavior,
> but I have been asked by a clie

Re: [OMPI users] MPI Behaviour Question

2016-10-11 Thread Gilles Gouaillardet
Mark,

My understanding is that shell meta-expansion occurs once, on the first node, so from an Open MPI point of view you really invoke
mpirun echo node0

I suspect
mpirun echo 'Hello from $(hostname)'
is what you want to do.

I do not know about
mpirun echo 'Hello from $HOSTNAME'
$HOSTNAME might be
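A concrete illustration of the expansion difference (node names are placeholders):

    # $(hostname) is expanded by the local shell before mpirun ever runs:
    $ mpirun --host node0,node1 echo "Hello from $(hostname)"
    # launching a shell on each node defers the expansion to that node:
    $ mpirun --host node0,node1 sh -c 'echo "Hello from $(hostname)"'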

[OMPI users] MPI Behaviour Question

2016-10-11 Thread Mark Potter
This question is related to OpenMPI 2.0.1 compiled with GCC 4.8.2 on RHEL 6.8 using Torque 6.0.2 with Moab 9.0.2. To be clear, I am an administrator and not a coder, and I suspect this is expected behavior, but I have been asked by a client to explain why this is happening. Using Torque, the followi