Re: [OMPI users] MPI_IN_PLACE

2017-07-26 Thread Gilles Gouaillardet
Thanks Jeff for your offer, I will contact you off-list later. I tried gcc+gfortran and gcc+ifort on both Linux and OS X; so far, only gcc+ifort on OS X is failing. I will try icc+ifort on OS X from now. Short story: MPI_IN_PLACE is not recognized as such by the ompi Fortran wrapper, and I do not

Re: [OMPI users] OpenMPI 1.10.7 and Infiniband

2017-07-26 Thread Russell Dekema
Are you sure your InfiniBand network is up and running? What kind of output do you get if you run the command 'ibv_devinfo'? Sincerely, Rusty Dekema On Wed, Jul 26, 2017 at 2:40 PM, Sajesh Singh wrote: > OS: Centos 7 > > Infiniband Packages from OS repos > > Mellanox HCA > > >

Re: [OMPI users] MPI_IN_PLACE

2017-07-26 Thread Volker Blum
Thanks! That’s great. Sounds like the exact combination I have here. Thanks also to George. Sorry that the test did not trigger on a more standard platform - that would have simplified things. Best wishes Volker > On Jul 27, 2017, at 3:56 AM, Gilles Gouaillardet wrote: > >

Re: [OMPI users] MPI_IN_PLACE

2017-07-26 Thread Jeff Hammond
Does this happen with ifort but not other Fortran compilers? If so, write me off-list if there's a need to report a compiler issue. Jeff On Wed, Jul 26, 2017 at 6:59 PM Gilles Gouaillardet wrote: > Folks, > > > I am able to reproduce the issue on OS X (Sierra) with stock gcc

Re: [OMPI users] MPI_IN_PLACE

2017-07-26 Thread Gilles Gouaillardet
Folks, I am able to reproduce the issue on OS X (Sierra) with stock gcc (aka clang) and ifort 17.0.4. I will investigate this from now. Cheers, Gilles On 7/27/2017 9:28 AM, George Bosilca wrote: Volker, Unfortunately, I can't replicate with icc. I tried on an x86_64 box with Intel

Re: [OMPI users] Questions about integration with resource distribution systems

2017-07-26 Thread r...@open-mpi.org
Oh no, that's not right. Mpirun launches daemons using qrsh and those daemons spawn the app's procs. SGE has no visibility of the app at all. Sent from my iPad > On Jul 26, 2017, at 7:46 AM, Kulshrestha, Vipul > wrote: > > Thanks Reuti & RHC for your responses. >

Re: [OMPI users] MPI_IN_PLACE

2017-07-26 Thread George Bosilca
Volker, Unfortunately, I can't replicate with icc. I tried on an x86_64 box with Intel compiler chain 17.0.4 20170411 to no avail. I also tested the 3.0.0-rc1 tarball and the current master, and your test completes without errors in all cases. Once you figure out an environment where you can

[OMPI users] OpenMPI 1.10.7 and Infiniband

2017-07-26 Thread Sajesh Singh
OS: CentOS 7. InfiniBand packages from OS repos. Mellanox HCA. Compiled OpenMPI 1.10.7 on CentOS 7 with the following config: ./configure --prefix=/usr/local/software/OpenMPI/openmpi-1.10.7 --with-tm=/opt/pbs --with-verbs A snippet from config.log seems to indicate that the InfiniBand header files

Re: [OMPI users] MPI_IN_PLACE

2017-07-26 Thread Volker Blum
Thanks! Yes, trying with Intel 2017 would be very nice. > On Jul 26, 2017, at 6:12 PM, George Bosilca wrote: > > No, I don't have (or used where they were available) the Intel compiler. I > used clang and gfortran. I can try on a Linux box with the Intel 2017 >

Re: [OMPI users] MPI_IN_PLACE

2017-07-26 Thread George Bosilca
No, I don't have (or used where they were available) the Intel compiler. I used clang and gfortran. I can try on a Linux box with the Intel 2017 compilers. George. On Wed, Jul 26, 2017 at 11:59 AM, Volker Blum wrote: > Did you use Intel Fortran 2017 as well? > > (I’m

Re: [OMPI users] MPI_IN_PLACE

2017-07-26 Thread Volker Blum
Did you use Intel Fortran 2017 as well? (I’m asking because I did see the same issue with a combination of an earlier Intel Fortran 2017 version and OpenMPI on an Intel/Infiniband Linux HPC machine … but not Intel Fortran 2016 on the same machine. Perhaps I can revive my access to that

Re: [OMPI users] MPI_IN_PLACE

2017-07-26 Thread Volker Blum
Thanks! I tried ‘use mpi’, which compiles fine. Same result as with ‘include mpif.h’, in that the output is * MPI_IN_PLACE does not appear to work as intended. * Checking whether MPI_ALLREDUCE works at all. * Without MPI_IN_PLACE, MPI_ALLREDUCE appears to work. Hm. Any other thoughts?

Re: [OMPI users] MPI_IN_PLACE

2017-07-26 Thread Gilles Gouaillardet
Volker, With mpi_f08, you have to declare Type(MPI_Comm) :: mpi_comm_global (I am afk and not 100% sure of the syntax). A simpler option is to use mpi. Cheers, Gilles Volker Blum wrote: >Hi Gilles, > >Thank you very much for the response! > >Unfortunately, I don’t have
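
For reference, a minimal sketch of the declaration Gilles describes (he notes he is not 100% sure of the syntax; this follows the standard mpi_f08 module, and the program name is illustrative):

program f08_comm_example
  use mpi_f08
  implicit none
  ! With the mpi_f08 bindings a communicator is a derived type,
  ! not a plain integer as with 'use mpi' or 'include mpif.h'.
  type(MPI_Comm) :: mpi_comm_global
  integer :: ierr

  call MPI_Init(ierr)
  mpi_comm_global = MPI_COMM_WORLD   ! MPI_COMM_WORLD is also type(MPI_Comm) here
  call MPI_Finalize(ierr)
end program f08_comm_example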

Re: [OMPI users] MPI_IN_PLACE

2017-07-26 Thread Volker Blum
Hi Gilles, Thank you very much for the response! Unfortunately, I don’t have access to a different system with the issue right now. As I said, it’s not new; it just keeps creeping up unexpectedly again on different platforms. What puzzles me is that I’ve encountered the same problem with low

Re: [OMPI users] Questions about integration with resource distribution systems

2017-07-26 Thread Kulshrestha, Vipul
Thanks Reuti & RHC for your responses. My application does not rely on the actual value of m_mem_free; I used this as an example. In an open source SGE environment, we use the mem_free resource. Now, I understand that SGE will allocate requested resources (based on qsub options) and then launch

Re: [OMPI users] Questions about integration with resource distribution systems

2017-07-26 Thread Reuti
> On 26.07.2017 at 15:09, r...@open-mpi.org wrote: > > mpirun doesn’t get access to that requirement, nor does it need to do so. SGE > will use the requirement when determining the nodes to allocate. m_mem_free appears to come from Univa GE and is not part of the open source versions. So I

Re: [OMPI users] Questions about integration with resource distribution systems

2017-07-26 Thread Reuti
Hi, > On 26.07.2017 at 15:03, Kulshrestha, Vipul > wrote: > > Thanks for a quick response. > > I will try building OMPI as suggested. > > On the integration with unsupported distribution systems, we cannot use > a script-based approach, because often these

Re: [OMPI users] Questions about integration with resource distribution systems

2017-07-26 Thread r...@open-mpi.org
mpirun doesn’t get access to that requirement, nor does it need to do so. SGE will use the requirement when determining the nodes to allocate. mpirun just uses the nodes that SGE provides. What your cmd line does is restrict the entire operation on each node (daemon + 8 procs) to 40GB of

Re: [OMPI users] Groups and Communicators

2017-07-26 Thread George Bosilca
Diego, As all your processes are started under the umbrella of a single mpirun, they have a communicator in common: MPI_COMM_WORLD. One possible implementation, using MPI_Comm_split, would be the following: MPI_Comm small_comm, leader_comm; /* Create small_comm on all processes */ /* Now
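
A Fortran sketch of the two-level split George outlines is shown below; the group size n_small and all variable names are illustrative assumptions, not taken from the original mail.

program two_level_split
  use mpi
  implicit none
  integer, parameter :: n_small = 4     ! assumed size of each small group
  integer :: ierr, world_rank, color
  integer :: small_comm, leader_comm, small_rank

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, world_rank, ierr)

  ! 1) split MPI_COMM_WORLD into small groups of n_small consecutive ranks
  call MPI_Comm_split(MPI_COMM_WORLD, world_rank / n_small, world_rank, &
                      small_comm, ierr)
  call MPI_Comm_rank(small_comm, small_rank, ierr)

  ! 2) rank 0 of each small group joins leader_comm; all other ranks pass
  !    MPI_UNDEFINED and receive MPI_COMM_NULL
  if (small_rank == 0) then
     color = 0
  else
     color = MPI_UNDEFINED
  end if
  call MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, leader_comm, ierr)

  call MPI_Finalize(ierr)
end program two_level_split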

Re: [OMPI users] Questions about integration with resource distribution systems

2017-07-26 Thread Kulshrestha, Vipul
Thanks for a quick response. I will try building OMPI as suggested. On the integration with unsupported distribution systems, we cannot use a script-based approach, because often these machines don’t have ssh permission in customer environments. I will explore the path of writing an orte component.

Re: [OMPI users] MPI_IN_PLACE

2017-07-26 Thread Gilles Gouaillardet
Volker, thanks, I will have a look at it. Meanwhile, if you can reproduce this issue on a more mainstream platform (e.g. Linux + gfortran), please let me know. Since you are using ifort, Open MPI was built with Fortran 2008 bindings, so you can replace include 'mpif.h' with use mpi_f08 and who

Re: [OMPI users] Questions about integration with resource distribution systems

2017-07-26 Thread Reuti
Hi, > On 26.07.2017 at 02:16, r...@open-mpi.org wrote: > > >> On Jul 25, 2017, at 3:48 PM, Kulshrestha, Vipul >> wrote: >> >> I have several questions about integration of openmpi with resource queuing >> systems. >> >> 1. >> I understand that openmpi

Re: [OMPI users] Questions about integration with resource distribution systems

2017-07-26 Thread Reuti
Hi, > On 26.07.2017 at 00:48, Kulshrestha, Vipul > wrote: > > I have several questions about integration of openmpi with resource queuing > systems. > > 1. > I understand that openmpi supports integration with various resource > distribution systems such as

Re: [OMPI users] Groups and Communicators

2017-07-26 Thread Diego Avesani
Dear George, Dear all, I use "mpirun -np xx ./a.out". I do not know if I have some common ground. I mean, I have to design everything from the beginning. You can find what I would like to do in the attachment. Basically, an MPI cast in another MPI. Consequently, I am thinking of MPI groups or MPI

Re: [OMPI users] MPI_IN_PLACE

2017-07-26 Thread Volker Blum
Dear Gilles, Thank you very much for the fast answer. Darn. I feared it might not occur on all platforms, since my former MacBook (with an older OpenMPI version) no longer exhibited the problem, while a different Linux/Intel machine did last December, etc. On this specific machine, the configure

Re: [OMPI users] MPI_IN_PLACE

2017-07-26 Thread Gilles Gouaillardet
Volker, I was unable to reproduce this issue on Linux. Can you please post your full configure command line, your GNU compiler version, and the full test program? Also, how many MPI tasks are you running? Cheers, Gilles On Wed, Jul 26, 2017 at 4:25 PM, Volker Blum

[OMPI users] MPI_IN_PLACE

2017-07-26 Thread Volker Blum
Hi, I tried openmpi-3.0.0rc1.tar.gz using Intel Fortran 2017 and gcc on a current MacOS system. For this version, it seems to me that MPI_IN_PLACE returns incorrect results (while other MPI implementations, including some past OpenMPI versions, work fine). This can be seen with a simple
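
The test program itself is not included in the archive preview; the following is a minimal sketch of the kind of check discussed in this thread, written against the 'use mpi' bindings (program and variable names are illustrative, not from the original code). Every rank contributes 1.0, so after an in-place MPI_ALLREDUCE each rank should hold the number of ranks.

program check_in_place
  use mpi
  implicit none
  integer :: ierr, rank, nprocs
  double precision :: val

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  val = 1.0d0
  ! The reported failure mode is that, with some compiler combinations, the
  ! Fortran wrapper does not recognize MPI_IN_PLACE and val comes back wrong.
  call MPI_Allreduce(MPI_IN_PLACE, val, 1, MPI_DOUBLE_PRECISION, MPI_SUM, &
                     MPI_COMM_WORLD, ierr)

  if (rank == 0) then
     if (abs(val - dble(nprocs)) > 1.0d-12) then
        print *, '* MPI_IN_PLACE does not appear to work as intended.'
     else
        print *, '* MPI_IN_PLACE appears to work.'
     end if
  end if

  call MPI_Finalize(ierr)
end program check_in_place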