Re: [OMPI users] Groups and Communicators

2017-07-28 Thread Diego Avesani
Dear George, Dear all, thanks, thanks a lot. I will tell you everything. I will also try to implement your suggestion. Unfortunately, the program that I have shown you is not working. I get the following error:
[] *** An error occurred in MPI_Comm_rank
[] *** reported by process [643497985,7]

Re: [OMPI users] Groups and Communicators

2017-07-28 Thread George Bosilca
I guess the second comm_rank call is invalid on all non-leader processes, as their LEADER_COMM communicator is MPI_COMM_NULL. george
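In Fortran, the guard George suggests looks like the fragment below. LEADER_COMM is the communicator he refers to and leader_rank appears in Diego's declarations further down; everything around the fragment is omitted:

  ! MPI_COMM_SPLIT returns MPI_COMM_NULL to ranks whose color was
  ! MPI_UNDEFINED; calling MPI_COMM_RANK on such a handle is an error.
  IF (LEADER_COMM .NE. MPI_COMM_NULL) THEN
     CALL MPI_COMM_RANK(LEADER_COMM, leader_rank, ierror)
  END IF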

Re: [OMPI users] Groups and Communicators

2017-07-28 Thread Diego Avesani
Dear George, Dear all, here the code:

PROGRAM TEST
  USE MPI
  IMPLICIT NONE
  ! mpif90 -r8 *.f90
  !
  INTEGER :: rank
  INTEGER :: subrank, leader_rank
  INTEGER :: nCPU
  INTEGER :: subnCPU
  INTEGER :: ierror
  INTEGER :: tag

[OMPI users] Memory leak in Open MPI 2.1.1

2017-07-28 Thread McGrattan, Kevin B. Dr. (Fed)
I am using Open MPI 2.1.1 along with Intel Fortran 17 update 4 and I am experiencing what I think is a memory leak with a job that uses 184 MPI processes. The memory used per process appears to be increasing by about 1 to 2 percent per hour. My code uses mostly persistent sends and receives to e
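For context, a minimal sketch of the persistent-communication lifecycle Kevin describes; the buffer size, ring partners, and iteration count are illustrative, not his actual code. The key point is that persistent requests are created once with MPI_SEND_INIT/MPI_RECV_INIT, restarted each step with MPI_STARTALL, and freed exactly once; re-creating them inside the loop without MPI_REQUEST_FREE is one common way for per-process memory to creep upward:

PROGRAM persistent_loop
  USE MPI
  IMPLICIT NONE
  INTEGER, PARAMETER :: n = 1024
  DOUBLE PRECISION :: sendbuf(n), recvbuf(n)
  INTEGER :: rank, nCPU, dest, src, ierror, step
  INTEGER :: requests(2)
  INTEGER :: statuses(MPI_STATUS_SIZE, 2)

  CALL MPI_INIT(ierror)
  CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierror)
  CALL MPI_COMM_SIZE(MPI_COMM_WORLD, nCPU, ierror)
  dest = MOD(rank + 1, nCPU)          ! ring exchange: send right,
  src  = MOD(rank - 1 + nCPU, nCPU)   ! receive from the left

  ! Create the persistent requests ONCE, outside the time loop.
  CALL MPI_SEND_INIT(sendbuf, n, MPI_DOUBLE_PRECISION, dest, 0, &
                     MPI_COMM_WORLD, requests(1), ierror)
  CALL MPI_RECV_INIT(recvbuf, n, MPI_DOUBLE_PRECISION, src, 0, &
                     MPI_COMM_WORLD, requests(2), ierror)

  DO step = 1, 100
     sendbuf = DBLE(step)
     CALL MPI_STARTALL(2, requests, ierror)  ! restart, do not re-create
     CALL MPI_WAITALL(2, requests, statuses, ierror)
  END DO

  ! Release the persistent requests exactly once.
  CALL MPI_REQUEST_FREE(requests(1), ierror)
  CALL MPI_REQUEST_FREE(requests(2), ierror)
  CALL MPI_FINALIZE(ierror)
END PROGRAM persistent_loop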

Re: [OMPI users] Groups and Communicators

2017-07-28 Thread Diego Avesani
Dear George, Dear all, I have just rewritten the code to make it clearer:

  INTEGER :: colorl, colorglobal
  INTEGER :: LOCAL_COMM, MASTER_COMM
  !
  !---
  ! create WORLD comm
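A sketch of how those declarations are typically wired together: the variable names (colorl, colorglobal, LOCAL_COMM, MASTER_COMM) come from Diego's snippet, while the group size of 4 and the leader rule are assumptions for illustration. Note the MPI_COMM_NULL guard from George's reply before the second MPI_COMM_RANK:

PROGRAM split_demo
  USE MPI
  IMPLICIT NONE
  INTEGER :: rank, nCPU, subrank, leader_rank, ierror
  INTEGER :: colorl, colorglobal
  INTEGER :: LOCAL_COMM, MASTER_COMM

  CALL MPI_INIT(ierror)
  CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierror)
  CALL MPI_COMM_SIZE(MPI_COMM_WORLD, nCPU, ierror)

  ! Assumed grouping rule: blocks of 4 consecutive ranks form a group.
  colorl = rank / 4
  CALL MPI_COMM_SPLIT(MPI_COMM_WORLD, colorl, rank, LOCAL_COMM, ierror)
  CALL MPI_COMM_RANK(LOCAL_COMM, subrank, ierror)

  ! Only local rank 0 of each group joins MASTER_COMM; all other
  ! ranks pass MPI_UNDEFINED and get MPI_COMM_NULL back.
  IF (subrank .EQ. 0) THEN
     colorglobal = 0
  ELSE
     colorglobal = MPI_UNDEFINED
  END IF
  CALL MPI_COMM_SPLIT(MPI_COMM_WORLD, colorglobal, rank, MASTER_COMM, ierror)

  ! Guard before querying the leader communicator.
  IF (MASTER_COMM .NE. MPI_COMM_NULL) THEN
     CALL MPI_COMM_RANK(MASTER_COMM, leader_rank, ierror)
  END IF

  CALL MPI_FINALIZE(ierror)
END PROGRAM split_demo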