[OMPI users] difference between OPENMPI and Intel MPI (DATATYPE)

2015-09-02 Thread Diego Avesani
Dear all, I have noticed a small difference between Open MPI and Intel MPI. For example, in MPI_ALLREDUCE Intel MPI does not allow using the same variable as both the send and the receive buffer. I have written my code with Open MPI, but unfortunately I have to run it on an Intel MPI cluster. Now I have the foll
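
For reference, a minimal sketch of the call that both implementations accept: the MPI standard forbids aliasing the send and receive buffers, so they must be distinct variables (the names below are illustrative, not taken from Diego's code).

  program allreduce_two_buffers
    use mpi
    implicit none
    integer :: ierr, rank
    double precision :: local_sum, global_sum   ! illustrative names

    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    local_sum = dble(rank + 1)

    ! The MPI standard forbids aliasing the send and receive buffers,
    ! so local_sum and global_sum must be distinct variables here.
    call MPI_Allreduce(local_sum, global_sum, 1, MPI_DOUBLE_PRECISION, &
                       MPI_SUM, MPI_COMM_WORLD, ierr)

    if (rank == 0) print *, 'global sum = ', global_sum
    call MPI_Finalize(ierr)
  end program allreduce_two_buffers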

Re: [OMPI users] difference between OPENMPI and Intel MPI (DATATYPE)

2015-09-02 Thread Gilles Gouaillardet
Diego, about MPI_Allreduce: you should use MPI_IN_PLACE if you want the same buffer for send and recv. About the stack, I notice comm is NULL, which is a bit surprising... At first glance, the type creation looks good. That being said, you do not check that MPIdata%iErr is MPI_SUCCESS after each MPI call. I
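
A minimal sketch of what Gilles suggests, with MPI_IN_PLACE as the send buffer and the error code checked after the call (the buffer name and size are illustrative assumptions):

  program allreduce_in_place
    use mpi
    implicit none
    integer :: ierr, rank
    double precision :: buf(3)   ! illustrative buffer

    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    buf = dble(rank)

    ! MPI_IN_PLACE as the send buffer: buf is both input and output on every rank
    call MPI_Allreduce(MPI_IN_PLACE, buf, 3, MPI_DOUBLE_PRECISION, &
                       MPI_SUM, MPI_COMM_WORLD, ierr)
    if (ierr /= MPI_SUCCESS) then
       print *, 'MPI_Allreduce failed, ierr = ', ierr
       call MPI_Abort(MPI_COMM_WORLD, 1, ierr)
    end if

    if (rank == 0) print *, 'sum = ', buf
    call MPI_Finalize(ierr)
  end program allreduce_in_place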

Re: [OMPI users] difference between OPENMPI and Intel MPI (DATATYPE)

2015-09-02 Thread Diego Avesani
Dear Gilles, Dear all, I have found the error. Some CPUs have no elements to share; it was my mistake. Now I have another one: *Fatal error in MPI_Isend: Invalid communicator, error stack:* *MPI_Isend(158): MPI_Isend(buf=0x137b7b4, count=1, INVALID DATATYPE, dest=0, tag=0, comm=0x0, request=0x7fffe8
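
For comparison, a minimal sketch of an MPI_Isend that uses a committed derived datatype and a valid communicator; an unconstructed or uncommitted type handle and a zero/uninitialized communicator are the usual causes of the two errors quoted above (the contiguous type and payload below are illustrative, not Diego's actual particle type):

  program isend_committed_type
    use mpi
    implicit none
    integer :: ierr, rank, nproc, newtype, req
    double precision :: particle(4)   ! illustrative payload

    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    call MPI_Comm_size(MPI_COMM_WORLD, nproc, ierr)

    ! Construct and *commit* the derived type before any communication uses it;
    ! an unconstructed or uncommitted handle is reported as INVALID DATATYPE.
    call MPI_Type_contiguous(4, MPI_DOUBLE_PRECISION, newtype, ierr)
    call MPI_Type_commit(newtype, ierr)

    particle = dble(rank)
    if (nproc > 1) then
       if (rank == 1) then
          ! comm must be a real communicator (e.g. MPI_COMM_WORLD), never 0 or uninitialized
          call MPI_Isend(particle, 1, newtype, 0, 0, MPI_COMM_WORLD, req, ierr)
          call MPI_Wait(req, MPI_STATUS_IGNORE, ierr)
       else if (rank == 0) then
          call MPI_Recv(particle, 1, newtype, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
       end if
    end if

    call MPI_Type_free(newtype, ierr)
    call MPI_Finalize(ierr)
  end program isend_committed_type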

Re: [OMPI users] difference between OPENMPI and Intel MPI (DATATYPE)

2015-09-02 Thread Jeff Squyres (jsquyres)
Can you reproduce the error in a small example? Also, try using "use mpi" instead of "include 'mpif.h'", and see if that turns up any errors.
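
A minimal sketch of the switch Jeff suggests; with the mpi module the compiler sees explicit interfaces for most routines (the exact coverage depends on the MPI library), so argument mistakes such as a missing ierr can be caught at compile time rather than at run time:

  program hello_use_mpi
    use mpi            ! instead of: include 'mpif.h'
    implicit none
    integer :: ierr, rank

    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    ! With the module's explicit interfaces, forgetting ierr in the call above
    ! would be a compile-time error instead of undefined run-time behaviour.
    print *, 'hello from rank ', rank
    call MPI_Finalize(ierr)
  end program hello_use_mpi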

[OMPI users] tracking down what's causing a cuIpcOpenMemHandle error emitted by OpenMPI

2015-09-02 Thread Lev Givon
I recently noticed the following error when running a Python program I'm developing that repeatedly performs GPU-to-GPU data transfers via OpenMPI: The call to cuIpcGetMemHandle failed. This means the GPU RDMA protocol cannot be used. cuIpcGetMemHandle return value: 1 address: 0x602e75000 Ch