Can you reproduce the error in a small example?

Also, try using "use mpi" instead of "include 'mpif.h'", and see if that turns 
up any errors.
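The point of the module is compile-time checking: "use mpi" gives the compiler explicit interfaces for most MPI routines, so a wrong argument count or type becomes a compile error instead of a runtime crash. A minimal sketch of the switch (program and variable names here are just illustrative):

```fortran
program check_bindings
  ! Before: include 'mpif.h'  -- no compile-time checking of MPI call signatures
  use mpi          ! After: explicit interfaces catch argument mismatches
  implicit none
  integer :: ierr, rank

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  ! e.g. passing a REAL where an INTEGER is expected, or omitting ierr,
  ! is now diagnosed by the compiler rather than failing mysteriously at runtime
  call MPI_Finalize(ierr)
end program check_bindings
```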

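Also, to expand on Gilles' two points below: Intel MPI is being strict (and standard-conforming) when it rejects aliased send/recv buffers in MPI_ALLREDUCE — MPI_IN_PLACE is the portable way to reduce into the same buffer — and checking the returned error code after each call will localize the failure. A rough sketch (the buffer name and size are made up, not from your code):

```fortran
program allreduce_inplace
  use mpi
  implicit none
  integer :: ierr
  double precision :: buf(10)

  call MPI_Init(ierr)
  buf = 1.0d0
  ! MPI_IN_PLACE as the send buffer: buf is both input and output.
  ! This is the only standard-conforming way to "reduce in place";
  ! passing buf as both arguments is erroneous per the MPI standard.
  call MPI_Allreduce(MPI_IN_PLACE, buf, 10, MPI_DOUBLE_PRECISION, &
                     MPI_SUM, MPI_COMM_WORLD, ierr)
  if (ierr /= MPI_SUCCESS) then
     print *, 'MPI_Allreduce failed with code ', ierr
     call MPI_Abort(MPI_COMM_WORLD, 1, ierr)
  end if
  call MPI_Finalize(ierr)
end program allreduce_inplace
```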

> On Sep 2, 2015, at 12:13 PM, Diego Avesani <diego.aves...@gmail.com> wrote:
> 
> Dear Gilles, Dear all,
> I have found the error: some CPUs had no elements to share. It was my mistake.
> 
> Now I have another one:
> 
> Fatal error in MPI_Isend: Invalid communicator, error stack:
> MPI_Isend(158): MPI_Isend(buf=0x137b7b4, count=1, INVALID DATATYPE, dest=0, 
> tag=0, comm=0x0, request=0x7fffe8726fc0) failed
> 
> In this case it does not work with Intel MPI, but it works with Open MPI.
> 
> Can you read anything in particular from the error message?
> 
> Diego
> 
> 
> 
> On 2 September 2015 at 14:52, Gilles Gouaillardet 
> <gilles.gouaillar...@gmail.com> wrote:
> Diego,
> 
> about MPI_Allreduce: you should use MPI_IN_PLACE if you want the same buffer 
> for send and recv.
> 
> about the stack, I notice comm is NULL, which is a bit surprising...
> at first glance, the type creation looks good.
> that being said, you do not check that MPIdata%iErr is MPI_SUCCESS after each 
> MPI call.
> I recommend you do this first, so you can catch the error as soon as it 
> happens, and hopefully understand why it occurs.
> 
> Cheers,
> 
> Gilles
> 
> 
> On Wednesday, September 2, 2015, Diego Avesani <diego.aves...@gmail.com> 
> wrote:
> Dear all,
> 
> I have noticed a small difference between Open MPI and Intel MPI.
> For example, in MPI_ALLREDUCE Intel MPI does not allow the same variable to be 
> used as both the send and receive buffer.
> 
> I have written my code with Open MPI, but unfortunately I have to run it on an 
> Intel MPI cluster.
> Now I have the following error:
> 
> Fatal error in MPI_Isend: Invalid communicator, error stack:
> MPI_Isend(158): MPI_Isend(buf=0x1dd27b0, count=1, INVALID DATATYPE, dest=0, 
> tag=0, comm=0x0, request=0x7fff9d7dd9f0) failed
> 
> 
> This is how I create my type:
> 
>   CALL MPI_TYPE_VECTOR(1, Ncoeff_MLS, Ncoeff_MLS, MPI_DOUBLE_PRECISION, coltype, MPIdata%iErr)
>   CALL MPI_TYPE_COMMIT(coltype, MPIdata%iErr)
>   !
>   CALL MPI_TYPE_VECTOR(1, nVar, nVar, coltype, MPI_WENO_TYPE, MPIdata%iErr)
>   CALL MPI_TYPE_COMMIT(MPI_WENO_TYPE, MPIdata%iErr)
> 
> 
> Do you believe the problem is here?
> Is this also the way Intel MPI creates a datatype?
> 
> Maybe I could also ask the Intel MPI users.
> What do you think?
> 
> Diego
> 
> 
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post: 
> http://www.open-mpi.org/community/lists/users/2015/09/27523.php
> 
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post: 
> http://www.open-mpi.org/community/lists/users/2015/09/27524.php


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/
