You still haven't shown us what goes wrong; you have only given us the
error message and assumed it is caused by ill-defined type creation.
It might just as well be that you are calling MPI_Allreduce erroneously.
Please give us more information...

2015-09-03 14:59 GMT+00:00 Diego Avesani <diego.aves...@gmail.com>:

> Dear Nick, Dear all,
>
> I use mpi.
>
> I recompile everything, every time.
>
> I do not understand what I should do.
>
> Thanks again
>
> Diego
>
>
>
> On 3 September 2015 at 16:52, Nick Papior <nickpap...@gmail.com> wrote:
>
>> When you change environments, that is, switch between OpenMPI and Intel
>> MPI or change compilers, it is recommended that you recompile everything.
>>
>> use mpi
>>
>> is a module statement; you cannot mix compiled modules between
>> compilers/environments. Sadly, the Fortran specification does not enforce
>> a standard module file format, which is why recompiling is necessary.
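>>
>> For example, on a cluster that uses environment modules, a clean switch
>> could look something like this (a sketch only; the module names are
>> site-specific placeholders, so check "module avail" on your cluster):
>>
>>   module purge
>>   module load intel intelmpi      # or: module load gcc openmpi
>>   make clean && make              # rebuild every object and .mod file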
>>
>>
>>
>> 2015-09-03 14:43 GMT+00:00 Diego Avesani <diego.aves...@gmail.com>:
>>
>>> Dear Jeff, Dear all,
>>> I normally use "USE MPI".
>>>
>>> This is the answer from the Intel HPC forum:
>>>
>>> *If you are switching between Intel and OpenMPI, you must remember not to
>>> mix environments.  You might use modules to manage this.  As the data type
>>> encodings differ, you must take care that all objects are built against the
>>> same headers.*
>>>
>>> Could someone explain to me what these modules are and how I can use them?
>>>
>>> Thanks
>>>
>>> Diego
>>>
>>>
>>>
>>> On 2 September 2015 at 19:07, Jeff Squyres (jsquyres) <
>>> jsquy...@cisco.com> wrote:
>>>
>>>> Can you reproduce the error in a small example?
>>>>
>>>> Also, try using "use mpi" instead of "include 'mpif.h'", and see if
>>>> that turns up any errors.
>>>>
>>>>
>>>> > On Sep 2, 2015, at 12:13 PM, Diego Avesani <diego.aves...@gmail.com>
>>>> wrote:
>>>> >
>>>> > Dear Gilles, Dear all,
>>>> > I have found the error: some CPUs had no elements to share. It was my
>>>> error.
>>>> >
>>>> > Now I have another one:
>>>> >
>>>> > Fatal error in MPI_Isend: Invalid communicator, error stack:
>>>> > MPI_Isend(158): MPI_Isend(buf=0x137b7b4, count=1, INVALID DATATYPE,
>>>> dest=0, tag=0, comm=0x0, request=0x7fffe8726fc0) failed
>>>> >
>>>> > In this case it does not work with Intel MPI, but it works with
>>>> OpenMPI.
>>>> >
>>>> > Can you see anything particular in the error message?
>>>> >
>>>> > Diego
>>>> >
>>>> >
>>>> >
>>>> > On 2 September 2015 at 14:52, Gilles Gouaillardet <
>>>> gilles.gouaillar...@gmail.com> wrote:
>>>> > Diego,
>>>> >
>>>> > about MPI_Allreduce, you should use MPI_IN_PLACE if you want to use
>>>> the same buffer for send and receive
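>>>> >
>>>> > for example, something like this (just a sketch, with a hypothetical
>>>> REAL*8 array x of length n):
>>>> >
>>>> >   CALL MPI_ALLREDUCE(MPI_IN_PLACE, x, n, MPI_DOUBLE_PRECISION, &
>>>> >                      MPI_SUM, MPI_COMM_WORLD, MPIdata%iErr)
>>>> >
>>>> > with MPI_IN_PLACE as the send buffer, x is both input and output, and
>>>> no separate receive buffer is needed.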
>>>> >
>>>> > about the stack, I notice comm is NULL which is a bit surprising...
>>>> > at first glance, type creation looks good.
>>>> > that being said, you do not check that MPIdata%iErr is MPI_SUCCESS
>>>> after each MPI call.
>>>> > I recommend you do this first, so you can catch the error as soon as it
>>>> happens, and hopefully understand why it occurs.
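>>>> >
>>>> > for example (a sketch, after each call):
>>>> >
>>>> >   CALL MPI_TYPE_COMMIT(coltype, MPIdata%iErr)
>>>> >   IF (MPIdata%iErr .NE. MPI_SUCCESS) THEN
>>>> >      WRITE(*,*) 'MPI_TYPE_COMMIT failed, iErr = ', MPIdata%iErr
>>>> >      CALL MPI_ABORT(MPI_COMM_WORLD, 1, MPIdata%iErr)
>>>> >   END IF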
>>>> >
>>>> > Cheers,
>>>> >
>>>> > Gilles
>>>> >
>>>> >
>>>> > On Wednesday, September 2, 2015, Diego Avesani <
>>>> diego.aves...@gmail.com> wrote:
>>>> > Dear all,
>>>> >
>>>> > I have noticed a small difference between OpenMPI and Intel MPI.
>>>> > For example, in MPI_ALLREDUCE Intel MPI does not allow using the same
>>>> variable for the send and receive buffers.
>>>> >
>>>> > I have written my code with OpenMPI, but unfortunately I have to run
>>>> it on an Intel MPI cluster.
>>>> > Now I have the following error:
>>>> >
>>>> > Fatal error in MPI_Isend: Invalid communicator, error stack:
>>>> > MPI_Isend(158): MPI_Isend(buf=0x1dd27b0, count=1, INVALID DATATYPE,
>>>> dest=0, tag=0, comm=0x0, request=0x7fff9d7dd9f0) failed
>>>> >
>>>> >
>>>> > This is how I create my type:
>>>> >
>>>> >   CALL  MPI_TYPE_VECTOR(1, Ncoeff_MLS, Ncoeff_MLS,
>>>> MPI_DOUBLE_PRECISION, coltype, MPIdata%iErr)
>>>> >   CALL  MPI_TYPE_COMMIT(coltype, MPIdata%iErr)
>>>> >   !
>>>> >   CALL  MPI_TYPE_VECTOR(1, nVar, nVar, coltype, MPI_WENO_TYPE,
>>>> MPIdata%iErr)
>>>> >   CALL  MPI_TYPE_COMMIT(MPI_WENO_TYPE, MPIdata%iErr)
>>>> >
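>>>> > Side note: since count=1 and the blocklength equals the stride, I
>>>> believe each vector describes a single contiguous block, so if I
>>>> understand correctly the same types could also be built with
>>>> MPI_TYPE_CONTIGUOUS, e.g.:
>>>> >
>>>> >   CALL MPI_TYPE_CONTIGUOUS(Ncoeff_MLS, MPI_DOUBLE_PRECISION, &
>>>> >                            coltype, MPIdata%iErr)
>>>> >   CALL MPI_TYPE_COMMIT(coltype, MPIdata%iErr)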
>>>> >
>>>> > Do you believe the problem is here?
>>>> > Is this also the way Intel MPI creates a datatype?
>>>> >
>>>> > Maybe I could also ask the Intel MPI users.
>>>> > What do you think?
>>>> >
>>>> > Diego
>>>> >
>>>> >
>>>> > _______________________________________________
>>>> > users mailing list
>>>> > us...@open-mpi.org
>>>> > Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
>>>> > Link to this post:
>>>> http://www.open-mpi.org/community/lists/users/2015/09/27523.php
>>>> >
>>>>
>>>>
>>>> --
>>>> Jeff Squyres
>>>> jsquy...@cisco.com
>>>> For corporate legal information go to:
>>>> http://www.cisco.com/web/about/doing_business/legal/cri/
>>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>> --
>> Kind regards Nick
>>
>>
>
>
>



-- 
Kind regards Nick
