Dear all,
let me go through all your mails, because there are a lot of things that I do
not understand.
I will reply as soon as possible, hopefully.
Diego
On 3 September 2015 at 17:23, Bennet Fauber wrote:
There is also the package Lmod, which provides similar functionality
to environment modules. It is maintained by TACC.
https://www.tacc.utexas.edu/research-development/tacc-projects/lmod
but I think the current source code is at
https://github.com/TACC/Lmod
-- bennet
On Thu, Sep 3, 2015 at
On Sep 3, 2015, at 10:43 AM, Diego Avesani wrote:
>
> Dear Jeff, Dear all,
> I normally use "USE MPI"
>
> This is the answer from the Intel HPC forum:
>
> If you are switching between Intel and OpenMPI you must remember not to
> mix environments. You might use modules to
Dear all, Dear Nick,
you are right.
I will now try to erase all *.mod and *.o files every time, and after that
recompile all *.f90 files.
If I get another error I will also send you the message.
Thanks again
Diego
On 3 September 2015 at 17:03, Nick Papior wrote:
Hi Diego,
I think the Intel HPC forum comment is about using environment modules to
manage your environment (PATH, LD_LIBRARY_PATH variables).
Most HPC systems use environment modules:
- Tcl ( http://modules.cvs.sourceforge.net/viewvc/modules/modules/tcl/ )
- C/Tcl (
Diego,
did you update your code to check that all MPI calls are successful?
(e.g. test that ierr is MPI_SUCCESS after each MPI call)
can you write a short program that reproduces the same issue?
if not, are your program and input data publicly available?
Cheers,
Gilles
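For illustration, a minimal sketch of the kind of check Gilles suggests,
assuming Fortran with "use mpi" (buf, dest, tag and request are placeholders,
not names from Diego's code):

  call MPI_Isend(buf, 1, MPI_DOUBLE_PRECISION, dest, tag, &
                 MPI_COMM_WORLD, request, ierr)
  if (ierr /= MPI_SUCCESS) then
     print *, 'MPI_Isend failed with ierr = ', ierr
     call MPI_Abort(MPI_COMM_WORLD, 1, ierr)
  end if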
On Thursday, September 3,
Hello,
On 09/03/15 16:52, Nick Papior wrote:
When you change environments, that is, change between OpenMPI and Intel MPI,
or change compilers, it is recommended that you recompile everything.
use mpi
is a module; you cannot mix these between compilers/environments. Sadly, the
Fortran specification does
You still haven't shown us anything about what goes wrong; you just give us
the error statement and assume it is because of ill-defined type creation,
but it might as well be because you call allreduce erroneously.
Please give us more information...
2015-09-03 14:59 GMT+00:00 Diego Avesani
Dear Nick, Dear all,
I use "use mpi".
I recompile everything, every time.
I do not understand what I should do.
Thanks again
Diego
On 3 September 2015 at 16:52, Nick Papior wrote:
> When you change environments, that is, change between OpenMPI and Intel
> MPI, or
Diego,
basically that means "do not build with openmpi and run with intelmpi, or
the other way around" and/or "do not build one part of your app with openmpi
and another part with intelmpi"
"part" can be your app or the use of third-party libraries.
if you use Intel ScaLAPACK, make sure you use the lib
When you change environments, that is, change between OpenMPI and Intel MPI,
or change compilers, it is recommended that you recompile everything.
use mpi
is a module; you cannot mix these between compilers/environments. Sadly, the
Fortran specification does not enforce a strict module format, which is why
Dear Jeff, Dear all,
I normally use "USE MPI"
This is the answer from the Intel HPC forum:
*If you are switching between Intel and OpenMPI you must remember not to
mix environments. You might use modules to manage this. As the data type
encodings differ, you must take care that all objects are built
Can you reproduce the error in a small example?
Also, try using "use mpi" instead of "include 'mpif.h'", and see if that turns
up any errors.
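For reference, a minimal self-contained program using the module form; unlike
"include 'mpif.h'", "use mpi" gives the compiler explicit interfaces for many
routines, so argument mismatches can be caught at compile time:

program hello
  use mpi
  implicit none
  integer :: rank, ierr
  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  print *, 'rank ', rank
  call MPI_Finalize(ierr)
end program hello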
> On Sep 2, 2015, at 12:13 PM, Diego Avesani wrote:
>
> Dear Gilles, Dear all,
> I have found the error. Some CPUs have no
Dear Gilles, Dear all,
I have found the error. Some CPUs have no elements to share. It was my error.
Now I have another one:
*Fatal error in MPI_Isend: Invalid communicator, error stack:*
*MPI_Isend(158): MPI_Isend(buf=0x137b7b4, count=1, INVALID DATATYPE,
dest=0, tag=0, comm=0x0,
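"Invalid communicator" together with INVALID DATATYPE can indicate that the
derived type was never committed and/or that the communicator argument is an
uninitialized variable (the stack shows comm=0x0). For illustration, a sketch
of the usual pattern (mytype and the arrays are hypothetical names, not
Diego's):

  integer :: mytype, blocklen(2), types(2), ierr, request
  integer(kind=MPI_ADDRESS_KIND) :: displ(2)
  ! ... fill blocklen, displ, types ...
  call MPI_Type_create_struct(2, blocklen, displ, types, mytype, ierr)
  call MPI_Type_commit(mytype, ierr)  ! without this, Isend sees an invalid datatype
  call MPI_Isend(buf, 1, mytype, dest, tag, MPI_COMM_WORLD, request, ierr)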
Diego,
about MPI_Allreduce, you should use MPI_IN_PLACE if you want the same
buffer in send and recv
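A minimal sketch of the in-place form (buf, n and comm are placeholders):

  call MPI_Allreduce(MPI_IN_PLACE, buf, n, MPI_DOUBLE_PRECISION, &
                     MPI_SUM, comm, ierr)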
about the stack, I notice comm is NULL which is a bit surprising...
at first glance, type creation looks good.
that being said, you do not check that MPIdata%iErr is MPI_SUCCESS after
each MPI call.
I
Dear all,
I have noticed a small difference between OpenMPI and Intel MPI.
For example, in Intel MPI it is not allowed to use the same variable as both
the send and receive buffer in MPI_ALLREDUCE.
I have written my code with OpenMPI, but unfortunately I have to run it on an
Intel MPI cluster.
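A sketch of the difference (x, y, n and comm are placeholders). Passing the
same variable as send and receive buffer actually violates the MPI standard's
aliasing rules; OpenMPI merely tolerates it while Intel MPI rejects it:

  ! not valid MPI: send and receive buffers alias
  ! call MPI_Allreduce(x, x, n, MPI_DOUBLE_PRECISION, MPI_SUM, comm, ierr)
  ! portable: a distinct receive buffer...
  call MPI_Allreduce(x, y, n, MPI_DOUBLE_PRECISION, MPI_SUM, comm, ierr)
  ! ...or MPI_IN_PLACE, as Gilles notes above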
Now I have the