Beckman, Scott P [M S E]
Sent: Thu 02/07/2009 22:09
To: SIESTA-L@listserv.uam.es
Subject: Re: [SIESTA-L] Compile problem : redeclared MPI data types

Hello Javier,
Thanks for your suggestion. Yes, this does solve the compile problem. But
when the build process gets to the final step, the linker still
Sent: Thu 7/2/2009 3:55 AM
To: SIESTA-L@listserv.uam.es
Subject: Re: [SIESTA-L] Compile problem : redeclared MPI data types
Dear Scott:
Edit the mpi.F file in the Src/MPI subdirectory,
and include the USE MPI__INCLUDE statement
inside the two blocks of the preprocessor.
The final lines should look like
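(The example that followed appears to have been lost in the archive. A minimal sketch of what the edited blocks might look like, assuming the usual OLD_CRAY preprocessor structure in Src/MPI/mpi.F; the rename targets shown are illustrative, not the verbatim Siesta source:)

```fortran
! Hypothetical sketch -- the actual mpi.F in Siesta 2.0.1 may differ.
! The key point is that the USE statement appears inside BOTH
! preprocessor branches, with Fortran renames (local => module name)
! selecting the appropriate MPI datatype for each platform.
#ifdef OLD_CRAY
      USE MPI__INCLUDE, DAT_single  => MPI_real,
     &                  DAT_2single => MPI_2real
#else
      USE MPI__INCLUDE, DAT_single  => MPI_real4,
     &                  DAT_2single => MPI_2real4
#endif
```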
Hello,
I'm trying to compile Siesta 2.0.1 for parallel processing using MPI. My
system is an AMD Opteron cluster with InfiniPath interconnects. The
compiler is the PathScale compiler and I'm using the ACML 3.0.0 libraries.
The BLACS and ScaLAPACK libraries are installed and tested.
When I try
There is some problem with how PathScale handles the modules in
Src/MPI/mpi.F. If I replace:

    USE MPI__INCLUDE ,
#ifdef OLD_CRAY
    DAT_single  = MPI_real,
    DAT_2single = MPI_2real,