On Wed, 2013-03-20 at 20:46 +0530, bedamani singh wrote:

Dear Bedamani,

> /state/partition1/home/Utpal/apps/intel/composer_xe_2013.2.146/composer_xe_2013.2.146/mpirt/bin/intel64/mpirun:
[...]
> FC=/usr/lib64/openmpi/bin/mpif90
[...]

This indicates that you are using different MPI implementations for
compilation (OpenMPI) and execution (Intel MPI, I suppose). You need to
pick one and use it consistently.

Which compiler does this mpif90 wrapper actually invoke?
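If it is OpenMPI's wrapper, it can tell you itself, for example (the
--showme option is OpenMPI-specific; MPICH-style wrappers use -show
instead):

  # print the full underlying compile line, resp. just the back-end compiler
  /usr/lib64/openmpi/bin/mpif90 --showme
  /usr/lib64/openmpi/bin/mpif90 --showme:command

  # also check that the mpirun found first in your PATH belongs to the
  # same OpenMPI installation, not to the Intel MPI runtime
  which mpirun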

> BLAS_LIBS=/usr/lib/libblas.a
> LAPACK_LIBS=/usr/lib/liblapack.a

This indicates that you are using the reference Netlib BLAS/LAPACK, and the following:

> BLACS_LIBS=/export/home/Utpal/apps/intel/composer_xe_2013.2.146/mkl/lib/intel64/libmkl_blacs_openmpi_ilp64.a
> SCALAPACK_LIBS=/export/home/Utpal/apps/intel/composer_xe_2013.2.146/mkl/lib/intel64/libmkl_scalapack_ilp64.a

indicates that you link against MKL's BLACS and ScaLAPACK. Leaving aside
the efficiency of this mix, to my knowledge these implementations are
not compatible with each other.

What I suggest is the following:
- use the system MPI and libraries; the system administrators most
likely made sure to optimize them for the existing architecture. I
guess that means OpenMPI in your case.
- MKL supports OpenMPI, so you need to choose the matching libraries.
The BLACS and ScaLAPACK entries in your arch.make look right in that
respect, but you should also use MKL's BLAS and LAPACK (see the sketch
after this list).
- make sure you are using the interface library matching your compiler:
libmkl_intel_lp64 (for ifort) or libmkl_gf_lp64 (for gfortran)
- make sure that you don't mix ilp64 (64-bit integers) with lp64
(32-bit integers)
- you already know that you should use the OpenMPI flavour of BLACS
rather than the Intel MPI one
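
Assuming mpif90 wraps gfortran, the relevant part of the arch.make
could look roughly like this (a sketch only: MKL below abbreviates the
lib/intel64 directory of your MKL installation, use libmkl_intel_lp64
instead of libmkl_gf_lp64 if the wrapper calls ifort, and the
sequential MKL is chosen here for simplicity):

  FC=/usr/lib64/openmpi/bin/mpif90
  MKL=/export/home/Utpal/apps/intel/composer_xe_2013.2.146/mkl/lib/intel64

  # LP64 everywhere, OpenMPI flavour of BLACS; BLAS and LAPACK are
  # provided by the MKL interface/core libraries in the group below
  BLAS_LIBS=
  LAPACK_LIBS=
  BLACS_LIBS=$(MKL)/libmkl_blacs_openmpi_lp64.a
  SCALAPACK_LIBS=$(MKL)/libmkl_scalapack_lp64.a \
    -Wl,--start-group \
    $(MKL)/libmkl_gf_lp64.a $(MKL)/libmkl_sequential.a $(MKL)/libmkl_core.a \
    -Wl,--end-group -lpthread -lm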

Success!
Bartek
