Dear All,
I have performed some tests and finally managed to run mpiexec successfully
without symlinks. As Thomas said, my error was in the LD_LIBRARY_PATH
setting. The correct setup is the following:

source /home/stefano/opt/intel/2013.4.183/bin/compilervars.sh intel64
export MPI=/home/stefano/opt/mpi/openmpi/1.6.4/intel
export PATH=${MPI}/bin:$PATH
export LD_LIBRARY_PATH=/home/stefano/opt/intel/2013.4.183/lib:${MPI}/lib/openmpi:${MPI}/lib:$LD_LIBRARY_PATH
export LD_RUN_PATH=${MPI}/lib/openmpi:${MPI}/lib:$LD_RUN_PATH
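To double-check a setup like this, the search order and the orted binary can be inspected directly. This is only a sketch, assuming the same install prefixes as above; adjust the paths for your machine:

```shell
# Assumed paths, mirroring the setup above.
MPI=/home/stefano/opt/mpi/openmpi/1.6.4/intel
INTEL_LIB=/home/stefano/opt/intel/2013.4.183/lib
# Prepend in the documented order: directories earlier in the list win.
LD_LIBRARY_PATH=${INTEL_LIB}:${MPI}/lib/openmpi:${MPI}/lib:${LD_LIBRARY_PATH:-}
export LD_LIBRARY_PATH
# Show the resulting search order, one directory per line.
echo "$LD_LIBRARY_PATH" | tr ':' '\n'
# If orted is present, report any still-unresolved dependencies.
if [ -x "$MPI/bin/orted" ]; then
  ldd "$MPI/bin/orted" | grep 'not found' || echo "all shared libraries resolved"
fi
```

Running the ldd check on each remote node (over ssh) shows immediately whether libimf.so resolves there too.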

With the above settings, mpiexec (orted) finds all its shared libraries,
also for runs on remote nodes. My previous setups were wrong because:

1) in the first test I had forgotten /home/stefano/opt/intel/2013.4.183/lib
in the LD_LIBRARY_PATH;
2) in the second test I had used
/home/stefano/opt/intel/2013.4.183/lib/intel64 in the LD_LIBRARY_PATH.

It seems that sourcing compilervars.sh does not set the correct
LD_LIBRARY_PATH.
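One way to see exactly what the script does (and does not) add is to source it and filter the library path for Intel entries. A minimal sketch, assuming the hypothetical install prefix used above:

```shell
# Hypothetical location of the Intel environment script, as above.
VARS=/home/stefano/opt/intel/2013.4.183/bin/compilervars.sh
[ -r "$VARS" ] && . "$VARS" intel64
# List the Intel-related entries that ended up on the library path;
# if .../2013.4.183/lib is absent, it must be added by hand.
echo "${LD_LIBRARY_PATH:-}" | tr ':' '\n' | grep '2013.4.183' \
  || echo "no /home/stefano/opt/intel/2013.4.183 entries found"
```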

Thank you for all the suggestions,
sincerely


Stefano Zaghi
Ph.D. Aerospace Engineer,
Research Scientist, Dept. of Computational Hydrodynamics at
*CNR-INSEAN*<http://www.insean.cnr.it/en/content/cnr-insean>

The Italian Ship Model Basin
(+39) 06.50299297 (Office)
My codes:
*OFF* <https://github.com/szaghi/OFF>, Open source Finite volumes Fluid
dynamics code
*Lib_VTK_IO* <https://github.com/szaghi/Lib_VTK_IO>, a Fortran library to
write and read data conforming the VTK standard
*IR_Precision* <https://github.com/szaghi/IR_Precision>, a Fortran
(standard 2003) module to develop portable codes


2013/6/21 Stefano Zaghi <stefano.za...@gmail.com>

> Dear All,
> I have compiled OpenMPI 1.6.4 with Intel Composer_xe_2013.4.183.
>
> My configure is:
>
> ./configure --prefix=/home/stefano/opt/mpi/openmpi/1.6.4/intel CC=icc
> CXX=icpc F77=ifort FC=ifort
>
> Intel Composer has been installed in:
>
> /home/stefano/opt/intel/2013.4.183/composer_xe_2013.4.183
>
> In the .bashrc and .profile on all nodes there is:
>
> source /home/stefano/opt/intel/2013.4.183/bin/compilervars.sh intel64
> export MPI=/home/stefano/opt/mpi/openmpi/1.6.4/intel
> export PATH=${MPI}/bin:$PATH
> export LD_LIBRARY_PATH=${MPI}/lib/openmpi:${MPI}/lib:$LD_LIBRARY_PATH
> export LD_RUN_PATH=${MPI}/lib/openmpi:${MPI}/lib:$LD_RUN_PATH
>
> If I run a parallel job on each single node (e.g. mpirun -np 8 myprog),
> everything works well. However, when I try to run a parallel job on more
> nodes of the cluster (remote runs), like the following:
>
> mpirun -np 16 --bynode --machinefile nodi.txt -x LD_LIBRARY_PATH -x
> LD_RUN_PATH myprog
>
> I got the following error:
>
> /home/stefano/opt/mpi/openmpi/1.6.4/intel/bin/orted: error while loading
> shared libraries: libimf.so: cannot open shared object file: No such file
> or directory
>
> I have read many FAQs and online resources, all indicating a wrong
> LD_LIBRARY_PATH setting as the likely problem. However, I am not able to
> figure out what is going wrong; the LD_LIBRARY_PATH seems to be set right
> on all nodes.
>
> It is worth noting that on the same cluster I have successfully installed
> OpenMPI 1.4.3 with Intel Composer_xe_2011_sp1.6.233 following exactly the
> same procedure.
>
> Thank you in advance for all suggestions,
> sincerely
>
> Stefano Zaghi
> Ph.D. Aerospace Engineer,
> Research Scientist, Dept. of Computational Hydrodynamics at 
> *CNR-INSEAN*<http://www.insean.cnr.it/en/content/cnr-insean>
>
> The Italian Ship Model Basin
> (+39) 06.50299297 (Office)
> My codes:
> *OFF* <https://github.com/szaghi/OFF>, Open source Finite volumes Fluid
> dynamics code
> *Lib_VTK_IO* <https://github.com/szaghi/Lib_VTK_IO>, a Fortran library to
> write and read data conforming the VTK standard
> *IR_Precision* <https://github.com/szaghi/IR_Precision>, a Fortran
> (standard 2003) module to develop portable codes
>
