Dear Thomas,
Thank you again.

A symlink in /usr/lib64 is not enough: I have also symlinked the libraries
in /home/stefano/opt/mpi/openmpi/1.6.4/intel/lib and, as expected, not only
libimf.so but also libirng.so and libintlc.so.5 are necessary.
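For completeness, the symlinks I created amount to something like the sketch
below. It runs in scratch directories so it can be tried safely; on the real
cluster INTEL_LIB would be /home/stefano/opt/intel/2013.4.183/lib/intel64 and
OMPI_LIB the Open MPI lib directory.

```shell
# Scratch directories stand in for the real Intel and Open MPI lib dirs.
INTEL_LIB=$(mktemp -d)
OMPI_LIB=$(mktemp -d)

# Fake the three Intel runtime libraries that orted needs.
touch "$INTEL_LIB/libimf.so" "$INTEL_LIB/libirng.so" "$INTEL_LIB/libintlc.so.5"

# Symlink each of them next to the Open MPI libraries.
for lib in libimf.so libirng.so libintlc.so.5; do
  ln -sf "$INTEL_LIB/$lib" "$OMPI_LIB/$lib"
done

ls -l "$OMPI_LIB"
```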

Now remote runs also work, but this is only a workaround: I still do not
understand why mpirun does not find the Intel libraries even though
LD_LIBRARY_PATH also contains /home/stefano/opt/intel/2013.4.183/lib/intel64.
Could you try to explain again?
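From what I have read in the Open MPI FAQ, a possible explanation (my guess,
not a confirmed answer) is that mpirun starts orted on the remote node through
a non-interactive ssh shell, so exports that only happen in the interactive
parts of .bashrc never reach orted itself, and -x only affects the MPI
processes launched afterwards. A small local sketch of the idea:

```shell
# An interactive shell has the Intel directory in LD_LIBRARY_PATH ...
export LD_LIBRARY_PATH=/home/stefano/opt/intel/2013.4.183/lib/intel64
echo "interactive shell sees: $LD_LIBRARY_PATH"

# ... but a process launched with a clean environment (as a non-interactive
# ssh shell can be) does not inherit it, so orted cannot resolve libimf.so.
env -i sh -c 'echo "non-interactive launch sees: ${LD_LIBRARY_PATH:-<empty>}"'
# prints: non-interactive launch sees: <empty>
```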

Thank you very much.

Stefano Zaghi
Ph.D. Aerospace Engineer,
Research Scientist, Dept. of Computational Hydrodynamics at
*CNR-INSEAN*<http://www.insean.cnr.it/en/content/cnr-insean>

The Italian Ship Model Basin
(+39) 06.50299297 (Office)
My codes:
*OFF* <https://github.com/szaghi/OFF>, Open source Finite volumes Fluid
dynamics code
*Lib_VTK_IO* <https://github.com/szaghi/Lib_VTK_IO>, a Fortran library to
write and read data conforming the VTK standard
*IR_Precision* <https://github.com/szaghi/IR_Precision>, a Fortran
(standard 2003) module to develop portable codes


2013/6/21 <thomas.fo...@ulstein.com>

> Your settings are as follows:
> export MPI=/home/stefano/opt/mpi/openmpi/1.6.4/intel
> export PATH=${MPI}/bin:$PATH
> export LD_LIBRARY_PATH=${MPI}/lib/openmpi:${MPI}/lib:$LD_LIBRARY_PATH
> export LD_RUN_PATH=${MPI}/lib/openmpi:${MPI}/lib:$LD_RUN_PATH
>
> and your path to libimf.so file is
> /home/stefano/opt/intel/2013.4.183/lib/libimf.so
>
> Your exported LD_LIBRARY_PATH, if I deduce it right, would contain the
> following, because you put ${MPI} first:
>
> /home/stefano/opt/mpi/openmpi/1.6.4/intel/lib/openmpi and
> /home/stefano/opt/mpi/openmpi/1.6.4/intel/lib
>
> As you can see, it does not look for the files in the right place.
>
> The simplest thing I would try is to symlink the libimf.so file into
> /usr/lib64; that should give you a workaround.
>
>
>
>
>
>
> From:        Stefano Zaghi <stefano.za...@gmail.com>
> To:        Open MPI Users <us...@open-mpi.org>,
> Date:        21.06.2013 09:45
> Subject:        Re: [OMPI users] OpenMPI 1.6.4 and Intel
> Composer_xe_2013.4.183: problem with remote runs, orted: error while
> loading shared libraries: libimf.so
> Sent by:        users-boun...@open-mpi.org
> ------------------------------
>
>
>
> Dear Thomas,
>
> Thank you very much for your very fast reply.
>
> Yes I have that library in the correct place:
>
> -rwxr-xr-x 1 stefano users 3.0M May 20 14:22
> opt/intel/2013.4.183/lib/intel64/libimf.so
>
> Stefano Zaghi
> Ph.D. Aerospace Engineer,
> Research Scientist, Dept. of Computational Hydrodynamics at 
> *CNR-INSEAN*<http://www.insean.cnr.it/en/content/cnr-insean>
>
> The Italian Ship Model Basin
> (+39) 06.50299297 (Office)
> My codes:
> *OFF* <https://github.com/szaghi/OFF>, Open source Finite volumes Fluid
> dynamics code
> *Lib_VTK_IO* <https://github.com/szaghi/Lib_VTK_IO>, a Fortran library to
> write and read data conforming the VTK standard
> *IR_Precision* <https://github.com/szaghi/IR_Precision>, a Fortran
> (standard 2003) module to develop portable codes
>
>
> 2013/6/21 <thomas.fo...@ulstein.com>
> hi Stefano
>
> Your error message shows that you are missing a shared library, not
> necessarily that the library path is wrong.
>
> Do you actually have libimf.so? Can you find the file on your system?
>
> ./Thomas
>
>
>
>
> From:        Stefano Zaghi <stefano.za...@gmail.com>
> To:        us...@open-mpi.org,
> Date:        21.06.2013 09:27
> Subject:        [OMPI users] OpenMPI 1.6.4 and Intel
> Composer_xe_2013.4.183: problem with remote runs, orted: error while
> loading shared libraries: libimf.so
> Sent by:        users-boun...@open-mpi.org
>  ------------------------------
>
>
>
>
> Dear All,
> I have compiled OpenMPI 1.6.4 with Intel Composer_xe_2013.4.183.
>
> My configure is:
>
> ./configure --prefix=/home/stefano/opt/mpi/openmpi/1.6.4/intel CC=icc
> CXX=icpc F77=ifort FC=ifort
>
> Intel Composer has been installed in:
>
> /home/stefano/opt/intel/2013.4.183/composer_xe_2013.4.183
>
> In the .bashrc and .profile on all nodes there is:
>
> source /home/stefano/opt/intel/2013.4.183/bin/compilervars.sh intel64
> export MPI=/home/stefano/opt/mpi/openmpi/1.6.4/intel
> export PATH=${MPI}/bin:$PATH
> export LD_LIBRARY_PATH=${MPI}/lib/openmpi:${MPI}/lib:$LD_LIBRARY_PATH
> export LD_RUN_PATH=${MPI}/lib/openmpi:${MPI}/lib:$LD_RUN_PATH
>
> If I run a parallel job within each single node (e.g. mpirun -np 8 myprog),
> all works well. However, when I try to run a parallel job on more nodes of
> the cluster (remote runs), like the following:
>
> mpirun -np 16 --bynode --machinefile nodi.txt -x LD_LIBRARY_PATH -x
> LD_RUN_PATH myprog
>
> I got the following error:
>
> /home/stefano/opt/mpi/openmpi/1.6.4/intel/bin/orted: error while loading
> shared libraries: libimf.so: cannot open shared object file: No such file
> or directory
>
> I have read many FAQs and online resources, all indicating a wrong
> LD_LIBRARY_PATH setting as the possible problem. However, I am not able to
> figure out what is going wrong: LD_LIBRARY_PATH seems to be set correctly
> on all nodes.
>
> It is worth noting that on the same cluster I have successfully installed
> OpenMPI 1.4.3 with Intel Composer_xe_2011_sp1.6.233 following exactly the
> same procedure.
>
> Thank you in advance for any suggestions,
> sincerely
>
> Stefano Zaghi
> Ph.D. Aerospace Engineer,
> Research Scientist, Dept. of Computational Hydrodynamics at 
> *CNR-INSEAN*<http://www.insean.cnr.it/en/content/cnr-insean>
>
> The Italian Ship Model Basin
> (+39) 06.50299297 (Office)
> My codes:
> *OFF* <https://github.com/szaghi/OFF>, Open source Finite volumes Fluid
> dynamics code
> *Lib_VTK_IO* <https://github.com/szaghi/Lib_VTK_IO>, a Fortran library to
> write and read data conforming the VTK standard
> *IR_Precision* <https://github.com/szaghi/IR_Precision>, a Fortran
> (standard 2003) module to develop portable codes
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
>
>
>
>
>
>
>
> This e-mail may contain confidential information, or otherwise
> be protected against unauthorised use. Any disclosure, distribution or
> other use of the information by anyone but the intended recipient is
> strictly prohibited.
> If you have received this e-mail in error, please advise the sender by
> immediate reply and destroy the received documents and any copies hereof.
>
>
> Before printing, think about the environment
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
