Hi Stefano

That doesn't seem to be an Open MPI problem,
but an Intel environment problem.

Here I have this library directory (on a slightly older version):

 .../composerxe/2013.3.163/compiler/lib/intel64/

but Intel keeps changing their directory structure,
shuffling bunches of soft links around, etc.
(which is quite annoying),
so your newer version may not have the same directories.

However, this should all be properly set if you run the
Intel "source compilervars.[sh,csh] intel64", as Jeff noted.
I read in your first email that you do that.
You don't need to (and shouldn't) change that by hand,
unless Intel really messed up their own environment settings
in their compilervars script.

Did you run "ldd your_executable" to see if it finds libimf.so?
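A minimal sketch of that check, using /bin/sh as a stand-in for the real
executable (substitute your program, and also orted, on each node):

```shell
# List the shared libraries the runtime loader resolves for a binary and
# print any it cannot find; /bin/sh is only a stand-in for your program.
ldd /bin/sh | awk '$2 == "=>" && $3 == "not" { print $1, "is missing" }'
# No output means every library was found.
```

A missing library shows up in ldd output as "libimf.so => not found",
which is exactly what the orted error above corresponds to.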

I hope this helps,
Gus Correa

On 06/21/2013 02:05 PM, Stefano Zaghi wrote:
Hi Gus,
thank you for your reply.

The strange path I have chosen is because this was only a test. However,
my home dir is shared on all nodes and the lib dir is not a simple
symlink. I think that Thomas is right: I have to remove intel64 from the
Intel lib path. I will try on Monday.

Thank you again.

On 21 Jun 2013 at 17:55, "Gus Correa" <g...@ldeo.columbia.edu> wrote:

    Hi Stefano

    Make sure your Intel compiler's shared libraries
    are accessible on all nodes.

    Is your /home directory shared across all nodes?
    How about /opt (if Intel is installed there)?

    By default Intel installs the compilers on /opt, which in typical
    clusters (and Linux distributions) is a local directory (to each node),
    not shared via NFS.
    Although you seem to have installed it somewhere else,
    /home/stefano/opt maybe, if /home/stefano/opt
    is just a soft link to /opt, not a real directory,
    that may not do the trick across the cluster network.

    This error:

     >> /home/stefano/opt/mpi/openmpi/1.6.4/intel/bin/orted: error
     >> while loading shared libraries: libimf.so: cannot open shared
     >> object file: No such file or directory

    suggests something like that is going on (libimf.so is an
    *Intel shared library*, it is *not an Open MPI library*).


    To have all needed tools (OpenMPI and Intel)
    available on all nodes, there are two typical solutions
    (by the way, see this FAQ:
    http://www.open-mpi.org/faq/?category=building#where-to-install):

    1) Install them on all nodes, via RPM, or configure/make/install, or
    other mechanism.
    This is time consuming and costly to maintain, but scales well
    in big or small clusters.

    2) Install them on your master/head/administration/storage node,
    and share them via network (typically via NFS export/mount).
    This is easy to maintain, and scales well in small/medium clusters,
    but not so much on big ones.

    Make sure the Intel and MPI directories are either shared by
    or present/installed on all nodes.

    I also wonder if you really need these many environment variables:

     >> LD_LIBRARY_PATH=${MPI}/lib/openmpi:${MPI}/lib:$LD_LIBRARY_PATH
     >> export LD_RUN_PATH=${MPI}/lib/openmpi:${MPI}/lib:$LD_RUN_PATH

    or if that may actually be replaced by the simpler form:

     >> LD_LIBRARY_PATH=${MPI}/lib:$LD_LIBRARY_PATH
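A quick sketch of that simpler form (the install prefix is taken from
earlier in the thread; adjust to your own). Open MPI's libraries sit
directly under $MPI/lib, so one entry is normally enough:

```shell
# Prepend only the Open MPI lib directory; the loader searches entries
# left to right, so this directory is consulted first.
MPI=/home/stefano/opt/mpi/openmpi/1.6.4/intel
export LD_LIBRARY_PATH=${MPI}/lib:$LD_LIBRARY_PATH
echo "$LD_LIBRARY_PATH"
```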

    I hope it helps,
    Gus Correa



    On 06/21/2013 04:35 AM, Stefano Zaghi wrote:

        Wow... I think you are right... I will check after the job I have
        just started finishes.

        Thank you again.

        See you soon

        Stefano Zaghi
        Ph.D. Aerospace Engineer,
        Research Scientist, Dept. of Computational Hydrodynamics at
        *CNR-INSEAN* <http://www.insean.cnr.it/en/content/cnr-insean>
        The Italian Ship Model Basin
        (+39) 06.50299297 (Office)
        My codes:
        *OFF* <https://github.com/szaghi/OFF>, open source finite
        volumes fluid dynamics code
        *Lib_VTK_IO* <https://github.com/szaghi/Lib_VTK_IO>, a Fortran
        library to write and read data conforming to the VTK standard
        *IR_Precision* <https://github.com/szaghi/IR_Precision>, a Fortran
        (standard 2003) module to develop portable codes


        2013/6/21 <thomas.fo...@ulstein.com>

             hi Stefano

             /home/stefano/opt/intel/2013.4.183/lib/intel64/ is also the wrong
             path, as the file is in ..183/lib/ and not ...183/lib/intel64/

             is that why?
             ./Thomas


             On 21 June 2013 at 10:26, "Stefano Zaghi"
        <stefano.za...@gmail.com> wrote:

                 Dear Thomas,
                 thank you again.

                 A symlink in /usr/lib64 is not enough; I have also
            symlinked them in /home/stefano/opt/mpi/openmpi/1.6.4/intel/lib
            and, as expected, not only libimf.so but also libirng.so and
            libintlc.so.5 are necessary.

                 Now remote runs work too, but this is only a workaround.
            I still do not understand why mpirun does not find the Intel
            libraries even though LD_LIBRARY_PATH also contains
            /home/stefano/opt/intel/2013.4.183/lib/intel64. Can you try
            to explain again?

                 Thank you very much.



                 2013/6/21 <thomas.fo...@ulstein.com>

                     your settings are as follows:
                     export MPI=/home/stefano/opt/mpi/openmpi/1.6.4/intel
                     export PATH=${MPI}/bin:$PATH
                     export LD_LIBRARY_PATH=${MPI}/lib/openmpi:${MPI}/lib:$LD_LIBRARY_PATH
                     export LD_RUN_PATH=${MPI}/lib/openmpi:${MPI}/lib:$LD_RUN_PATH

                     and your path to the libimf.so file is
                     /home/stefano/opt/intel/2013.4.183/lib/libimf.so

                     your exported LD_LIBRARY_PATH, if I deduce it
                     right, would make the loader search

                     /home/stefano/opt/mpi/openmpi/1.6.4/intel/lib/openmpi and
                     /home/stefano/opt/mpi/openmpi/1.6.4/intel/lib

                     first, because you put $MPI first.

                     as you can see, it doesn't look for the file in the
                     right place.

                     the simplest thing I would try is to symlink the
                     libimf.so file into /usr/lib64; that should give
                     you a workaround.
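A sketch of that workaround, rehearsed in a scratch directory so it can be
run without root (the real command would point /usr/lib64/libimf.so at the
Intel copy; all paths below are illustrative stand-ins):

```shell
# Create a stand-in for the Intel library tree and link to it, mimicking
# "ln -s <intel lib dir>/libimf.so /usr/lib64/libimf.so".
mkdir -p /tmp/intel_demo/lib
touch /tmp/intel_demo/lib/libimf.so         # placeholder, not the real library
ln -sf /tmp/intel_demo/lib/libimf.so /tmp/intel_demo/libimf.so
ls -l /tmp/intel_demo/libimf.so             # shows the symlink and its target
```

Note the per-library cost Stefano ran into below: the other Intel runtime
libraries (libintlc.so.5, etc.) need the same treatment, which is why this
stays a workaround rather than a fix.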






                     From: Stefano Zaghi <stefano.za...@gmail.com>
                     To: Open MPI Users <us...@open-mpi.org>
                     Date: 21.06.2013 09:45
                     Subject: Re: [OMPI users] OpenMPI 1.6.4 and Intel
                     Composer_xe_2013.4.183: problem with remote runs,
                     orted: error while loading shared libraries: libimf.so
                     Sent by: users-boun...@open-mpi.org

------------------------------------------------------------------------



                     Dear Thomas,

                     thank you very much for your very fast reply.

                     Yes I have that library in the correct place:

                     -rwxr-xr-x 1 stefano users 3.0M May 20 14:22
                     opt/intel/2013.4.183/lib/intel64/libimf.so



                     2013/6/21 <thomas.forde@ulstein.com>
                     hi Stefano

                     your error message shows that you are missing a
                     shared library, not necessarily that the library
                     path is wrong.

                     do you actually have libimf.so? can you find the
                     file on your system?

                     ./Thomas
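One way to answer that question is a find over the install prefix. The
sketch below rehearses it on a throwaway tree standing in for the Intel
prefix; in real use the search would be, e.g.,
"find /home/stefano/opt/intel -name 'libimf.so'" (prefix from the thread):

```shell
# Build a miniature stand-in for the Intel install tree, then search it.
mkdir -p /tmp/intel_prefix/2013.4.183/lib/intel64
touch /tmp/intel_prefix/2013.4.183/lib/intel64/libimf.so
find /tmp/intel_prefix -name 'libimf.so' 2>/dev/null
# Prints the full path of each match; no output means the file is absent.
```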




                     From: Stefano Zaghi <stefano.zaghi@gmail.com>
                     To: users@open-mpi.org
                     Date: 21.06.2013 09:27
                     Subject: [OMPI users] OpenMPI 1.6.4 and Intel
                     Composer_xe_2013.4.183: problem with remote runs,
                     orted: error while loading shared libraries: libimf.so
                     Sent by: users-bounces@open-mpi.org

------------------------------------------------------------------------




                     Dear All,
                     I have compiled OpenMPI 1.6.4 with Intel
            Composer_xe_2013.4.183.

                     My configure is:

                     ./configure
                     --prefix=/home/stefano/opt/mpi/openmpi/1.6.4/intel
                     CC=icc CXX=icpc F77=ifort FC=ifort

                     Intel Composer has been installed in:

                     /home/stefano/opt/intel/2013.4.183/composer_xe_2013.4.183

                     In the .bashrc and .profile on all nodes there is:

                     source /home/stefano/opt/intel/2013.4.183/bin/compilervars.sh intel64
                     export MPI=/home/stefano/opt/mpi/openmpi/1.6.4/intel
                     export PATH=${MPI}/bin:$PATH
                     export LD_LIBRARY_PATH=${MPI}/lib/openmpi:${MPI}/lib:$LD_LIBRARY_PATH
                     export LD_RUN_PATH=${MPI}/lib/openmpi:${MPI}/lib:$LD_RUN_PATH

                     If I run a parallel job within a single node (e.g.
                     mpirun -np 8 myprog), all works well. However, when
                     I try to run a parallel job on more nodes of the
                     cluster (remote runs), like the following:

                     mpirun -np 16 --bynode --machinefile nodi.txt -x
                     LD_LIBRARY_PATH -x LD_RUN_PATH myprog

                     I got the following error:


             /home/stefano/opt/mpi/openmpi/1.6.4/intel/bin/orted: error
                     while loading shared libraries: libimf.so: cannot
                     open shared object file: No such file or directory

                     I have read many FAQs and online resources, all
                     indicating LD_LIBRARY_PATH as the possible problem
                     (wrong setting). However, I am not able to figure
                     out what is going wrong; LD_LIBRARY_PATH seems to
                     be set right on all nodes.
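One detail worth checking here (an editorial aside, not from the thread):
mpirun starts orted on the remote nodes from a fresh, non-interactive
shell, which sees only what the remote startup files export, not the
environment of your interactive session. A bare shell can be simulated
locally with an emptied environment:

```shell
# env -i wipes the inherited environment, so the child shell sees only
# what its own startup files would set (here: nothing at all).
env -i bash --noprofile --norc -c 'echo "LD_LIBRARY_PATH=[$LD_LIBRARY_PATH]"'
# Prints LD_LIBRARY_PATH=[] -- nothing is inherited from the caller,
# which is why the remote .bashrc (or mpirun's -x flag) must supply
# the Intel library paths.
```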

                     It is worth noting that on the same cluster I have
                     successfully installed OpenMPI 1.4.3 with Intel
                     Composer_xe_2011_sp1.6.233 following exactly the
                     same procedure.

                     Thank you in advance for any suggestions,
                     sincerely








                     This e-mail may contain confidential information,
                     or otherwise be protected against unauthorised use.
                     Any disclosure, distribution or other use of the
                     information by anyone but the intended recipient is
                     strictly prohibited. If you have received this
                     e-mail in error, please advise the sender by
                     immediate reply and destroy the received documents
                     and any copies hereof.

                     Before printing, think about the environment





























_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users
