I notice that you are using the "medium"-sized F90 bindings. Do these FAQ entries help?

http://www.open-mpi.org/faq/?category=mpi-apps#f90-mpi-slow-compiles
http://www.open-mpi.org/faq/?category=building#f90-bindings-slow-compile
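
If the size of the F90 module turns out to be the cause, the usual fix described in those FAQ entries is to rebuild Open MPI with a smaller Fortran 90 binding size. A minimal sketch, assuming the configure option available on the 1.2 series (check ./configure --help on your source tree for the exact name), and reusing the prefix from your ompi_info output:

    ./configure --prefix=/usr/local/openmpi-1.2 --with-mpi-f90-size=small
    make all install

The trade-off is that the smaller sizes generate explicit F90 interfaces for fewer MPI routines, which is what makes the mpi module so much cheaper to compile and link against; the routines themselves remain callable.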


On Mar 27, 2007, at 2:21 AM, de Almeida, Valmor F. wrote:


Hello,

I am using mpic++ to create a program that combines C++ and F90
libraries. The libraries are built with mpic++ and mpif90. Open MPI 1.2
was built using gcc-4.1.1 (the output of ompi_info follows below). The
final linking stage takes quite a long time compared to building the
libraries; I am wondering why, and whether there is a way to speed it
up.
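
For concreteness, the build looks roughly like this (the file and library names here are just placeholders for illustration):

    mpif90 -c solver_mod.f90              # compile the F90 pieces
    ar rcs libsolver.a solver_mod.o       # archive them into a static library
    mpic++ -c driver.cpp                  # compile the C++ pieces
    ar rcs libdriver.a driver.o
    mpic++ -o app main.cpp -L. -ldriver -lsolver -lgfortran   # final link (the slow step)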

Thanks for any input.

--
Valmor

->./ompi_info
                Open MPI: 1.2
   Open MPI SVN revision: r14027
                Open RTE: 1.2
   Open RTE SVN revision: r14027
                    OPAL: 1.2
       OPAL SVN revision: r14027
                  Prefix: /usr/local/openmpi-1.2
 Configured architecture: i686-pc-linux-gnu
           Configured by: root
           Configured on: Sun Mar 18 23:47:21 EDT 2007
          Configure host: xeon0
                Built by: root
                Built on: Sun Mar 18 23:57:41 EDT 2007
              Built host: xeon0
              C bindings: yes
            C++ bindings: yes
      Fortran77 bindings: yes (all)
      Fortran90 bindings: yes
 Fortran90 bindings size: medium
              C compiler: cc
     C compiler absolute: /usr/bin/cc
            C++ compiler: g++
   C++ compiler absolute: /usr/bin/g++
      Fortran77 compiler: gfortran
  Fortran77 compiler abs: /usr/i686-pc-linux-gnu/gcc-bin/4.1.1/gfortran
      Fortran90 compiler: gfortran
  Fortran90 compiler abs: /usr/i686-pc-linux-gnu/gcc-bin/4.1.1/gfortran
             C profiling: yes
           C++ profiling: yes
     Fortran77 profiling: yes
     Fortran90 profiling: yes
          C++ exceptions: no
          Thread support: posix (mpi: no, progress: no)
  Internal debug support: no
     MPI parameter check: always
Memory profiling support: no
Memory debugging support: no
         libltdl support: yes
   Heterogeneous support: yes
 mpirun default --prefix: no
           MCA backtrace: execinfo (MCA v1.0, API v1.0, Component v1.2)
              MCA memory: ptmalloc2 (MCA v1.0, API v1.0, Component v1.2)
           MCA paffinity: linux (MCA v1.0, API v1.0, Component v1.2)
           MCA maffinity: first_use (MCA v1.0, API v1.0, Component v1.2)
               MCA timer: linux (MCA v1.0, API v1.0, Component v1.2)
           MCA allocator: basic (MCA v1.0, API v1.0, Component v1.0)
           MCA allocator: bucket (MCA v1.0, API v1.0, Component v1.0)
                MCA coll: basic (MCA v1.0, API v1.0, Component v1.2)
                MCA coll: self (MCA v1.0, API v1.0, Component v1.2)
                MCA coll: sm (MCA v1.0, API v1.0, Component v1.2)
                MCA coll: tuned (MCA v1.0, API v1.0, Component v1.2)
                  MCA io: romio (MCA v1.0, API v1.0, Component v1.2)
               MCA mpool: sm (MCA v1.0, API v1.0, Component v1.2)
                 MCA pml: cm (MCA v1.0, API v1.0, Component v1.2)
                 MCA pml: ob1 (MCA v1.0, API v1.0, Component v1.2)
                 MCA bml: r2 (MCA v1.0, API v1.0, Component v1.2)
              MCA rcache: rb (MCA v1.0, API v1.0, Component v1.2)
              MCA rcache: vma (MCA v1.0, API v1.0, Component v1.2)
                 MCA btl: self (MCA v1.0, API v1.0.1, Component v1.2)
                 MCA btl: sm (MCA v1.0, API v1.0.1, Component v1.2)
                 MCA btl: tcp (MCA v1.0, API v1.0.1, Component v1.0)
                MCA topo: unity (MCA v1.0, API v1.0, Component v1.2)
                 MCA osc: pt2pt (MCA v1.0, API v1.0, Component v1.2)
              MCA errmgr: hnp (MCA v1.0, API v1.3, Component v1.2)
              MCA errmgr: orted (MCA v1.0, API v1.3, Component v1.2)
              MCA errmgr: proxy (MCA v1.0, API v1.3, Component v1.2)






--
Jeff Squyres
Cisco Systems
