Hi,

I did a few things wrong before:

1. The component formerly called "pls" is now called "plm".
2. It seems the framework and component names are now separated by a
":" instead of a "-".

Anyway, specifying "--enable-mca-static=plm:tm" fixes the problem for
me: Open MPI is still built as shared libraries, but the Torque support
is compiled in statically.
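
For reference, this is roughly the configure line that works for me
now: it is just the original invocation from below with the corrected
MCA flag appended (paths are specific to my installation):

../configure \
    --prefix=/home_nfs/parma/x86_64/UNITE/packages/openmpi/1.3-intel10.1-64bit-dynamic-threads \
    CC=/opt/intel/cce/10.1.015/bin/icc \
    CXX=/opt/intel/cce/10.1.015/bin/icpc \
    CPP="/opt/intel/cce/10.1.015/bin/icc -E" \
    FC=/opt/intel/fce/10.1.015/bin/ifort \
    F90=/opt/intel/fce/10.1.015/bin/ifort \
    F77=/opt/intel/fce/10.1.015/bin/ifort \
    --enable-mpi-f90 --with-tm=/usr/pbs/ \
    --enable-mpi-threads=yes --enable-contrib-no-build=vt \
    --enable-mca-static=plm:tm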

Cheers,
Kiril

On Thu, 2009-01-29 at 12:37 -0700, Ralph Castain wrote:
> On a Torque system, your job is typically started on a backend node.  
> Thus, you need to have the Torque libraries installed on those nodes -  
> or else build OMPI static, as you found.
> 
> I have never tried --enable-mca-static, so I have no idea if this  
> works or what it actually does. If I want static, I just build the  
> entire tree that way.
> 
> If you want to run dynamic, though, you'll have to make the Torque  
> libs available on the backend nodes.
> 
> Ralph
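
(Side note, in case someone else hits the same thing: a quick way to
see which TM symbols the dynamic plm/tm plugin cannot resolve is to
inspect the plugin directly. This is just a sketch using my install
prefix; adjust the paths for your setup.

# list the undefined TM symbols in the plugin
nm -D /home_nfs/parma/x86_64/UNITE/packages/openmpi/1.3-intel10.1-64bit-dynamic-threads/lib/openmpi/mca_plm_tm.so | grep ' U tm_'

# or, on a backend node, let the dynamic linker report what is unresolved
ldd -r /home_nfs/parma/x86_64/UNITE/packages/openmpi/1.3-intel10.1-64bit-dynamic-threads/lib/openmpi/mca_plm_tm.so

Here this reports tm_init as undefined, which matches the mpirun error
below: the PBS libraries on this system are static-only, so there is
nothing for the runtime linker to resolve against on the backend
nodes.)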
> 
> 
> On Jan 29, 2009, at 8:32 AM, Kiril Dichev wrote:
> 
> > Hi,
> >
> > I am trying to run with Open MPI 1.3 on a cluster using PBS Pro:
> >
> > pbs_version = PBSPro_9.2.0.81361
> >
> >
> > However, after compiling with these options:
> >
> > ../configure \
> >   --prefix=/home_nfs/parma/x86_64/UNITE/packages/openmpi/1.3-intel10.1-64bit-dynamic-threads \
> >   CC=/opt/intel/cce/10.1.015/bin/icc \
> >   CXX=/opt/intel/cce/10.1.015/bin/icpc \
> >   CPP="/opt/intel/cce/10.1.015/bin/icc -E" \
> >   FC=/opt/intel/fce/10.1.015/bin/ifort \
> >   F90=/opt/intel/fce/10.1.015/bin/ifort \
> >   F77=/opt/intel/fce/10.1.015/bin/ifort \
> >   --enable-mpi-f90 --with-tm=/usr/pbs/ \
> >   --enable-mpi-threads=yes --enable-contrib-no-build=vt
> >
> > I get runtime errors when running on more than one reserved node,
> > even with /bin/hostname:
> >
> > /home_nfs/parma/x86_64/UNITE/packages/openmpi/1.3-intel10.1-64bit-dynamic-threads/bin/mpirun -np 5 /bin/hostname
> > /home_nfs/parma/x86_64/UNITE/packages/openmpi/1.3-intel10.1-64bit-dynamic-threads/bin/mpirun: symbol lookup error: /home_nfs/parma/x86_64/UNITE/packages/openmpi/1.3-intel10.1-64bit-dynamic-threads/lib/openmpi/mca_plm_tm.so: undefined symbol: tm_init
> >
> > When running on one node only, I don't get this error.
> >
> > Now, I see that I only have static PBS libraries, so I tried to compile
> > these components statically. I added the following to the above configure:
> > "--enable-mca-static=ras-tm,pls-tm"
> >
> > However, nothing changed; the same errors occur.
> >
> >
> > But if I compile Open MPI only with static libraries ("--enable-static
> > --disable-shared"), the MPI (or non-MPI) programs run OK.
> >
> > Can you help me here?
> >
> > Thanks,
> > Kiril
> >
> >
> >
> 
-- 
Dipl.-Inf. Kiril Dichev
Tel.: +49 711 685 60492
E-mail: dic...@hlrs.de
High Performance Computing Center Stuttgart (HLRS)
Universität Stuttgart
70550 Stuttgart
Germany

