[OMPI users] OpenMPI 1.2.1: "configure --enable-static": then make ends with error
Hi all,

I use gcc-4.1.3 (gcc/g++/gfortran) with openmpi-1.2.1 on an Alpha system running Linux CentOS 4.4. The "--enable-static" configure option causes make to fail with an error; without this option, make completes without problems. Note that I need the "-mfp-trap-mode=su" compiler flag on this Alpha system to avoid a runtime SIGSEGV abort.

This is what I have done:

$ ./configure CPP=/opt/gcc/bin/cpp \
    CC=/opt/gcc/bin/gcc CFLAGS=-mfp-trap-mode=su \
    CXX=/opt/gcc/bin/g++ CXXFLAGS=-mfp-trap-mode=su \
    F77=/opt/gcc/bin/gfortran FFLAGS=-mfp-trap-mode=su \
    FC=/opt/gcc/bin/gfortran FCFLAGS=-mfp-trap-mode=su \
    --with-wrapper-cflags=-mfp-trap-mode=su \
    --with-wrapper-cxxflags=-mfp-trap-mode=su \
    --with-wrapper-fflags=-mfp-trap-mode=su \
    --with-wrapper-fcflags=mfp-trap-mode=su \
    --enable-static --prefix=/opt/openmpi
[ ...snip... ]

$ make all
[ ...snip... ]
if /opt/gcc/bin/g++ -DHAVE_CONFIG_H -I. -I. -I../../../opal/include -I../../../orte/include -I../../../ompi/include -I../../../ompi/include -DOMPI_CONFIGURE_USER="\"rob\"" -DOMPI_CONFIGURE_HOST="\"alpha\"" -DOMPI_CONFIGURE_DATE="\"Tue May 1 21:48:43 KST 2007\"" -DOMPI_BUILD_USER="\"$USER\"" -DOMPI_BUILD_HOST="\"`hostname`\"" -DOMPI_BUILD_DATE="\"`date`\"" -DOMPI_BUILD_CFLAGS="\"-O3 -DNDEBUG -mfp-trap-mode=su -finline-functions -fno-strict-aliasing -pthread\"" -DOMPI_BUILD_CPPFLAGS="\"-I../../.. \"" -DOMPI_BUILD_CXXFLAGS="\"-O3 -DNDEBUG -mfp-trap-mode=su -finline-functions -pthread\"" -DOMPI_BUILD_CXXCPPFLAGS="\"-I../../.. \"" -DOMPI_BUILD_FFLAGS="\"-mfp-trap-mode=su\"" -DOMPI_BUILD_FCFLAGS="\"-mfp-trap-mode=su\"" -DOMPI_BUILD_LDFLAGS="\"-export-dynamic \"" -DOMPI_BUILD_LIBS="\"-lnsl -lutil -lm \"" -DOMPI_CC_ABSOLUTE="\"/opt/gcc/bin/gcc\"" -DOMPI_CXX_ABSOLUTE="\"/opt/gcc/bin/g++\"" -DOMPI_F77_ABSOLUTE="\"/opt/gcc/bin/gfortran\"" -DOMPI_F90_ABSOLUTE="\"/opt/gcc/bin/gfortran\"" -DOMPI_F90_BUILD_SIZE="\"small\"" -I../../.. -O3 -DNDEBUG -mfp-trap-mode=su -finline-functions -pthread -MT version.o -MD -MP -MF "$depbase.Tpo" -c -o version.o version.cc; \
then mv -f "$depbase.Tpo" "$depbase.Po"; else rm -f "$depbase.Tpo"; exit 1; fi
/bin/sh ../../../libtool --tag=CXX --mode=link /opt/gcc/bin/g++ -O3 -DNDEBUG -mfp-trap-mode=su -finline-functions -pthread -export-dynamic -o ompi_info components.o ompi_info.o output.o param.o version.o ../../../ompi/libmpi.la -lnsl -lutil -lm
libtool: link: /opt/gcc/bin/g++ -O3 -DNDEBUG -mfp-trap-mode=su -finline-functions -pthread -o .libs/ompi_info components.o ompi_info.o output.o param.o version.o -Wl,--export-dynamic ../../../ompi/.libs/libmpi.so -libverbs -lrt /home/lahaye/Software/openmpi-1.2.1/orte/.libs/libopen-rte.so -pthread /home/lahaye/Software/openmpi-1.2.1/opal/.libs/libopen-pal.so -ldl -lnsl -lutil -lm -Wl,-rpath -Wl,/opt/openmpi/lib
../../../ompi/.libs/libmpi.so: undefined reference to `opal_sys_timer_get_cycles'
collect2: ld returned 1 exit status
make[2]: *** [ompi_info] Error 1
make[2]: Leaving directory `/home/lahaye/Software/openmpi-1.2.1/ompi/tools/ompi_info'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/home/lahaye/Software/openmpi-1.2.1/ompi'
make: *** [all-recursive] Error 1

Any idea why this goes wrong? Once again, when I remove "--enable-static" from the configure line, everything builds fine, but I need the static libraries for compiling BLACS/ScaLAPACK.

Thanks,
Rob.
Re: [OMPI users] OpenMPI 1.2.1: "configure --enable-static": then make ends with error
Brian --

Is this due to missing assembly functionality for Alpha platforms in opal? (I'm not sure why it would work with dynamic if that were the case, though...)

On May 1, 2007, at 6:41 AM, Rob wrote:

> [ ...quoted text snipped; see the original message above... ]
--
Jeff Squyres
Cisco Systems
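For context, opal_sys_timer_get_cycles() is OPAL's low-level CPU cycle-counter read, normally supplied as per-architecture inline assembly. The fragment below is only an illustrative sketch, not the OPAL source: a hypothetical cycle-counter read for Alpha using the architecture's rpcc instruction (the function name alpha_get_cycles is made up for this example).

#include <stdint.h>

/* Illustrative sketch only -- not the OPAL implementation.  Alpha's rpcc
 * instruction returns a 64-bit value whose low 32 bits hold the
 * free-running process cycle counter. */
static inline uint64_t alpha_get_cycles(void)
{
    uint64_t cc;
    __asm__ __volatile__("rpcc %0" : "=r"(cc));
    return cc & 0xffffffffULL;   /* keep only the cycle-count half */
}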
Re: [OMPI users] MPI_Comm_Accept / MPI::Comm::Accept problem.
For the moment, a possible workaround might be to use plain TCP sockets (i.e., outside of MPI) to make the initial connection. That way, you can just have your server blocking in accept(). After the TCP connection is made, use MPI_COMM_JOIN to create a communicator and then proceed with normal MPI communications after that.

On Apr 28, 2007, at 1:07 PM, Nuno Sucena Almeida wrote:

Hi Jeff,

thanks for taking the time to answer this. I actually reached that conclusion after trying a simple MPI::Barrier() with both OpenMPI and LAM/MPI, where both had the same active-wait kind of behaviour. What I'm trying to achieve is to have some kind of calculation server, where the clients can connect through MPI::Intercomm to the server process with rank 0 and transfer data so that it can perform computation, but it seems wasteful to have a server group of processes running at 100% while waiting for the clients. It would be nice to be able to specify the behaviour in this case, or do you suggest another approach?

Cheers,
Nuno

On Fri, Apr 27, 2007 at 07:49:04PM -0400, Jeff Squyres wrote:
| This is actually expected behavior. We make the assumption that MPI
| processes are meant to exhibit as low latency as possible, and
| therefore use active polling for most message passing.

--
Jeff Squyres
Cisco Systems
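To make the suggested workaround concrete, here is a minimal server-side sketch, not a complete program: the server blocks in a plain TCP accept() outside of MPI and then promotes the connected socket to an MPI intercommunicator with MPI_Comm_join(). The port number 5000 and the omission of error handling are arbitrary choices for the example; the client would connect() to the same port and call MPI_Comm_join() on its end of the socket.

#include <mpi.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Ordinary TCP server setup, entirely outside of MPI. */
    int listenfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(5000);      /* example port, pick your own */
    bind(listenfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listenfd, 1);

    /* The server sleeps in the kernel here instead of spinning inside MPI. */
    int connfd = accept(listenfd, NULL, NULL);

    /* Turn the connected socket into an MPI intercommunicator; the client
     * must call MPI_Comm_join() on its end of the same connection. */
    MPI_Comm intercomm;
    MPI_Comm_join(connfd, &intercomm);

    /* ...normal MPI communication with the client goes here... */

    MPI_Comm_disconnect(&intercomm);
    close(connfd);
    close(listenfd);
    MPI_Finalize();
    return 0;
}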
[OMPI users] Torque and OpenMPI 1.2.1 problems
We have built OpenMPI 1.2.1 with support for Torque 2.1.8 and its Task Manager interface. We use the PGI 6.2-4 compiler and the --with-tm option as described in http://www.open-mpi.org/faq/?category=building#build-rte-tm for building an OpenMPI RPM on a Pentium-4 machine running CentOS 4.4 (RHEL4U4 clone). The TM interface seems to be available as it should:

# ompi_info | grep tm
    MCA memory: ptmalloc2 (MCA v1.0, API v1.0, Component v1.2.1)
       MCA ras: tm (MCA v1.0, API v1.3, Component v1.2.1)
       MCA pls: tm (MCA v1.0, API v1.3, Component v1.2.1)

When we submit a Torque batch job running the example code in openmpi-1.2.1/examples/hello_c.c we get this error message:

/usr/local/openmpi-1.2.1-pgi/bin/mpirun -np 2 -machinefile $PBS_NODEFILE hello_c
[u126.dcsc.fysik.dtu.dk:11981] pls:tm: failed to poll for a spawned proc, return status = 17002
[u126.dcsc.fysik.dtu.dk:11981] [0,0,0] ORTE_ERROR_LOG: In errno in file rmgr_urm.c at line 462
[u126.dcsc.fysik.dtu.dk:11981] mpirun: spawn failed with errno=-11

When we run the same code in an interactive (non-Torque) shell the hello_c code works correctly:

# /usr/local/openmpi-1.2.1-pgi/bin/mpirun -np 2 -machinefile hostfile hello_c
Hello, world, I am 0 of 2
Hello, world, I am 1 of 2

To prove that the Torque TM interface is working correctly, we also make this test within the Torque batch job using the Torque pbsdsh command:

pbsdsh hostname
u126.dcsc.fysik.dtu.dk
u113.dcsc.fysik.dtu.dk

So obviously something is broken between Torque 2.1.8 and OpenMPI 1.2.1 with respect to the TM interface, whereas either one alone seems to work correctly. Can anyone suggest a solution to this problem?

I wonder if this problem may be related to this list thread:
http://www.open-mpi.org/community/lists/users/2007/04/3028.php

Details of configuration:
-------------------------

We use the buildrpm.sh script from http://www.open-mpi.org/software/ompi/v1.2/srpm.php and change the following options in the script:

prefix="/usr/local/openmpi-1.2.1-pgi"

configure_options="--with-tm=/usr/local FC=pgf90 F77=pgf90 CC=pgcc CXX=pgCC CFLAGS=-Msignextend CXXFLAGS=-Msignextend --with-wrapper-cflags=-Msignextend --with-wrapper-cxxflags=-Msignextend FFLAGS=-Msignextend FCFLAGS=-Msignextend --with-wrapper-fflags=-Msignextend --with-wrapper-fcflags=-Msignextend"
rpmbuild_options=${rpmbuild_options}" --define 'install_in_opt 0' --define 'install_shell_scripts 1' --define 'install_modulefile 0'"
rpmbuild_options=${rpmbuild_options}" --define '_prefix ${prefix}'"

build_single=yes

--
Ole Holm Nielsen
Department of Physics, Technical University of Denmark
Re: [OMPI users] OpenMPI 1.2.1: "configure --enable-static": then make ends with error
That is odd... Alpha Linux isn't one of our supported platforms, so it doesn't get tested before release unless a user happens to try it. Can you send the information requested here:

    http://www.open-mpi.org/community/help/

That should help us figure out what happened.

Thanks,

Brian

On May 1, 2007, at 7:41 AM, Rob wrote:

> [ ...quoted text snipped; see the original message above... ]
Re: [OMPI users] OpenMPI 1.2.1: "configure --enable-static": then make ends with error
Brian Barrett wrote on 2007-05-01 10:58:45:

> Can you send the information requested here:
> http://www.open-mpi.org/community/help/
> That should help us figure out what happened.

Thank you for the quick response. The output files together are over 100 kB, so I can't send them as an attachment to this mailing list. Please take them from here:

http://www.lahaye.dds.nl/openmpi/

I have two Alpha systems, each with four 1 GHz processors, 10 GB RAM and SCSI disks; yes, I know Alpha is prehistoric hardware. However, with a working version of MPI, I could still use these machines at a fairly reasonable speed. So if the patch is small, I'd like to fix the problem with the 1.2.1 release of openmpi on my system. I'm also happy to help test patches so that openmpi will also work on Alpha systems. But keep in mind that I'm not an MPI expert.

Thanks!
Rob.
Re: [OMPI users] Torque and OpenMPI 1.2.1 problems
The most likely problem is that you have a path or library issue regarding the location of the OpenMPI/OpenRTE executables when running batch versus interactive. We see this sometimes when the shell startups differ in those two modes. You might try just running a batch vs. interactive printenv to see if differences exist.

As far as I know, there are no compatibility issues with Torque at this time.

Ralph

On 5/1/07 8:54 AM, "Ole Holm Nielsen" wrote:

> [ ...quoted text snipped; see the original message above... ]
Re: [OMPI users] Torque and OpenMPI 1.2.1 problems
Thanks for the suggestion. I inserted a printenv command and the path and library variables seem to be correct for our OpenMPI installation:

LD_LIBRARY_PATH=/usr/local/openmpi-1.2.1-pgi/lib:/opt/intel/compiler90/lib
MPIHOME=/usr/local/openmpi-1.2.1-pgi
PATH=/usr/local/openmpi-1.2.1-pgi/bin:/usr/local/openmpi-1.2.1-pgi/bin:/usr/kerberos/bin:/bin:/usr/bin:/usr/local/lam-7.1.2-pgi/bin:/opt/intel/compiler90/bin:/usr/local/bin:/usr/X11R6/bin

Does OpenMPI have any issues when installed in non-default directories such as /usr/local/openmpi-1.2.1-pgi?

Ralph Castain wrote:
> The most likely problem is that you have a path or library issue regarding
> the location of the OpenMPI/OpenRTE executables when running batch versus
> interactive. We see this sometimes when the shell startups differ in those
> two modes. You might try just running a batch vs interactive printenv to
> see if differences exist. As far as I know, there are no compatibility
> issues with Torque at this time.

Thanks,
Ole
[OMPI users] Alpha system & OpenMPI 1.2.1 does not work...
Hi,

A few emails back I reported that I could build openmpi on an Alpha system (except for the static libraries). However, it seems that the built result is unusable. With every simple program (even non-MPI) I compile, I get:

$ mpicc myprog.c --showme:version
mpicc: Open MPI 1.2.1 (Language: C)

$ mpicc myprog.c
gcc: dummy: No such file or directory
gcc: ranlib: No such file or directory

$ mpicc myprog.c --showme
/opt/gcc/bin/gcc -I/opt/openmpi/include/openmpi -I/opt/openmpi/include -pthread -mfp-trap-mode=su myprog.c -L/opt/openmpi/lib -lmpi -lopen-rte -lopen-pal -ldl dummy ranlib

(Note: the "-mfp-trap-mode=su" prevents a runtime SIGSEGV crash with the GNU compiler on an Alpha system.)

$ mpicc myprog.c --showme:link
-pthread -mfp-trap-mode=su myprog.c -L/opt/openmpi/lib -lmpi -lopen-rte -lopen-pal -ldl dummy ranlib

What are the "dummy" and "ranlib" doing here?

I'm now trying the nightly build from SVN (version 1.3a1r14551), but I'm afraid that Alpha support is still not there. If that's the case, is there any chance to fix openmpi for Alpha?

My OS is CentOS 4.4 (the equivalent of RedHat Enterprise Edition 4). Hence, my packages are not so up-to-date versions:

autoconf-2.59-5
automake15-1.5-13
automake-1.9.2-3
automake14-1.4p6-12
automake17-1.7.9-5
automake16-1.6.3-5
libtool-1.5.6-4.EL4.1.c4.2
libtool-libs-1.5.6-4.EL4.1.c4.2
flex-2.5.4a-33

(What else is essential to build OpenMPI?)

Any ideas what to do?

Thanks,
Rob.
Re: [OMPI users] Alpha system & OpenMPI 1.2.1 does not work...
Rob wrote at 2007-05-01 21:11:26:

> I'm now trying the nightly build from SVN
> (version 1.3a1r14551), but I'm afraid that Alpha
> support is still not there. If that's the case,
> is there any chance to fix openmpi for Alpha?

Indeed, this fails with the same error as the compilation of 1.2.1 with "--enable-static". Output files of this 1.3/SVN build are at http://www.lahaye.dds.nl/openmpi/

> My OS is CentOS 4.4
> (the equivalent of RedHat Enterprise Edition 4).
> Hence, my packages are not so up-to-date versions:
>
> autoconf-2.59-5
> automake-1.9.2-3
> libtool-1.5.6-4.EL4.1.c4.2
> libtool-libs-1.5.6-4.EL4.1.c4.2
> flex-2.5.4a-33
> (what else is essential to build OpenMpi?)

The version numbers of these packages should be OK: I also have CentOS 4.4 on an HP Intel/Itanium workstation and on an Intel Pentium 4, all with the same package versions, and OpenMPI 1.2.1 configures/compiles/works very well there!

By the way, I don't think the above packages are required for building OpenMPI from the 1.2.1 source tarball, or are they?

Regards,
Rob.