Re: [OMPI users] PathScale problems persist
I am using GCC 4.x:

$ pathCC -v
PathScale(TM) Compiler Suite: Version 3.2
Built on: 2008-06-16 16:41:38 -0700
Thread model: posix
GNU gcc version 4.2.0 (PathScale 3.2 driver)

$ pathCC -show-defaults
Optimization level and compilation target:
  -O2 -mcpu=opteron -m64 -msse -msse2 -mno-sse3 -mno-3dnow -mno-sse4a -gnu4

And I also tried with mpiCC -gnu4 to be totally sure. It's rather weird
that I get this error and Ake does not...

By the way, I configured Open MPI with PathScale with the following line:

./configure --with-openib=/usr --with-openib-libdir=/usr/lib64 --with-sge --enable-static CC=pathcc CXX=pathCC F77=pathf90 F90=pathf90 FC=pathf90

And with GCC:

./configure --with-openib=/usr --with-openib-libdir=/usr/lib64 --with-sge --enable-static

It's not an InfiniBand or SGE issue: I also tried with all processes
running on the same node and without SGE.

Best regards,

Rafa

On Wed, 2010-09-22 at 14:54 +0200, Ake Sandgren wrote:
> On Wed, 2010-09-22 at 14:16 +0200, Ake Sandgren wrote:
> > On Wed, 2010-09-22 at 07:42 -0400, Jeff Squyres wrote:
> > > This is a problem with the Pathscale compiler and old versions of
> > > GCC. See:
> > >
> > > http://www.open-mpi.org/faq/?category=building#pathscale-broken-with-mpi-c%2B%2B-api
> > >
> > > I note that you said you're already using GCC 4.x, but it's not
> > > clear from your text whether pathscale is using that compiler or a
> > > different GCC on the back-end. If you can confirm that pathscale *is*
> > > using GCC 4.x on the back-end, then this is worth reporting to the
> > > pathscale support people.
> >
> > I have no problem running the code below compiled with openmpi 1.4.2
> > and pathscale 3.2.
>
> And I should of course have specified that this is with a GCC 4.x
> backend.

--
Rafael Arco Arredondo
Centro de Servicios de Informática y Redes de Comunicaciones
Campus de Fuentenueva - Edificio Mecenas
Universidad de Granada
E-18071 Granada Spain
Tel: +34 958 241010 Ext: 31114
E-mail: rafaa...@ugr.es
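A quick way to double-check which GCC compatibility level the PathScale
front-end is actually targeting is to print the GNU version macros. This is
only a sketch and assumes pathcc defines the usual __GNUC__/__GNUC_MINOR__
macros, as GCC-compatible compilers normally do:

/* gnucheck.c - print the GNU compatibility level the compiler reports.
 * Compile with: pathcc gnucheck.c -o gnucheck */
#include <stdio.h>

int main(void)
{
#if defined(__GNUC__) && defined(__GNUC_MINOR__)
    /* GCC-compatible front-ends set these to the GCC version they emulate. */
    printf("GNU compatibility level: %d.%d\n", __GNUC__, __GNUC_MINOR__);
#else
    printf("__GNUC__ is not defined by this compiler.\n");
#endif
    return 0;
}

With -gnu4 in effect this should report 4.2, matching the "GNU gcc version
4.2.0" line shown by pathCC -v above.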
[OMPI users] PathScale problems persist
Hello,

In January, I reported a problem with Open MPI 1.4.1 and PathScale 3.2
about a simple Hello World that hung on initialization
(http://www.open-mpi.org/community/lists/users/2010/01/11863.php).
Open MPI 1.4.2 does not show this problem.

However, now we are having trouble with 1.4.2, PathScale 3.2, and the C++
bindings. The following code:

#include <mpi.h>
#include <iostream>

int main(int argc, char* argv[])
{
  MPI::Init(argc, argv);
  MPI::COMM_WORLD.Set_errhandler(MPI::ERRORS_THROW_EXCEPTIONS);

  try {
    int rank = MPI::COMM_WORLD.Get_rank();
    int size = MPI::COMM_WORLD.Get_size();
    std::cout << "Hello world from process " << rank << " out of "
              << size << "!" << std::endl;
  }
  catch (MPI::Exception e) {
    std::cerr << "MPI Error: " << e.Get_error_code() << " - "
              << e.Get_error_string() << std::endl;
  }

  MPI::Finalize();
  return 0;
}

generates the following output:

[host1:29934] *** An error occurred in MPI_Comm_set_errhandler
[host1:29934] *** on communicator MPI_COMM_WORLD
[host1:29934] *** MPI_ERR_COMM: invalid communicator
[host1:29934] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
--------------------------------------------------------------------------
mpirun has exited due to process rank 2 with PID 29934 on node host1
exiting without calling "finalize". This may have caused other processes
in the application to be terminated by signals sent by mpirun (as
reported here).
--------------------------------------------------------------------------
[host1:29931] 3 more processes have sent help message help-mpi-errors.txt / mpi_errors_are_fatal
[host1:29931] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages

There are no problems when Open MPI 1.4.2 is built with GCC (GCC 4.1.2).
No problems are found with Open MPI 1.2.6 and PathScale either.

Best regards,

Rafa

--
Rafael Arco Arredondo
Centro de Servicios de Informática y Redes de Comunicaciones
Campus de Fuentenueva - Edificio Mecenas
Universidad de Granada
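For comparison, here is a minimal sketch of the same test through the C
bindings; MPI_ERRORS_RETURN stands in for MPI::ERRORS_THROW_EXCEPTIONS,
which has no direct C counterpart. If this version runs cleanly under the
PathScale build, the failure would appear to be confined to the C++
bindings:

/* errhandler_c.c - same check through the C bindings. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, rc;

    MPI_Init(&argc, &argv);

    /* C-level equivalent of MPI::COMM_WORLD.Set_errhandler(...):
     * make MPI_COMM_WORLD return error codes instead of aborting. */
    rc = MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
    if (rc != MPI_SUCCESS) {
        fprintf(stderr, "MPI_Comm_set_errhandler failed: %d\n", rc);
    }

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello world from process %d out of %d!\n", rank, size);

    MPI_Finalize();
    return 0;
}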
Re: [OMPI users] Problems building Open MPI 1.4.1 with Pathscale
Hello,

It does work with version 1.4. This is the hello world that hangs with 1.4.1:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
  int node, size;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &node);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  printf("Hello World from Node %d of %d.\n", node, size);

  MPI_Finalize();
  return 0;
}

On Tue, 2010-01-26 at 03:57 -0500, Åke Sandgren wrote:
> 1 - Do you have problems with openmpi 1.4 too? (I don't, haven't built
> 1.4.1 yet)
> 2 - There is a bug in the pathscale compiler with -fPIC and -g that
> generates incorrect dwarf2 data so debuggers get really confused and
> will have BIG problems debugging the code. I'm chasing them to get a
> fix...
> 3 - Do you have an example code that has problems?

--
Rafael Arco Arredondo
Centro de Servicios de Informática y Redes de Comunicaciones
Universidad de Granada
[OMPI users] Problems building Open MPI 1.4.1 with Pathscale
Hello:

I'm having some issues with Open MPI 1.4.1 and the PathScale compiler
(version 3.2). Open MPI builds successfully with the following configure
arguments:

./configure --with-openib=/usr --with-openib-libdir=/usr/lib64 --with-sge --enable-static CC=pathcc CXX=pathCC F77=pathf90 F90=pathf90 FC=pathf90

(we have OpenFabrics 1.2 InfiniBand drivers, by the way)

However, applications hang on MPI_Init (or maybe on MPI_Comm_rank or
MPI_Comm_size; in any case, a basic hello-world never prints 'Hello World
from node...'). I tried running them with and without SGE, with the same
result.

This hello-world works flawlessly when I build Open MPI with GCC:

./configure --with-openib=/usr --with-openib-libdir=/usr/lib64 --with-sge --enable-static

This successful execution was run on a single machine, so it should not use
InfiniBand, and it also works when several nodes are used.

I was able to build previous versions of Open MPI with PathScale (1.2.6 and
1.3.2, in particular). I tried building version 1.4.1 with both PathScale
3.2 and PathScale 3.1, with no difference.

Any ideas?

Thank you in advance,

Rafa

--
Rafael Arco Arredondo
Centro de Servicios de Informática y Redes de Comunicaciones
Universidad de Granada
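To narrow down which call never returns, one option is a variant of the
hello world with flushed progress markers; this is just a sketch, and the
file name and marker strings are purely illustrative:

/* hello_trace.c - hello world with progress markers to locate the hang. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int node = -1, size = -1;

    fprintf(stderr, "before MPI_Init\n");      fflush(stderr);
    MPI_Init(&argc, &argv);
    fprintf(stderr, "after MPI_Init\n");       fflush(stderr);

    MPI_Comm_rank(MPI_COMM_WORLD, &node);
    fprintf(stderr, "after MPI_Comm_rank\n");  fflush(stderr);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    fprintf(stderr, "after MPI_Comm_size\n");  fflush(stderr);

    printf("Hello World from Node %d of %d.\n", node, size);

    MPI_Finalize();
    return 0;
}

Built with mpicc and launched on a single node with, for example,
'mpirun -np 2 --mca btl self,sm ./hello_trace', the last marker printed by
each process shows where it stops; restricting the BTLs to self and sm also
keeps InfiniBand out of the picture.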