This is a problem with the PathScale compiler and old versions of GCC.  See:

    http://www.open-mpi.org/faq/?category=building#pathscale-broken-with-mpi-c%2B%2B-api

I note that you said you're already using GCC 4.x, but it's not clear from your 
text whether PathScale is using that compiler or a different GCC on the 
back-end.  If you can confirm that PathScale *is* using GCC 4.x on the 
back-end, then this is worth reporting to the PathScale support people.
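One quick way to check is to ask the driver itself.  This is only a sketch: the `pathCC` driver name is from your report, but the GCC-style `-dumpversion` flag is an assumption (PathScale's drivers generally mimic the GNU driver interface); consult your installation's documentation if it doesn't accept it.

```shell
# Sketch of a back-end version check.  The -dumpversion flag is an
# assumption (PathScale's drivers usually accept GCC-style flags);
# adjust to whatever your installation documents.
backend_version=$(pathCC -dumpversion 2>/dev/null || echo unknown)
major=${backend_version%%.*}
case "$major" in
  [4-9]) echo "back-end GCC $backend_version looks like 4.x or newer" ;;
  *)     echo "could not confirm a 4.x back-end (got: $backend_version)" ;;
esac
```

`pathCC -v` (again, assuming GCC-style flags) should also print the full driver configuration, which usually names the GNU front-end it was built against.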



On Sep 21, 2010, at 7:31 AM, Rafael Arco Arredondo wrote:

> Hello,
> 
> In January, I reported a problem with Open MPI 1.4.1 and PathScale 3.2
> about a simple Hello World that hung on initialization
> ( http://www.open-mpi.org/community/lists/users/2010/01/11863.php ).
> Open MPI 1.4.2 does not show this problem.
> 
> However, now we are having trouble with the 1.4.2, PathScale 3.2, and
> the C++ bindings. The following code:
> 
> #include <iostream>
> #include <mpi.h>
> 
> int main(int argc, char* argv[]) {
> 
>  MPI::Init(argc, argv);
>  MPI::COMM_WORLD.Set_errhandler(MPI::ERRORS_THROW_EXCEPTIONS);
> 
>  try {
>    int rank = MPI::COMM_WORLD.Get_rank();
>    int size = MPI::COMM_WORLD.Get_size();
> 
>    std::cout << "Hello world from process " << rank << " out of "
>      << size << "!" << std::endl;
>  }
> 
>  catch (const MPI::Exception& e) {
>    std::cerr << "MPI Error: " << e.Get_error_code()
>      << " - " << e.Get_error_string() << std::endl;
>  }
> 
>  MPI::Finalize();
>  return 0;
> }
> 
> generates the following output:
> 
> [host1:29934] *** An error occurred in MPI_Comm_set_errhandler
> [host1:29934] *** on communicator MPI_COMM_WORLD
> [host1:29934] *** MPI_ERR_COMM: invalid communicator
> [host1:29934] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
> --------------------------------------------------------------------------
> mpirun has exited due to process rank 2 with PID 29934 on
> node host1 exiting without calling "finalize". This may
> have caused other processes in the application to be
> terminated by signals sent by mpirun (as reported here).
> --------------------------------------------------------------------------
> [host1:29931] 3 more processes have sent help message
> help-mpi-errors.txt / mpi_errors_are_fatal
> [host1:29931] Set MCA parameter "orte_base_help_aggregate" to 0 to see
> all help / error messages
> 
> There are no problems when Open MPI 1.4.2 is built with GCC (GCC 4.1.2).
> No problems are found with Open MPI 1.2.6 and PathScale either.
> 
> Best regards,
> 
> Rafa
> 
> -- 
> Rafael Arco Arredondo
> Centro de Servicios de Informática y Redes de Comunicaciones
> Campus de Fuentenueva - Edificio Mecenas
> Universidad de Granada
> 
> 
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/