My C++ is a little rusty.  Is that returned intercommunicator going
where you think it is?  If you unroll the loop, does the same badness
happen?
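
Something like this is what I had in mind (just a sketch; it assumes
your intercomm is declared as MPI_Comm intercomm[2] and that the child
binary is ./spawn1, as in your mail).  Since the backtrace dies in the
dynamic-process cleanup under PMPI_Finalize, the sketch also
disconnects both children explicitly before finalizing, as an extra
check:

#include <mpi.h>

int main (int argc, char **argv)
{
  MPI_Comm intercomm[2];   /* one intercommunicator per spawn */

  MPI_Init (&argc, &argv);

  /* Unrolled version of the loop: each call gets its own slot.  If the
   * real intercomm is not an array of two MPI_Comm's, &intercomm[1]
   * inside the loop writes past the end of it. */
  MPI_Comm_spawn ("./spawn1", MPI_ARGV_NULL, 1, MPI_INFO_NULL, 0,
                  MPI_COMM_SELF, &intercomm[0], MPI_ERRCODES_IGNORE);
  MPI_Comm_spawn ("./spawn1", MPI_ARGV_NULL, 1, MPI_INFO_NULL, 0,
                  MPI_COMM_SELF, &intercomm[1], MPI_ERRCODES_IGNORE);

  /* Tear down the dynamic communicators ourselves before MPI_Finalize
   * has to clean them up. */
  MPI_Comm_disconnect (&intercomm[0]);
  MPI_Comm_disconnect (&intercomm[1]);

  MPI_Finalize ();
  return 0;
}

If the unrolled version survives but the loop version does not, I would
look hard at how intercomm is declared in the failing program.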


On Mon, 2008-03-31 at 02:41 -0300, Joao Vicente Lima wrote:
> Hi,
> sorry to bring this up again ... but I hope to use spawn in OMPI someday :-D
> 
> The execution of spawn in this way works fine:
> MPI_Comm_spawn ("./spawn1", MPI_ARGV_NULL, 2, MPI_INFO_NULL, 0,
>                 MPI_COMM_SELF, &intercomm, MPI_ERRCODES_IGNORE);
> 
> but if this code goes into a for loop I get a problem:
> for (i = 0; i < 2; i++)
> {
>   MPI_Comm_spawn ("./spawn1", MPI_ARGV_NULL, 1, MPI_INFO_NULL, 0,
>                   MPI_COMM_SELF, &intercomm[i], MPI_ERRCODES_IGNORE);
> }
> 
> and the error is:
> spawning ...
> child!
> child!
> [localhost:03892] *** Process received signal ***
> [localhost:03892] Signal: Segmentation fault (11)
> [localhost:03892] Signal code: Address not mapped (1)
> [localhost:03892] Failing at address: 0xc8
> [localhost:03892] [ 0] /lib/libpthread.so.0 [0x2ac71ca8bed0]
> [localhost:03892] [ 1]
> /usr/local/mpi/ompi-svn/lib/libmpi.so.0(ompi_dpm_base_dyn_finalize+0xa3)
> [0x2ac71ba7448c]
> [localhost:03892] [ 2] /usr/local/mpi/ompi-svn/lib/libmpi.so.0 
> [0x2ac71b9decdf]
> [localhost:03892] [ 3] /usr/local/mpi/ompi-svn/lib/libmpi.so.0 
> [0x2ac71ba04765]
> [localhost:03892] [ 4]
> /usr/local/mpi/ompi-svn/lib/libmpi.so.0(PMPI_Finalize+0x71)
> [0x2ac71ba365c9]
> [localhost:03892] [ 5] ./spawn1(main+0xaa) [0x400ac2]
> [localhost:03892] [ 6] /lib/libc.so.6(__libc_start_main+0xf4) [0x2ac71ccb7b74]
> [localhost:03892] [ 7] ./spawn1 [0x400989]
> [localhost:03892] *** End of error message ***
> --------------------------------------------------------------------------
> mpirun noticed that process rank 0 with PID 3892 on node localhost
> exited on signal 11 (Segmentation fault).
> --------------------------------------------------------------------------
> 
> The attachments contain the ompi_info output, config.log, and the program.
> 
> thanks for taking a look,
> Joao.
