That helped. The automatic recovery logic was missing a check that keeps it from
starting up while a migration is still in progress. r24326 should fix this bug.
The segfault was most likely just residual fallout from it.
Can you try the current trunk to confirm?
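To illustrate the kind of guard that was missing (a minimal sketch with made-up
names, not the actual ORTE errmgr code), the idea is roughly:

    /* Illustrative sketch only -- hypothetical names, not the real code. */
    #include <stdbool.h>

    static bool migration_in_progress = false;  /* set while a migration runs */

    static int start_automatic_recovery(void)
    {
        /* The missing check: without it, automatic recovery could kick in
         * while a migration is still rearranging processes. */
        if (migration_in_progress) {
            return 0;  /* defer recovery until the migration completes */
        }

        /* ... restart the failed processes from their last checkpoint ... */
        return 0;
    }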
One other thing I no
Hi Josh.
As you say, the first problem was because of the name of the node. But the
second problem persists (the segmentation fault). As you asked, I'm sending you
the output of the run with the MCA params you gave me. At the end of the file I
put the output of the second terminal.
Best Regards
So I was not able to reproduce this issue.
A couple notes:
- You can see the node-to-process-rank mapping using the '-display-map'
command line option to mpirun. This will give you the node names that Open MPI
is using, and how it intends to lay out the processes. You can use the
'-display-allocation' option to see the nodes that Open MPI believes have been
allocated to the job; example invocations are sketched below.
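For example, adapting the command line from your mail below (a sketch only;
paths and program arguments will differ on your setup):

    shell$ mpirun -np 2 -display-map -am ft-enable-cr-recovery ./whoami 10 10
    shell$ mpirun -np 2 -display-allocation -am ft-enable-cr-recovery ./whoami 10 10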
On Jan 31, 2011, at 6:47 AM, Hugo Meyer wrote:
Hi Joshua.
I've tried the migration again, and I get the following (running the process
where mpirun is running):
Terminal 1:
[hmeyer@clus9 whoami]$ /home/hmeyer/desarrollo/ompi-code/binarios/bin/mpirun -np 2 -am
ft-enable-cr-recovery --mca orte_base_help_aggregate 0 ./whoami 10 10
Antes de MPI_Init