I found a simple and quick way to put mpirun in a deadlocked state. It works every time you do it. It looks like mpirun does not realize that all the other processes are gone and it is left alone.

Step 1: Start mpirun with 2 processes (use an application that takes some time to run, such as NetPIPE).

A simple lsof -p <mpirun pid> shows all the file descriptors in use. One can see the LISTEN socket as well as all the iof sockets.

orterun 26611 bosilca 3u IPv4 0x04843590 0t0 TCP *:49511 (LISTEN)
orterun 26611 bosilca 4u IPv4 0x04841b20 0t0 TCP applebasket.cs.utk.edu:49511->applebasket.cs.utk.edu:49513 (ESTABLISHED)
orterun 26611 bosilca 5u IPv4 0x03068cc0 0t0 TCP applebasket.cs.utk.edu:49511->applebasket.cs.utk.edu:49517 (ESTABLISHED)
orterun 26611 bosilca 6u IPv4 0x03067b20 0t0 TCP applebasket.cs.utk.edu:49511->applebasket.cs.utk.edu:49519 (ESTABLISHED)

Step 2: Kill one of the 2 processes started at step 1.

Step 3: Wait forever ...

A quick ps shows that the processes as well as their orted daemons have disappeared. Only the mpirun is left. Another lsof on the mpirun process shows that all sockets have been closed, with the exception of the LISTEN one.

orterun 26611 bosilca 3u IPv4 0x04843590 0t0 TCP *:49511 (LISTEN)

The stack of the mpirun looks like this:

#0 opal_event_loop (flags=1) at ../../../ompi-trunk/opal/event/event.c:495
#1 0x003755c4 in opal_progress () at ../../ompi-trunk/opal/runtime/opal_progress.c:259
#2 0x00003830 in opal_condition_wait (c=0xa8e8, m=0xa8a8) at ../../../../ompi-trunk/opal/threads/condition.h:81
#3 0x0000322c in orterun (argc=10, argv=0xbffff3b8) at ../../../../ompi-trunk/orte/tools/orterun/orterun.c:447
#4 0x000029a4 in main (argc=10, argv=0xbffff3b8) at ../../../../ompi-trunk/orte/tools/orterun/main.c:13

I checked the FDSET on the mpirun process and it looks correct. Only stdin and the LISTEN socket were in it.

  Hope this helps,
    george.


"We must accept finite disappointment, but we must never lose infinite
hope."
                                  Martin Luther King
