Dear users,
I am quite new to OpenMPI. I have compiled it on two identical nodes, each with
8 CPU cores. The code I am using runs in parallel across the 8 cores of a single
node without any problems. However, whenever I try to run across both nodes,
OpenMPI simply hangs: there is no output whatsoever, and when I run the job in
the background with output redirected to a log file, the log file stays empty.
The cores do not appear to be doing anything at all, on either the host node or
the remote node. This happens whether I run my own code or tell mpirun to launch
a program that doesn't even exist, for instance

mpirun -np 4 -host node1,node2 random

simply results in the terminal hanging, so all I can do is close the terminal
and open a new one, while

mpirun -np 4 -host node1,node2 random >& log.log &

simply produces an empty log.log file.
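For concreteness, while mpirun hangs I can look for the Open MPI launch daemon
(orted) on each machine with something along these lines (node1 and node2 as in
the commands above):

ps -ef | grep [o]rted                    # on the host node
ssh node2 "ps -ef | grep [o]rted"        # on the remote node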

I am running Red Hat Linux on both systems and compiled OpenMPI with the Intel
compilers (version 10.1). As I've said, it works fine on a single node. I have
set up both nodes so that they can log into each other via ssh without a
password, and I have altered my .bashrc so that PATH and LD_LIBRARY_PATH
include the appropriate directories.
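For reference, the lines I added look roughly like this (with /opt/openmpi
standing in for wherever OpenMPI is actually installed):

# /opt/openmpi below is just a placeholder for the actual install prefix
export PATH=/opt/openmpi/bin:$PATH
export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH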

I have looked through the FAQ and the mailing lists, but I was unable to find
anything that really matched my problem. Any help would be greatly appreciated.

Sincerely,
Robertson Burgess
University of Newcastle
