There is no high-speed network, only eth0. So MPI communication must be TCP
over eth0. I have tried forcing eth0 with --mca btl_tcp_if_include eth0, and
also by specifying the eth0 subnet. (Looking at the btl_tcp_component.c
source, I see that the subnet is just translated back into the
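The two forcing attempts described above would look roughly like the following; the application name and the subnet are placeholders, not values from this cluster:

```shell
# Force the TCP BTL onto the physical interface by name:
mpirun --mca btl_tcp_if_include eth0 -np 8 ./my_app

# Or select it by CIDR subnet (substitute the actual eth0 network):
mpirun --mca btl_tcp_if_include 192.168.1.0/24 -np 8 ./my_app
```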
Using mpirun --mca btl_tcp_if_exclude eth0 should fix your problem. Alternatively,
you can add it to your configuration file. Everything is extensively
described in the FAQ.
George.
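As a sketch, the exclude parameter from the reply above can be given either on the command line or persistently via the per-user MCA parameter file (~/.openmpi/mca-params.conf is Open MPI's default per-user location; the application name is a placeholder):

```shell
# One-off, on the command line, with the interface list as given in the reply:
mpirun --mca btl_tcp_if_exclude eth0 -np 8 ./my_app

# Or persistently, via the per-user MCA parameter file:
mkdir -p ~/.openmpi
echo "btl_tcp_if_exclude = eth0" >> ~/.openmpi/mca-params.conf
```

Note that btl_tcp_if_include and btl_tcp_if_exclude are mutually exclusive; set one or the other, not both.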
On Jan 26, 2015 12:11 PM, "Kris Kersten" wrote:
I'm working on an ethernet cluster that uses virtual eth0:* interfaces on the
compute nodes for IPMI and system management. As described in Trac ticket
#3339 (https://svn.open-mpi.org/trac/ompi/ticket/3339), this setup confuses
the TCP BTL, which can't differentiate between the physical and
Hi,
I am using Open MPI 1.8.4 on an Ubuntu 14.04 machine and 5 Ubuntu 12.04
machines. I am using ssh to launch MPI jobs and I'm able to run simple
programs like 'mpirun -np 8 --host localhost,pachy1 hostname' and get
the expected output (pachy1 being an entry in my /etc/hosts file).
I
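For illustration, the working invocation described above depends on the remote host resolving by name; a hypothetical /etc/hosts entry (the address is invented) and the command would look like:

```shell
# Hypothetical /etc/hosts entry backing the pachy1 name:
#   192.168.0.11  pachy1

# Launch 8 processes spread across the local machine and pachy1 over ssh:
mpirun -np 8 --host localhost,pachy1 hostname
```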