Re: [OMPI users] TCP BTL and virtual network interfaces, bug #3339

2015-01-26 Thread Kris Kersten
There is no high-speed network, only eth0. So MPI communication must be TCP over eth0. I have tried forcing eth0 with --mca btl_tcp_if_include eth0, and also by specifying the eth0 subnet. (Looking at the btl_tcp_component.c source, I see that the subnet is just translated back into the
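The two selection forms Kris mentions both exist in the TCP BTL: selecting by interface name and selecting by CIDR subnet. A minimal sketch of the two invocations (hostnames and subnet are placeholders for this cluster, not values from the thread):

```shell
# Select the TCP interface by name...
mpirun --mca btl_tcp_if_include eth0 -np 8 ./a.out

# ...or by CIDR subnet; the BTL maps the subnet back to matching interfaces,
# which is why eth0:* aliases on the same subnet can still be picked up
mpirun --mca btl_tcp_if_include 192.168.1.0/24 -np 8 ./a.out
```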

Re: [OMPI users] TCP BTL and virtual network interfaces, bug #3339


2015-01-26 Thread George Bosilca
Using mpirun --mca btl_tcp_if_exclude eth0 should fix your problem. Otherwise you can add it to your configuration file. Everything is extensively described in the FAQ. George. On Jan 26, 2015 12:11 PM, "Kris Kersten" wrote: > I'm working on an ethernet cluster that uses
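The "configuration file" George refers to is the per-user MCA parameter file, which on a typical Open MPI install is read from $HOME/.openmpi/mca-params.conf. A minimal sketch, assuming eth0 is the interface you want to pin:

```
# $HOME/.openmpi/mca-params.conf
# One "name = value" MCA parameter per line; applies to every mpirun
# launched by this user, so no --mca flag is needed on the command line.
btl_tcp_if_include = eth0
```

Parameters given on the mpirun command line override entries in this file, so the file is a safe place for a cluster-wide default.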

[OMPI users] TCP BTL and virtual network interfaces, bug #3339

2015-01-26 Thread Kris Kersten
I'm working on an ethernet cluster that uses virtual eth0:* interfaces on the compute nodes for IPMI and system management. As described in Trac ticket #3339 (https://svn.open-mpi.org/trac/ompi/ticket/3339 ), this setup confuses the TCP BTL, which can't differentiate between the physical and
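To see the alias interfaces that trigger this confusion, you can list the node's addresses; a quick sketch assuming Linux with iproute2 (and legacy net-tools as a fallback):

```shell
# Alias labels such as eth0:0 appear in the label column of each address line
ip -o addr show

# Older net-tools shows each eth0:* alias as its own entry
ifconfig -a | grep '^eth0'
```

If the aliases share eth0's subnet, subnet-based btl_tcp_if_include cannot distinguish them, which is the heart of ticket #3339.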

[OMPI users] orted seg fault when using MPI_Comm_spawn on more than one host

2015-01-26 Thread Evan
Hi, I am using OpenMPI 1.8.4 on an Ubuntu 14.04 machine and 5 Ubuntu 12.04 machines. I am using ssh to launch MPI jobs and I'm able to run simple programs like 'mpirun -np 8 --host localhost,pachy1 hostname' and get the expected output (pachy1 being an entry in my /etc/hosts file). I
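For context on the kind of program that exercises this orted code path, here is a minimal MPI_Comm_spawn parent sketch. The "./child" binary and the host list are hypothetical placeholders (the host list mirrors the localhost,pachy1 pair from the message); compile with mpicc and run under mpirun:

```c
/* Minimal MPI_Comm_spawn sketch: the parent spawns 4 copies of a
 * hypothetical "./child" binary, possibly on remote hosts, which is
 * where the remote orted daemons get involved. */
#include <mpi.h>

int main(int argc, char *argv[]) {
    MPI_Comm intercomm;
    int errcodes[4];
    MPI_Info info;

    MPI_Init(&argc, &argv);

    MPI_Info_create(&info);
    /* Hypothetical host list; adjust to match your /etc/hosts entries */
    MPI_Info_set(info, "host", "localhost,pachy1");

    MPI_Comm_spawn("./child", MPI_ARGV_NULL, 4, info,
                   0, MPI_COMM_SELF, &intercomm, errcodes);

    MPI_Info_free(&info);
    MPI_Comm_disconnect(&intercomm);
    MPI_Finalize();
    return 0;
}
```

Spawning onto a second host is what forces mpirun to launch an orted on that host, so a crash that appears only with more than one host usually points at the daemon launch or wire-up path rather than the application code.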