I think you mean:
mpirun -mca oob_tcp_if_exclude eth0
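For what it's worth, if the goal is to keep all Open MPI traffic (not just the runtime wireup) off eth0, the TCP BTL can be told to skip it as well; btl_tcp_if_exclude is the corresponding parameter for MPI traffic, and the host names below are just placeholders:

mpirun -mca oob_tcp_if_exclude eth0 -mca btl_tcp_if_exclude lo,eth0 --host vm1,vm2 ./a.out

(When btl_tcp_if_exclude is set by hand it replaces the default exclusion list, so lo has to be listed again explicitly.)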
Actually, it doesn't work. But if I turn down eth0 on the virtual machines
(sudo ifdown eth0), everything works like a charm!
Finally, I can start to do something more complicated with Open MPI.
Thank you all very much!
Son.
Try running it with
mpirun -mca oob_tcp_if_exclude=10.0.2.15
That will tell OMPI to ignore the 10.0.2.15 interface
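If you are not sure which interface on the VMs actually carries 10.0.2.15 (so that it can also be excluded by name, e.g. eth0), listing the addresses on each node should tell you; either of these should work on Fedora:

/sbin/ifconfig -a
ip addr show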
On Apr 29, 2010, at 7:36 AM, Nguyen Kim Son wrote:
I think the problem resides in orted. When I tested mpirun on 2 virtual
machines (Fedora) running under Windows, the communication between the two goes
through eth1, not eth0. After launching
ps aux | grep orted
the result is:
/usr/lib/openmpi/bin/orted --daemonize -mca ess env -mca orte_ess_jobid -1233
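For reference, one way to confirm which interface the daemons really use is to look at orted's established TCP connections while a job is running (netstat is just one option here):

netstat -tnp | grep orted

The local and remote addresses in that output show whether the eth0 or the eth1 address is being used.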
Sent from my PDA. No type good.
From: users-boun...@open-mpi.org
To: Open MPI Users
Sent: Wed Apr 28 04:08:23 2010
Subject: Re: [OMPI users] mpirun works locally but not through network
Thanks for your suggestion!
"$ mpirun --host loca
From: Jeff Squyres
Subject: Re: [OMPI users] mpirun works locally but not through network
To: "Open MPI Users"
List-Post: users@lists.open-mpi.org
Date: Tuesday, April 27, 2010, 7:46 AM
I'm not intimately familiar with boost++ -- you might want to try the "hello
world" and "ring" example programs in the OMPI examples/ directory as a
baseline.
Additionally, try executing a non-MPI program such as "hostname" to verify that
your remote connectivity is working. For example:
$ mp
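A minimal sketch of that kind of check, assuming two nodes called vm1 and vm2 (substitute the real host names):

$ mpirun -np 2 --host vm1,vm2 hostname

If both machines print their host name, the remote startup path (ssh + orted) is working and the problem lies elsewhere.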
Hi all,
I'm writing a small program where the process of rank 0 sends "alo
alo" to the process of rank 1, and then process 1 shows this message on
the screen. I am using the boost++ library, but the result stays the same when I use the MPI
standard.
The program works locally (that means: mpirun --host l
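In case a known-good minimal version is useful for comparison, here is a sketch of the exchange described above (rank 0 sends "alo alo", rank 1 prints it) written against Boost.MPI; it is only an illustration of the pattern, not the actual program:

#include <boost/mpi.hpp>
#include <boost/serialization/string.hpp>
#include <iostream>
#include <string>

namespace mpi = boost::mpi;

int main(int argc, char* argv[])
{
    mpi::environment env(argc, argv);   // initializes MPI, finalizes on destruction
    mpi::communicator world;            // wraps MPI_COMM_WORLD

    if (world.rank() == 0) {
        world.send(1, 0, std::string("alo alo"));   // destination rank 1, tag 0
    } else if (world.rank() == 1) {
        std::string msg;
        world.recv(0, 0, msg);                      // source rank 0, tag 0
        std::cout << msg << std::endl;
    }
    return 0;
}

Built with something like "mpic++ alo.cpp -lboost_mpi -lboost_serialization -o alo" and run with "mpirun -np 2 --host vm1,vm2 ./alo" (host names are placeholders), this exercises exactly the send/recv path in question.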