There are two sets of sockets: one for the OOB layer and one for the
MPI layer (at least if TCP support is enabled). Therefore, in order
to achieve what you're looking for, you should add
"--mca oob_tcp_if_include lo0 --mca btl_tcp_if_include lo0" to the
command line.
george.
On May 29, 200
We have run into the following problem:
- start up an Open MPI application on a laptop
- disconnect from the network
- the application hangs
I believe the problem is that all sockets created by Open MPI
are bound to the external network interface.
For example, when I start up a 2-process MPI job on
On May 29, 2007, at 12:25 PM, smai...@ksu.edu wrote:
Hi,
I am doing research on parallel computing on shared memory with a
NUMA architecture. The system is a 4-node AMD Opteron, each node
being a dual-core. I am testing an Open MPI program with the number of
MPI processes <= the maximum number of cores available on the system
(in my case 4*2 = 8). Can someone tell me whether:
a) In s
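(Not part of the original question, but a minimal sketch like the following
can show where each MPI process actually ends up on such a machine; it
assumes Linux, since sched_getcpu() is glibc-specific:)

  #define _GNU_SOURCE
  #include <sched.h>
  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char *argv[])
  {
      int rank, size, len;
      char host[MPI_MAX_PROCESSOR_NAME];

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);
      MPI_Get_processor_name(host, &len);
      /* report which core each rank is currently running on */
      printf("rank %d of %d on %s, core %d\n", rank, size, host, sched_getcpu());
      MPI_Finalize();
      return 0;
  }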
Hello,
Recently my administrator made some changes to our cluster, and now I
get a crash during MPI_Barrier:
[our-host:12566] *** Process received signal ***
[our-host:12566] Signal: Segmentation fault (11)
[our-host:12566] Signal code: Address not mapped (1)
[our-host:12566] Failing at addres
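(Not from the original report, but a minimal program such as the following,
built with the usual mpicc/mpirun wrappers, can help tell whether the
segfault comes from the application itself or from the changed
installation:)

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char *argv[])
  {
      int rank;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      printf("rank %d reached the barrier\n", rank);
      MPI_Barrier(MPI_COMM_WORLD);   /* the call that crashes in the report */
      printf("rank %d passed the barrier\n", rank);
      MPI_Finalize();
      return 0;
  }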
On Mon, 2007-05-28 at 22:42 -0400, Richard Graham wrote:
> Tahir,
> There are a variety of ways to create shared memory segments. I am not a
> Fortran expert, but I do believe this is something that needs to be done
> from C/C++.
... or by using OpenMP, surely?
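To make that suggestion concrete, here is a minimal sketch in C (not from
the thread) of the OpenMP route: variables declared outside a parallel
region are shared among the threads by default, so no explicit shared
memory segment has to be created. The same pattern is available from
Fortran via the !$omp directives.

  #include <omp.h>
  #include <stdio.h>

  #define N 8

  int main(void)
  {
      int shared_data[N] = {0};         /* shared by all threads by default */

      #pragma omp parallel              /* spawn a team of threads */
      {
          int tid = omp_get_thread_num();
          if (tid < N)
              shared_data[tid] = tid * tid;   /* each thread fills its own slot */
      }

      for (int i = 0; i < N; i++)
          printf("shared_data[%d] = %d\n", i, shared_data[i]);
      return 0;
  }

Compile with, e.g., "gcc -fopenmp" (or your compiler's equivalent flag).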