Yep -- it's normal.
Those IP addresses are used for bootstrapping/startup, not for MPI traffic. In
particular, that "HNP URI" stuff is used by Open MPI's underlying run-time
environment. It's not used by the MPI layer at all.
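If you want to double-check that the MPI traffic itself is going over the verbs stack, something like the following should do it -- this is just a rough sketch, and the mdrun_mpi arguments are placeholders for your real job:

    # Confirm the openib BTL was built into this Open MPI install
    ompi_info | grep openib

    # Force IB (no TCP fallback) and turn up BTL verbosity
    mpirun --mca btl openib,sm,self --mca btl_base_verbose 30 -np 4 mdrun_mpi ...

If openib can't actually be used between the nodes, the second command should abort with an error rather than silently falling back to TCP.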
On Feb 5, 2010, at 2:32 PM, Mike Hanby wrote:
Howdy,
When running a Gromacs job using Open MPI 1.4.1 on InfiniBand-enabled nodes, I'm
seeing the following process listing:
\_ -bash /opt/gridengine/default/spool/compute-0-3/job_scripts/97037
\_ mpirun -np 4 mdrun_mpi -v -np 4 -s production-Npt-323K_4CPU -o
production-Npt-323K_4CPU -c pro
Correct, you don't need DAPL. Can you send all the information listed
here:
http://www.open-mpi.org/community/help/
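For IB issues, the items that usually matter most can be gathered with something like this (assuming ompi_info and the OFED utilities are in your PATH):

    ompi_info --all > ompi_info.txt   # full Open MPI configuration
    ibv_devinfo                       # HCA / port state as seen by libibverbs
    ulimit -l                         # locked-memory limit (a common openib pitfall)

plus the Open MPI version and, if you built it yourself, the config.log from that build.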
On Sep 17, 2009, at 9:17 AM, Yann JOBIC wrote:
Hi,
I'm new to InfiniBand.
I installed the rdma_cm, rdma_ucm and ib_uverbs kernel modules.
When I'm running an Open MPI ring-test code, I get:
[Lidia][0,1,1][btl_openib_endpoint.c:992:mca_btl_openib_endpoint_qp_init_query]
Set MTU to IBV value 4 (2048 bytes)
[Lidia][0,1,1][btl_openib_endpoint