Re: [OMPI users] Infiniband Question

2010-02-05 Thread Jeff Squyres
Yep -- it's normal. Those IP addresses are used for bootstrapping/startup, not for MPI traffic. In particular, that "HNP URI" stuff is used by Open MPI's underlying run-time environment. It's not used by the MPI layer at all. On Feb 5, 2010, at 2:32 PM, Mike Hanby wrote: > Howdy, > > When
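A quick way to confirm that for yourself is to pin Open MPI to the InfiniBand byte-transfer layer, so the job aborts instead of silently falling back to TCP. This is a sketch against the 1.4-era MCA parameters, not a command from the thread:

    mpirun --mca btl openib,self,sm -np 4 mdrun_mpi ...

If that runs, the MPI traffic itself is going over InfiniBand; the IP addresses in the HNP URI are only used to wire up the run-time.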

[OMPI users] Infiniband Question

2010-02-05 Thread Mike Hanby
Howdy, When running a Gromacs job using Open MPI 1.4.1 on InfiniBand-enabled nodes, I'm seeing the following process listing:
\_ -bash /opt/gridengine/default/spool/compute-0-3/job_scripts/97037
   \_ mpirun -np 4 mdrun_mpi -v -np 4 -s production-Npt-323K_4CPU -o production-Npt-323K_4CPU -c pro
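A first sanity check in a setup like this (not part of the original message) is whether the Open MPI build even has the InfiniBand transport compiled in:

    ompi_info | grep openib

An output line along the lines of "MCA btl: openib (MCA v2.0, API v2.0, Component v1.4.1)" means the openib BTL is available, and it will normally be preferred over TCP on IB-enabled nodes.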

Re: [OMPI users] infiniband question

2009-09-17 Thread Jeff Squyres
Correct, you don't need DAPL. Can you send all the information listed here: http://www.open-mpi.org/community/help/ On Sep 17, 2009, at 9:17 AM, Yann JOBIC wrote: Hi, I'm new to InfiniBand. I installed the rdma_cm, rdma_ucm and ib_uverbs kernel modules. When I'm running a ring test o
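For OpenFabrics problems, that page asks (roughly, this is a paraphrase of the checklist rather than a quote from it) for the full ompi_info output plus a verbs-level view of the adapters on each node:

    ompi_info --all > ompi_info.txt
    ibv_devinfo

ibv_devinfo ships with libibverbs and reports the HCA port state; a port must show PORT_ACTIVE before the openib BTL can use it.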

[OMPI users] infiniband question

2009-09-17 Thread Yann JOBIC
Hi, I'm new to InfiniBand. I installed the rdma_cm, rdma_ucm and ib_uverbs kernel modules. When I run a ring test Open MPI code, I get:
[Lidia][0,1,1][btl_openib_endpoint.c:992:mca_btl_openib_endpoint_qp_init_query] Set MTU to IBV value 4 (2048 bytes)
[Lidia][0,1,1][btl_openib_endpoint
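For reference, a ring test of the kind mentioned is only a few lines of MPI. This is a generic sketch in C, not Yann's actual program:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, token = 42;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int next = (rank + 1) % size;        /* right-hand neighbor */
        int prev = (rank + size - 1) % size; /* left-hand neighbor  */

        if (rank == 0) {
            /* Rank 0 injects the token, then waits for it to come back around. */
            MPI_Send(&token, 1, MPI_INT, next, 0, MPI_COMM_WORLD);
            MPI_Recv(&token, 1, MPI_INT, prev, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("Token made it around %d ranks\n", size);
        } else {
            /* Everyone else receives from the left and forwards to the right. */
            MPI_Recv(&token, 1, MPI_INT, prev, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&token, 1, MPI_INT, next, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }

Build and run with something like mpicc ring.c -o ring && mpirun -np 2 ring. Note that the [Lidia] lines quoted above come from queue-pair setup in the openib BTL (btl_openib_endpoint.c) and report the MTU being set on the connection; by themselves they do not indicate a failure.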