Re: [OMPI users] Bug in oob_tcp_[in|ex]clude?

2007-12-18 Thread Jeff Squyres
On Dec 18, 2007, at 11:12 AM, Marco Sbrighi wrote: >> Assumedly this (these) statement(s) are in a config file that is being read by Open MPI, such as $HOME/.openmpi/mca-params.conf? > I've tried many combinations: only in $HOME/.openmpi/mca-params.conf, only on the command line, and both; but none seems
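For reference, the two ways of setting these parameters look roughly like the sketch below. The parameter names (oob_tcp_include, btl_tcp_if_include) are the ones discussed in this thread; the interface name eth1, the process count, and the program name are illustrative placeholders, not the values actually used on the cluster in question.

  # $HOME/.openmpi/mca-params.conf -- read by Open MPI at startup
  oob_tcp_include = eth1
  btl_tcp_if_include = eth1

  # Equivalent settings passed on the mpirun command line
  mpirun --mca oob_tcp_include eth1 --mca btl_tcp_if_include eth1 -np 4 ./my_app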

Re: [OMPI users] Bug in oob_tcp_[in|ex]clude?

2007-12-18 Thread Marco Sbrighi
On Mon, 2007-12-17 at 20:58 -0500, Brian Dobbins wrote: > Hi Marco and Jeff, > My own knowledge of OpenMPI's internals is limited, but I thought I'd add my less-than-two-cents... >> I've found only one way to have TCP connections bound only to >> the et

Re: [OMPI users] Bug in oob_tcp_[in|ex]clude?

2007-12-18 Thread Marco Sbrighi
On Mon, 2007-12-17 at 17:19 -0500, Jeff Squyres wrote: > On Dec 17, 2007, at 8:35 AM, Marco Sbrighi wrote: >> I'm using Open MPI 1.2.2 over OFED 1.2 on a 256-node, dual-Opteron, dual-core Linux cluster. Of course, with InfiniBand 4x interconnect. >> Each cluster node is equipped wit

Re: [OMPI users] Bug in oob_tcp_[in|ex]clude?

2007-12-17 Thread Brian Dobbins
Hi Marco and Jeff, My own knowledge of OpenMPI's internals is limited, but I thought I'd add my less-than-two-cents... > I've found only one way to have TCP connections bound only to > the eth1 interface, using both the following MCA directives on the > command line: > > mpirun
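The quoted command is cut off in the archive; a hypothetical command of the kind being described (restricting both the out-of-band channel and the TCP BTL to eth1, or alternatively excluding the other interfaces, per the oob_tcp_[in|ex]clude parameters in the subject line) might look like the following. The interface names eth0, ib0, ib1, the process count, and the program name are assumptions for illustration only.

  # Include style: bind OOB and TCP BTL traffic to eth1 only
  mpirun --mca oob_tcp_include eth1 --mca btl_tcp_if_include eth1 -np 16 ./my_app

  # Exclude style: the counterpart using the _exclude parameters
  mpirun --mca oob_tcp_exclude eth0,ib0,ib1 --mca btl_tcp_if_exclude eth0,ib0,ib1 -np 16 ./my_app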

Re: [OMPI users] Bug in oob_tcp_[in|ex]clude?

2007-12-17 Thread Jeff Squyres
On Dec 17, 2007, at 8:35 AM, Marco Sbrighi wrote: I'm using Open MPI 1.2.2 over OFED 1.2 on a 256-node, dual-Opteron, dual-core Linux cluster. Of course, with InfiniBand 4x interconnect. Each cluster node is equipped with 4 (or more) ethernet interfaces, namely 2 gigabit ones plus 2 IPoIB. Th

[OMPI users] Bug in oob_tcp_[in|ex]clude?

2007-12-17 Thread Marco Sbrighi
Dear Open MPI developers, I'm using Open MPI 1.2.2 over OFED 1.2 on a 256-node, dual-Opteron, dual-core Linux cluster. Of course, with InfiniBand 4x interconnect. Each cluster node is equipped with 4 (or more) ethernet interfaces, namely 2 gigabit ones plus 2 IPoIB. The two gig are named et
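One way to sanity-check which TCP-related parameters a given Open MPI 1.2.2 installation actually recognizes (and their current defaults) is ompi_info; the exact output format varies by build, but the queries themselves are standard:

  # List the MCA parameters of the TCP out-of-band (oob) component
  ompi_info --param oob tcp

  # List the MCA parameters of the TCP byte-transfer-layer (btl) component
  ompi_info --param btl tcp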