It looks like this is a feature, not a bug. :-)
ompi_info is specifically clearing out the environment variables
corresponding to framework names (e.g., "btl"). If ompi_info didn't
do that, the OMPI core would only load the components that you have
specified in the environment and not show you all of the available
components and parameters.
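As a quick sketch of that behavior (bash syntax assumed; the grep is
just one way to pick out the relevant line):

    # mpirun would honor this restriction, but ompi_info deliberately
    # ignores it so it can report on every available component
    export OMPI_MCA_btl=tcp,self
    ompi_info --param btl tcp | grep 'parameter "btl"'
    #   MCA btl: parameter "btl" (current value: )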
Hi all,
I may have found a bug:
I'm setting the following environment variables:
OMPI_MCA_btl="tcp,self"
OMPI_MCA_btl_tcp_if_include="eth2"
OMPI_MCA_btl_tcp_if_exclude="eth0"
However, when I run 'ompi_info --param btl tcp' I see (among other
things):
MCA btl: parameter "btl" (current value: )
MCA btl
You are absolutely right; the MCA parameter names between the oob and
btl tcp modules were slightly different (btl_tcp_if_[in|ex]clude and
oob_tcp_[in|ex]clude). And the FAQ is wrong -- I just updated it;
sorry about that, and thanks for pointing it out!
I literally just fixed this on the web site.
On May 24, 2007, at 10:38 AM, Adams, Samuel D Contr AFRL/HEDR wrote:
We recently got 33 new cluster nodes, all of which have two onboard
GigE NICs. We also got two PowerConnect 2748 48-port switches which
support IEEE 802.3ad (link aggregation). I have configured the nodes
to do Ethernet bonding to aggregate the two NICs into one bonded
device.
To add to what George said -- it looks like you have multiple
different implementations of MPI installed on your machine (LAM/MPI,
MPICH, MPICH2, ...?). Ensure that you completely compile and run
your application with *one* implementation of MPI (they are not
binary compatible).
Keep in
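As a hedged sketch, one way to check which installation a wrapper
compiler belongs to (-showme is an Open MPI wrapper flag; other
implementations have their own equivalents):

    # see which mpicxx is first in your PATH
    which mpicxx
    # Open MPI's wrappers print the underlying compile/link command
    mpicxx -showme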
Hi,
For Open MPI users struggling with cluster/grid setups in combination
with particular networks to use or avoid, it may be useful to know
that the documentation on http://www.open-mpi.org/faq/?category=tcp
regarding the oob module is currently misleading. The options that
actually implement this selection are oob_tcp_include and
oob_tcp_exclude (with no "if_" in the names, unlike the corresponding
btl_tcp parameters).
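As a sketch of the corrected usage (eth2 is just an example interface
name):

    # the btl parameters take "if_"; the oob parameters do not
    export OMPI_MCA_btl_tcp_if_include=eth2
    export OMPI_MCA_oob_tcp_include=eth2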
There are two problems. First, it looks like you're using LAM and not
Open MPI, as there are some missing lam_ symbols. Second, please use
mpicxx to link your application, as it will add all the missing
libraries.
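For example, something along these lines (file names here are
hypothetical):

    # compile and link everything with the Open MPI C++ wrapper,
    # which supplies the MPI include paths and libraries
    mpicxx -c main.cpp
    mpicxx -o xoopic main.o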
george.
On May 24, 2007, at 10:38 AM, Jung, Soon-wook wrote:
Hello, users?
Currently, I'm trying to compile XOOPIC (2D plasma simulation program,
MPI parallel operation available) in conjunction with MPI.
We recently got 33 new cluster nodes, all of which have two onboard
GigE NICs. We also got two PowerConnect 2748 48-port switches which
support IEEE 802.3ad (link aggregation). I have configured the nodes
to do Ethernet bonding to aggregate the two NICs into one bonded
device:
http://www.cyberci
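As a rough sketch of an 802.3ad setup on Linux (the address is made
up, exact steps vary by distro and kernel, and the switch ports must
also be configured for LACP):

    # load the bonding driver in 802.3ad (LACP) mode
    modprobe bonding mode=802.3ad miimon=100
    # bring up the bonded device, then enslave both physical NICs
    ifconfig bond0 192.168.0.10 netmask 255.255.255.0 up
    ifenslave bond0 eth0 eth1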
Hello, users?
Currently, I'm trying to compile XOOPIC (2D plasma simulation program, MPI
parallel operation available) in conjunction with MPI.
I had no problem with XOOPIC compilation in single-machine operation mode;
however, when MPI mode is enabled, it generated four or more pages
of error messages.