Hi Scott,
I believe this has been corrected on the trunk. The fix should hit the
1.1 release branch tonight.
Thanks,
Galen
On May 9, 2006, at 10:27 AM, Scott Weitzenkamp (sweitzen) wrote:
Pallas runs OK up to Alltoall test, then we get:
/data/software/qa/MPI/openmpi-1.1a2-rhel4-`uname -m`-
Hi Everyone,
This is going to be a long email, so please bear with me. The example programs
are from the lam-mpi.org site ...
My ultimate goal is to get Open MPI working with the OpenIB stack. First, I
installed LAM/MPI; I know it doesn't have support for OpenIB, but it's still
relevant
Has anyone been able to build openmpi-1.0.2 with the NAG Fortran
compilers? I have been trying with no luck; here is what I have
tried and the resulting errors.
export FC=f95
./configure --prefix=/home/software/rhel4/openmpi-1.0.2/nag --with-tm=/home/software/torque-2.0.0p8
make
./script
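One thing worth checking with Open MPI 1.0.x builds: configure picks up the Fortran 77 and Fortran 90 compilers from the standard autoconf F77 and FC environment variables separately, so exporting only FC can leave the f77 wrapper pointing at a different compiler. A hedged sketch (the prefix and Torque paths are taken from the commands above; using NAG's f95 driver for F77 as well is an assumption):

```shell
# Sketch only: point both Fortran front-ends at NAG's f95 driver.
# F77=f95 is an assumption; adjust paths to your installation.
export FC=f95
export F77=f95
./configure --prefix=/home/software/rhel4/openmpi-1.0.2/nag \
            --with-tm=/home/software/torque-2.0.0p8
make
make install
```

If configure still fails, the relevant details will be near the end of config.log.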
The latest release of Open MPI is installed into /usr/local on the 05 May release of PK. Any nifty
examples showing usage would be welcome for a future release.
PK home: http://pareto.uab.es/mcreel/ParallelKnoppix/
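In the spirit of nifty examples, here is a minimal MPI "hello world" in C (a sketch only; it uses nothing beyond core MPI-1 calls, and mpicc/mpirun are the standard Open MPI wrapper and launcher names):

```c
/* hello_mpi.c -- minimal MPI example: each rank reports itself. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id     */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}
```

With the install under /usr/local, this would be compiled and run along the lines of:

    /usr/local/bin/mpicc hello_mpi.c -o hello_mpi
    /usr/local/bin/mpirun -np 4 hello_mpi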
Regards, Michael
On May 10, 2006, at 9:08 AM, Mahesh Barve wrote:
I am trying to build a cluster with 2 nodes, each
being a dual-processor Xeon machine. I have installed
Open MPI on one of the machines in the /opt/open-mpi folder
and have kept the folder shared across the network
through NFS, mounted again in the same f
Quoting "Jeff Squyres (jsquyres)" :
> If you are looking for the path of least resistance, then going back to
> MPICH is probably your best bet (there is certainly merit in "it ain't
> broke, so don't fix it").
>
True - but where is your sense of adventure!!
> However, there may be a few other f
Hi,
I am trying to build a cluster with 2 nodes, each
being a dual-processor Xeon machine. I have installed
Open MPI on one of the machines in the /opt/open-mpi folder
and have kept the folder shared across the network
through NFS, mounted again in the same folder.
Now I would like to run mpi code involv
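For a setup like this, where the install is NFS-mounted at the same path on both nodes, Open MPI needs passwordless ssh (or rsh) between the nodes and a hostfile naming them. A hedged sketch (the hostnames node1/node2 are made-up placeholders; the slot counts match the dual-processor machines described above):

```shell
# Sketch only: 'node1' and 'node2' are placeholder hostnames.
# Each line names a node and how many processes it may host.
cat > myhosts <<'EOF'
node1 slots=2
node2 slots=2
EOF

# Launch 4 processes, 2 per dual-processor node.
/opt/open-mpi/bin/mpirun --hostfile myhosts -np 4 ./a.out
```

The remote node also needs to find the Open MPI binaries and libraries: either put /opt/open-mpi/bin on PATH and /opt/open-mpi/lib on LD_LIBRARY_PATH in the non-interactive shell startup files there, or pass --prefix /opt/open-mpi to mpirun.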