On Friday 19 November 2010 01:03:35 HeeJin Kim wrote:
...
> mlx4: There is a mismatch between the kernel and the userspace
> libraries: Kernel does not support XRC. Exiting.
...
> What I'm thinking is that the InfiniBand card is installed but it doesn't
> work in the correct mode.
> My linux
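The XRC error quoted above typically means the userspace verbs libraries were
built with XRC support but the running kernel driver was not, i.e. the two
sides of the stack are out of sync. A quick way to compare them, sketched here
assuming an OFED-based install (package names vary by distribution):

    ofed_info -s                          # version of the installed OFED stack
    modinfo mlx4_core | grep -i version   # version of the loaded kernel driver
    rpm -q libmlx4 libibverbs             # versions of the userspace libraries

If the libraries come from a newer OFED release than the kernel modules,
installing a matched OFED release on both sides is the usual fix.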
On Nov 18, 2010, at 7:03 PM, HeeJin Kim wrote:
> I'm using a Mellanox InfiniBand network card and trying to run it with Open MPI.
> The problem is that I can connect and communicate between nodes, but I'm not
> sure whether it is in the correct state or not.
>
> I have two versions of Open MPI; one is
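A quick way to tell whether a run is really using InfiniBand (a sketch, not
from the thread; the hostfile and application names are placeholders) is to
confirm an openib BTL component is present and then force it, since the job
will abort if that transport cannot be brought up:

    ompi_info | grep btl                  # an "openib" BTL should be listed
    mpirun -np 2 -hostfile ./machinefile \
        -mca btl openib,self ./your_mpi_app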
BTW, OMPI 1.2.8 will also be available as part of OFED 1.4, which will be
released at the end of
On Nov 20, 2008, at 4:16 PM, Michael Oevermann wrote:
> with a blank space after /machine. Anyway, your suggested options -mca
> btl openib,sm,self
> did help!!!
The specific tip here is that on Linux, you want to use the openib
BTL, not the udapl BTL. Specifying "--mca btl openib,sm,self" means
Open MPI will use only the openib (InfiniBand), sm (shared memory), and
self (loopback) transports, and will never fall back to udapl.
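For example, a run forcing those three transports might look like this (the
hostfile and benchmark names here are placeholders, not from the thread):

    mpirun -np 4 -hostfile ./machinefile \
        -mca btl openib,sm,self ./IMB-MPI1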
BTW - after you get more comfortable with your new-to-you cluster, I
recommend you upgrade your Open MPI installation. v1.2.8 has
a lot of bugfixes relative to v1.2.2. Also, Open MPI 1.3 should be
available "next month"... so watch for an announcement on that front.
On Thu, Nov 20, 2008 at 3:16
Hi Ralph,
that was indeed a typo; the command is of course
/usr/mpi/gcc4/openmpi-1.2.2-1/bin/mpirun -np 4 -hostfile
/home/sysgen/infiniband-mpi-test/machine
/usr/mpi/gcc4/openmpi-1.2.2-1/tests/IMB-2.3/IMB-MPI1
with a blank space after /machine. Anyway, your suggested options -mca btl
openib,sm,self did help!!!
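Put together with the options Jeff suggested, the working invocation would be
something like this (assembled from the paths and options quoted in this
thread):

    /usr/mpi/gcc4/openmpi-1.2.2-1/bin/mpirun -np 4 \
        -mca btl openib,sm,self \
        -hostfile /home/sysgen/infiniband-mpi-test/machine \
        /usr/mpi/gcc4/openmpi-1.2.2-1/tests/IMB-2.3/IMB-MPI1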
Hi all,
I have "inherited" a small cluster with a head node and four compute
nodes which I have to administer. The nodes are connected via infiniband (OFED), but the head is not.
I am a complete novice to the infiniband stuff and here is my problem:
The infiniband configuration seems to be
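For a first health check on a setup like this, the standard OFED diagnostic
tools (not part of the original message, listed here as a sketch) are a good
starting point:

    ibstat          # HCA and port state; active ports should show "Active"
    ibv_devinfo     # userspace (verbs) view of the adapter
    ibhosts         # nodes visible on the fabric, run from any connected node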