Hi Jakob, thanks for the reply.
Please see below.
On Tue, Sep 1, 2009 at 1:40 PM, J.S. van Bethlehem <j.s.van.bethle...@astro.rug.nl> wrote:
> From the look of it, this is not an OMPI problem, but a problem with
> your paths. You need to make sure that libGLU.so.1 can be found by the
> system at runtime. This is true for _all_ the systems that are in your
> machinefile. So make sure that on all systems the path to that library
> is in the LD_LIBRARY_PATH.
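A quick way to verify this (a sketch; the /usr/lib64 path is an assumption, and the binary is the flo2d executable from the original question below) is to run ldd on the program on each node and, if needed, forward the variable through mpirun's -x option:
$ ldd ../bin/flo2d | grep libGLU
$ export LD_LIBRARY_PATH=/usr/lib64:$LD_LIBRARY_PATH
$ /opt/openmpi/bin/mpirun -x LD_LIBRARY_PATH -np 4 -machinefile machines ../bin/flo2d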
Hi all,
A simple program on my 4-node ROCKS cluster runs fine with the command:
/opt/openmpi/bin/mpirun -np 4 -machinefile machines ./mpi-ring
Another, bigger program runs fine on the head node only with the command:
cd ./sphere; /opt/openmpi/bin/mpirun -np 4 ../bin/flo2d
But with the command:
cd /sp
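For reference, a minimal token-ring program of the kind mpi-ring usually is (an illustrative sketch of the classic exercise, not the poster's actual source; it assumes at least two ranks):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, token;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        /* Rank 0 injects the token, then waits for it to come back around. */
        token = 42;
        MPI_Send(&token, 1, MPI_INT, 1 % size, 0, MPI_COMM_WORLD);
        MPI_Recv(&token, 1, MPI_INT, size - 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    } else {
        /* Everyone else receives from the left neighbour, then passes right. */
        MPI_Recv(&token, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        MPI_Send(&token, 1, MPI_INT, (rank + 1) % size, 0, MPI_COMM_WORLD);
    }
    printf("Process %d of %d saw token %d\n", rank, size, token);

    MPI_Finalize();
    return 0;
}

If a program like this runs across all four nodes but the bigger program does not, the difference likely lies in that program's runtime dependencies or working directory rather than in Open MPI itself.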
On Apr 25, 2009, at 11:59 AM, Anton Starikov wrote:
I can confirm that I have exactly the same problem, also on a Dell
system, even with the latest openmpi.
Our system is:
Dell M905
OpenSUSE 11.1
kernel: 2.6.27.21-0.1-default
ofed-1.4-21.12 from the SUSE repositories.
OpenMPI-1.3.2
But what I can also add: it does not only affect openmpi, if this messages
Per http://www.open-mpi.org/community/lists/announce/2009/03/0029.php,
can you try upgrading to Open MPI v1.3.2?
On Apr 24, 2009, at 5:21 AM, jan wrote:
Dear Sir,
I’m running a cluster with OpenMPI.
$mpirun --mca mpi_show_mpi_alloc_mem_leaks 8 --mca mpi_show_handle_leaks 1 $HOME/test/cpi
I got this error message as the job failed:
Process 15 on node2
Process 6 on node1
Process 14 on node2
… … …
Process 0 on node1
Process 10 on node
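For context, cpi is usually the classic compute-pi MPI demo, and as I understand the two MCA parameters, mpi_show_handle_leaks reports MPI handles that were never freed before MPI_Finalize, while mpi_show_mpi_alloc_mem_leaks reports memory obtained with MPI_Alloc_mem that was never released. A minimal sketch of the cpi pattern (assumed, not jan's actual source) that produces the "Process N on host" lines quoted above:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, len, i, n = 100000;
    char name[MPI_MAX_PROCESSOR_NAME];
    double h, x, local = 0.0, pi = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* This is the line that produces output like "Process 15 on node2". */
    MPI_Get_processor_name(name, &len);
    printf("Process %d on %s\n", rank, name);

    /* Midpoint rule for the integral of 4/(1+x^2) over [0,1], which is pi. */
    h = 1.0 / (double)n;
    for (i = rank; i < n; i += size) {
        x = h * ((double)i + 0.5);
        local += 4.0 / (1.0 + x * x);
    }
    local *= h;

    /* Collect the partial sums on rank 0. */
    MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("pi is approximately %.16f\n", pi);

    MPI_Finalize();
    return 0;
}

Since the Process lines were printed before the job died, MPI_Init and the initial output evidently succeeded; the leak checks themselves only report at MPI_Finalize, so the failure presumably occurred later in the run.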