Re: [OMPI users] Running problem

2009-09-02 Thread amjad ali
Hi Jakob, thanks for the reply. Please see below. On Tue, Sep 1, 2009 at 1:40 PM, J.S. van Bethlehem <j.s.van.bethle...@astro.rug.nl> wrote: > From the look of it, this is not an OMPI problem, but a problem with > your paths. You need to make sure that libGLU.so.1 can be found by the > system at runtime…
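
The fix being quoted here amounts to exporting the runtime linker path to every rank. A minimal sketch, assuming the library lives in /usr/lib64 (the actual location on these nodes is not given in the thread):

  $ export LD_LIBRARY_PATH=/usr/lib64:$LD_LIBRARY_PATH
  $ /opt/openmpi/bin/mpirun -np 4 -machinefile machines \
        -x LD_LIBRARY_PATH ./mpi-ring

mpirun's -x flag forwards the named environment variable to the remote processes, so every node in the machinefile resolves libGLU.so.1 the same way the head node does.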

Re: [OMPI users] Running problem

2009-09-01 Thread J.S. van Bethlehem
From the look of it, this is not an OMPI problem, but a problem with your paths. You need to make sure that libGLU.so.1 can be found by the system at runtime. This is true for _all_ the systems that are in your machinefile. So make sure that on all systems the path to that library is in the LD_LIBRARY_PATH…
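
To verify this per node, something like the following can be run on each system in the machinefile (a sketch; the flo2d binary name is taken from the question below, and the /usr/lib64 path is an assumption):

  $ ldd ../bin/flo2d | grep "not found"   # which shared libraries fail to resolve?
  $ ldconfig -p | grep libGLU             # is libGLU already in the linker cache?
  $ echo 'export LD_LIBRARY_PATH=/usr/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc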

[OMPI users] Running problem

2009-09-01 Thread amjad ali
Hi all, a simple program on my 4-node ROCKS cluster runs fine with the command: /opt/openmpi/bin/mpirun -np 4 -machinefile machines ./mpi-ring Another, bigger program runs fine on the head node only, with the command: cd ./sphere; /opt/openmpi/bin/mpirun -np 4 ../bin/flo2d But with the command: cd /sp…
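
Since the job works from the head node but not across nodes, one thing worth checking is whether the relative path ../bin/flo2d resolves on the compute nodes. A sketch using absolute paths together with mpirun's -wdir option; the /home/user prefix is hypothetical:

  $ /opt/openmpi/bin/mpirun -np 4 -machinefile machines \
        -wdir /home/user/sphere /home/user/bin/flo2d

-wdir tells each launched process to change to the given directory before starting, which only helps if that directory exists (e.g. via NFS) on every node.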

Re: [OMPI users] running problem on Dell blade server, confirm 2d21ce3ce8be64d8104b3ad71b8c59e2514a72eb

2009-04-29 Thread Jeff Squyres
On Apr 25, 2009, at 11:59 AM, Anton Starikov wrote: I can confirm that I have exactly the same problem, also on a Dell system, even with the latest openmpi. Our system is: Dell M905, OpenSUSE 11.1, kernel 2.6.27.21-0.1-default, ofed-1.4-21.12 from SUSE repositories, OpenMPI-1.3.2. But what I can also…

[OMPI users] running problem on Dell blade server, confirm 2d21ce3ce8be64d8104b3ad71b8c59e2514a72eb

2009-04-25 Thread Anton Starikov
I can confirm that I have exactly the same problem, also on a Dell system, even with the latest openmpi. Our system is: Dell M905, OpenSUSE 11.1, kernel 2.6.27.21-0.1-default, ofed-1.4-21.12 from SUSE repositories, OpenMPI-1.3.2. But what I can also add: it not only affects openmpi; if these messages…
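
The version details quoted above can be collected with standard tools, which is useful when confirming a shared bug report (a sketch; ofed_info is only present on OFED installs):

  $ uname -r                         # kernel, e.g. 2.6.27.21-0.1-default
  $ ofed_info | head -1              # OFED stack version (first line of output)
  $ ompi_info | grep "Open MPI:"     # reports the Open MPI version, e.g. 1.3.2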

Re: [OMPI users] running problem on Dell blade server, confirm 2d21ce3ce8be64d8104b3ad71b8c59e2514a72eb

2009-04-24 Thread Jeff Squyres
Per http://www.open-mpi.org/community/lists/announce/2009/03/0029.php, can you try upgrading to Open MPI v1.3.2? On Apr 24, 2009, at 5:21 AM, jan wrote: Dear Sir, I’m running a cluster with OpenMPI. $ mpirun --mca mpi_show_mpi_alloc_mem_leaks 8 --mca mpi_show_handle_leaks 1 $HOME/test/cpi…
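
Before and after an upgrade like the one suggested here, it is worth confirming which Open MPI build mpirun actually resolves to (a sketch using standard Open MPI tooling):

  $ which mpirun                     # is the intended installation first in PATH?
  $ ompi_info | grep "Open MPI:"     # prints the version of that installation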

[OMPI users] running problem on Dell blade server, confirm 2d21ce3ce8be64d8104b3ad71b8c59e2514a72eb

2009-04-24 Thread jan
Dear Sir, I’m running a cluster with OpenMPI. $ mpirun --mca mpi_show_mpi_alloc_mem_leaks 8 --mca mpi_show_handle_leaks 1 $HOME/test/cpi I got the following error message when the job failed: Process 15 on node2 Process 6 on node1 Process 14 on node2 … Process 0 on node1 Process 10 on node…
