Re: [OMPI users] (no subject)

2013-11-02 Thread San B
Yes MM... But here a single node has 16 cores, not 64 cores. The first two jobs were with OMPI-1.4.5: 16 cores on a single node - 3692.403; 16 cores on two nodes (8 cores per node) - 12338.809. The next two jobs were with OMPI-1.6.5: 16 cores on a single node - 3547.879; 16 cores on
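The scale of the slowdown in the figures above can be checked directly; this is only arithmetic on the timings already quoted in the message, nothing more:

```python
# Wall-clock times in seconds, as quoted above for OMPI-1.4.5.
single_node_16c = 3692.403   # 16 cores on one node
two_nodes_8x2 = 12338.809    # 16 cores split across two nodes (8 per node)

slowdown = two_nodes_8x2 / single_node_16c
print(f"two-node run is {slowdown:.2f}x slower")  # prints "two-node run is 3.34x slower"
```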

Re: [OMPI users] (no subject)

2013-10-29 Thread San B
with OpenMPI-1.6.5 and got executed in 5527.320 seconds on two nodes. Is this a performance gain with OMPI-1.6.5 over OMPI-1.4.5, or an issue with Open MPI itself? On Tue, Oct 15, 2013 at 5:32 PM, San B <forum@gmail.com> wrote: > Hi, > > As per your instruction, I di

Re: [OMPI users] (no subject)

2013-10-15 Thread San B
. Thanks On Mon, Oct 7, 2013 at 12:15 PM, San B <forum@gmail.com> wrote: > Hi, > > I'm facing a performance issue with a scientific application (Fortran). > The issue is that it runs fast on a single node but very slowly on multiple > nodes. For example, a 16 core job on

[OMPI users] (no subject)

2013-10-07 Thread San B
Hi, I'm facing a performance issue with a scientific application (Fortran). The issue is that it runs fast on a single node but very slowly on multiple nodes. For example, a 16 core job on a single node finishes in 1 hr 2 mins, but the same job on two nodes (i.e. 8 cores per node & remaining 8 cores
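As a quick sanity check on the figures in this thread: the 1 hr 2 min single-node time converts to 3720 seconds, which can be compared against the 5527.320 s two-node time reported in a later reply for OMPI-1.6.5 (arithmetic only; both numbers are quoted in this thread):

```python
# Single-node time from this message; two-node time from the later OMPI-1.6.5 reply.
single_node = 1 * 3600 + 2 * 60   # 1 hr 2 min = 3720 s
two_nodes_165 = 5527.320          # seconds, 8 cores per node on two nodes

ratio = two_nodes_165 / single_node
print(f"two nodes still {ratio:.2f}x slower than one node")  # prints "two nodes still 1.49x slower than one node"
```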

[OMPI users] OpenMPI-1.6.1: Warning - registering physical memory for MPI jobs

2012-09-05 Thread San B
OpenMPI-1.6.1 is installed on a Rocks-5.5 Linux cluster with Intel compilers and OFED-1.5.3. A sample Hello World MPI program gives the following warning message: /mpi/openmpi/1.6.1/intel/bin/mpirun -np 4 ./mpi -- WARNING:
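The warning text is truncated above, but "registering physical memory" warnings on OFED clusters are commonly tied to the locked-memory (memlock) limit, which the InfiniBand stack needs for memory registration. A minimal diagnostic sketch, assuming that is the warning in question (the limit should typically be "unlimited" on compute nodes):

```shell
# Show the current locked-memory limit for this shell.
# A small value (e.g. 64) rather than "unlimited" is a common cause of
# Open MPI's registered-memory warnings on OFED systems.
ulimit -l
```

If the value is small, it is usually raised via the `memlock` entries in `/etc/security/limits.conf` on every compute node (and the change must reach non-interactive daemons such as the batch system's, not just login shells).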