Re: [OMPI users] OpenMPI-1.1 virtual memory overhead

2006-08-25 Thread Sven Stork
Hello Miguel,

On Friday 25 August 2006 15:40, Miguel Figueiredo Mascarenhas Sousa Filipe 
wrote:
> Hi,
> 
> On 8/25/06, Sven Stork  wrote:
> >
> > Hello Miguel,
> >
> > this is caused by the shared memory mempool. By default, this shared
> > memory mapping has a size of 512 MB. You can use the "mpool_sm_size"
> > parameter to reduce that size, e.g.
> >
> > mpirun -mca mpool_sm_size  ...
> 
> 
> 
> Is using
> mpirun -mca mpool_sm_size 0
> acceptable?
> What will it fall back to? Sockets? Pipes? TCP? Smoke signals?

0 will not work. But if you don't need shared memory communication, you can
disable the sm BTL like this:

mpirun -mca btl ^sm 
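
For instance, something like the following should work (the process count
and the application name are just placeholders, and the pool size value
assumes the parameter is given in bytes):

  # run without the shared memory BTL at all
  mpirun -np 4 -mca btl ^sm ./your_app
  # or keep sm but shrink the shared memory pool to 128 MB
  mpirun -np 4 -mca mpool_sm_size 134217728 ./your_app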

Thanks,
Sven

> Thank you very much for the fast answer.
> 
> Thanks,
> > Sven
> >
> > On Friday 25 August 2006 15:04, Miguel Figueiredo Mascarenhas Sousa Filipe
> > wrote:
> > > Hi there,
> > > I'm using openmpi-1.1 on a linux-amd64 machine and also in a 32-bit
> > > x86 Linux chroot environment on that same machine.
> > > (distro is gentoo, compilers: gcc-4.1.1 and gcc-3.4.6)
> > >
> > > In both cases openmpi-1.1 shows roughly 400 MB of overhead in virtual
> > > memory usage (virtual address space) for each MPI process.
> > >
> > > In my case this is quite troublesome, because in 32-bit mode my
> > > application counts on using the whole 4 GB address space for the
> > > problem set and its associated data.
> > > This reduces the size of the problems it can solve.
> > > (my application isn't 64-bit safe yet, so I need to run in 32-bit
> > > mode and use the 4 GB address space effectively)
> > >
> > > Is there a way to reduce this overhead, by configuring openmpi to use
> > > smaller buffers, or anything else?
> > >
> > > I do not see this with mpich2.
> > >
> > > Best regards,
> > >
> > > --
> > > Miguel Sousa Filipe
> > >
> 
> 
> 
> -- 
> Miguel Sousa Filipe
> 


Re: [OMPI users] OpenMPI-1.1 virtual memory overhead

2006-08-25 Thread Miguel Figueiredo Mascarenhas Sousa Filipe

Hi,

On 8/25/06, Sven Stork  wrote:


Hello Miguel,

this is caused by the shared memory mempool. By default, this shared memory
mapping has a size of 512 MB. You can use the "mpool_sm_size" parameter to
reduce that size, e.g.

mpirun -mca mpool_sm_size  ...




Is using
mpirun -mca mpool_sm_size 0
acceptable?
What will it fall back to? Sockets? Pipes? TCP? Smoke signals?

Thank you very much for the fast answer.

Thanks,

Sven

On Friday 25 August 2006 15:04, Miguel Figueiredo Mascarenhas Sousa Filipe
wrote:
> Hi there,
> I'm using openmpi-1.1 on a linux-amd64 machine and also in a 32-bit
> x86 Linux chroot environment on that same machine.
> (distro is gentoo, compilers: gcc-4.1.1 and gcc-3.4.6)
>
> In both cases openmpi-1.1 shows roughly 400 MB of overhead in virtual
> memory usage (virtual address space) for each MPI process.
>
> In my case this is quite troublesome, because in 32-bit mode my
> application counts on using the whole 4 GB address space for the
> problem set and its associated data.
> This reduces the size of the problems it can solve.
> (my application isn't 64-bit safe yet, so I need to run in 32-bit mode
> and use the 4 GB address space effectively)
>
> Is there a way to reduce this overhead, by configuring openmpi to use
> smaller buffers, or anything else?
>
> I do not see this with mpich2.
>
> Best regards,
>
> --
> Miguel Sousa Filipe
>





--
Miguel Sousa Filipe


Re: [OMPI users] OpenMPI-1.1 virtual memory overhead

2006-08-25 Thread George Bosilca
I suspect this is the shared memory used to communicate between processes.
Please run your application adding the flag "--mca btl tcp,self" to the
mpirun command line (before the application name). If the virtual memory
usage goes down, then the 400 MB are definitely coming from the shared
memory, and there are ways to limit this amount
(http://www.open-mpi.org/faq/?category=tuning provides a full range of
options).
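
For example, something like this (the process count and application name
are just placeholders):

  # use only the TCP and self BTLs, bypassing shared memory
  mpirun -np 4 --mca btl tcp,self ./your_app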


Otherwise ... we will have to find out where they come from some other way.

  Thanks,
george.

On Fri, 25 Aug 2006, Miguel Figueiredo Mascarenhas Sousa Filipe wrote:


Hi there,
I'm using openmpi-1.1 on a linux-amd64 machine and also in a 32-bit x86
Linux chroot environment on that same machine.
(distro is gentoo, compilers: gcc-4.1.1 and gcc-3.4.6)

In both cases openmpi-1.1 shows roughly 400 MB of overhead in virtual memory
usage (virtual address space) for each MPI process.

In my case this is quite troublesome, because in 32-bit mode my application
counts on using the whole 4 GB address space for the problem set and its
associated data.
This reduces the size of the problems it can solve.
(my application isn't 64-bit safe yet, so I need to run in 32-bit mode and
use the 4 GB address space effectively)

Is there a way to reduce this overhead, by configuring openmpi to use
smaller buffers, or anything else?

I do not see this with mpich2.

Best regards,




"We must accept finite disappointment, but we must never lose infinite
hope."
  Martin Luther King



Re: [OMPI users] problem with ompi_info

2006-08-25 Thread George Bosilca
The directory where libmpi.so is located has to be added to the
LD_LIBRARY_PATH, and of course the bin directory has to be added to the
PATH. For more information about how and why, please read
http://www.open-mpi.org/faq/?category=running#adding-ompi-to-path
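
For example, assuming Open MPI was installed under /opt/openmpi-1.1
(substitute your actual installation prefix), something like:

  # make the Open MPI binaries and libraries visible to the shell and loader
  export PATH=/opt/openmpi-1.1/bin:$PATH
  export LD_LIBRARY_PATH=/opt/openmpi-1.1/lib:$LD_LIBRARY_PATH
  # verify that ompi_info now finds libmpi.so.0
  ompi_info | head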


  Thanks,
george.


On Fri, 25 Aug 2006, Christine Kreuzer wrote:


Hi,
I tried to install openmpi-1.1 on an AMD64 Opteron dual-core machine (RHEL 4).
I got no error messages from ./configure and make all install.
Not all tests passed (oob_test, oob_test_self, oob_test_packed and test_schema
were skipped), so I ran ompi_info in the bin directory and got the following
error message:

[root@dhcp76-200 openmpi-1.1]# ompi_info
ompi_info: error while loading shared libraries: libmpi.so.0: cannot open
shared object file: No such file or directory

The library libmpi.so.0 exists in the lib directory and is linked to
libmpi.so.0.0.0.

Thanks for any help,
Christine




"We must accept finite disappointment, but we must never lose infinite
hope."
  Martin Luther King



[OMPI users] OpenMPI-1.1 virtual memory overhead

2006-08-25 Thread Miguel Figueiredo Mascarenhas Sousa Filipe

Hi there,
I'm using openmpi-1.1 on a linux-amd64 machine and also in a 32-bit x86
Linux chroot environment on that same machine.
(distro is gentoo, compilers: gcc-4.1.1 and gcc-3.4.6)

In both cases openmpi-1.1 shows roughly 400 MB of overhead in virtual memory
usage (virtual address space) for each MPI process.

In my case this is quite troublesome, because in 32-bit mode my application
counts on using the whole 4 GB address space for the problem set and its
associated data.
This reduces the size of the problems it can solve.
(my application isn't 64-bit safe yet, so I need to run in 32-bit mode and
use the 4 GB address space effectively)

Is there a way to reduce this overhead, by configuring openmpi to use
smaller buffers, or anything else?

I do not see this with mpich2.

Best regards,

--
Miguel Sousa Filipe


Re: [OMPI users] Jumbo frames

2006-08-25 Thread Caird, Andrew J
Massimiliano,

It should work automatically, but I have seen instances where switches
or Ethernet cards can't support the full 9000 bytes per frame, and we've
had to go as low as 6000 bytes to get consistent performance.  It seems
like everyone's interpretation of what the 9000 bytes is for is a little
different.

Does it work with the default 1500-byte setting?  You might try
increasing the MTU in smaller steps to see where it stops working.
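
For example, something along these lines can show whether a given MTU really
makes it end to end (eth0 and the peer hostname are placeholders; the ping
payload is the MTU minus 28 bytes of IP/ICMP headers):

  # set an intermediate MTU on the interface (run on every node, as root)
  ifconfig eth0 mtu 6000
  # send an unfragmented packet that just fits the new MTU
  ping -M do -s 5972 other-node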

Good luck.
--andrew


> -Original Message-
> From: users-boun...@open-mpi.org 
> [mailto:users-boun...@open-mpi.org] On Behalf Of Massimiliano Fatica
> Sent: Friday, August 25, 2006 1:30 AM
> To: us...@open-mpi.org
> Subject: [OMPI users] Jumbo frames
> 
> Hi,
> I am trying to use Jumbo frames, but mpirun will not start the job.
> I am using OpenMPI v1.1 shipped with the latest Rocks (4.2).
> Ifconfig reports that all the NICs on the cluster are using an MTU of
> 9000, and the switch (HP ProCurve) should be able to use Jumbo frames.
> 
> Is there any special flag I need to pass to mpirun or a 
> configuration file I need to edit?
> 
> Thanks
> Massimiliano
> 



[OMPI users] problem with ompi_info

2006-08-25 Thread Christine Kreuzer
Hi,
I tried to install openmpi-1.1 on an AMD64 Opteron dual-core machine (RHEL 4).
I got no error messages from ./configure and make all install.
Not all tests passed (oob_test, oob_test_self, oob_test_packed and test_schema
were skipped), so I ran ompi_info in the bin directory and got the following
error message:

[root@dhcp76-200 openmpi-1.1]# ompi_info
ompi_info: error while loading shared libraries: libmpi.so.0: cannot open
shared object file: No such file or directory

The library libmpi.so.0 exists in the lib directory and is linked to
libmpi.so.0.0.0.

Thanks for any help,
Christine