Re: [OMPI users] OpenMPI-1.1 virtual memory overhead

2006-08-25 Thread Sven Stork
Hello Miguel,

On Friday 25 August 2006 15:40, Miguel Figueiredo Mascarenhas Sousa Filipe wrote:
> Hi,
> 
> On 8/25/06, Sven Stork  wrote:
> >
> > Hello Miguel,
> >
> > this is caused by the shared memory mempool. By default this shared memory
> > mapping has a size of 512 MB. You can use the "mpool_sm_size" parameter to
> > reduce its size, e.g.
> >
> > mpirun -mca mpool_sm_size  ...
> 
> 
> 
> using
> mpirun -mca mpool_sm_size 0
> is acceptable?
> What will it fall back to? Sockets? Pipes? TCP? Smoke signals?

0 will not work. But if you don't need shared memory communication, you can
disable the sm BTL like this:

mpirun -mca btl ^sm
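
For illustration only, complete command lines might look like the following;
the binary name ./my_app and the process count are hypothetical, and the byte
value assumes mpool_sm_size is given in bytes:

mpirun -mca btl ^sm -np 4 ./my_app                  # no shared-memory BTL at all
mpirun -mca mpool_sm_size 67108864 -np 4 ./my_app   # keep sm, but shrink the pool to 64 MB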

Thanks,
Sven

> thank you very much for the fast answer.
> 
> > Thanks,
> > Sven
> >
> > On Friday 25 August 2006 15:04, Miguel Figueiredo Mascarenhas Sousa Filipe wrote:
> > > Hi there,
> > > I'm using openmpi-1.1 on a linux-amd64 machine and also a linux-32bit x86
> > > chroot environment on that same machine.
> > > (distro is gentoo, compilers: gcc-4.1.1 and gcc-3.4.6)
> > >
> > > In both cases openmpi-1.1 shows a +/-400MB overhead in virtual memory usage
> > > (virtual address space usage) for each MPI process.
> > >
> > > In my case this is quite troublesome because my application in 32bit mode is
> > > counting on using the whole 4GB address space for the problem set size and
> > > associated data.
> > > This means a reduction in the size of the problems it can solve.
> > > (my application isn't 64bit safe yet, so I need to run in 32bit mode and use
> > > the 4GB address space effectively)
> > >
> > >
> > > Is there a way to reduce this overhead, by configuring openmpi to use smaller
> > > buffers, or anything else?
> > >
> > > I do not see this with mpich2.
> > >
> > > Best regards,
> > >
> > > --
> > > Miguel Sousa Filipe
> > >
> 
> 
> 
> -- 
> Miguel Sousa Filipe
> 


Re: [OMPI users] OpenMPI-1.1 virtual memory overhead

2006-08-25 Thread Miguel Figueiredo Mascarenhas Sousa Filipe

Hi,

On 8/25/06, Sven Stork  wrote:


> Hello Miguel,
>
> this is caused by the shared memory mempool. By default this shared memory
> mapping has a size of 512 MB. You can use the "mpool_sm_size" parameter to
> reduce its size, e.g.
>
> mpirun -mca mpool_sm_size  ...




using
mpirun -mca mpool_sm_size 0
is acceptable?
What will it fall back to? Sockets? Pipes? TCP? Smoke signals?

thank you very much for the fast answer.

> Thanks,
> Sven

> On Friday 25 August 2006 15:04, Miguel Figueiredo Mascarenhas Sousa Filipe wrote:
> > Hi there,
> > I'm using openmpi-1.1 on a linux-amd64 machine and also a linux-32bit x86
> > chroot environment on that same machine.
> > (distro is gentoo, compilers: gcc-4.1.1 and gcc-3.4.6)
> >
> > In both cases openmpi-1.1 shows a +/-400MB overhead in virtual memory usage
> > (virtual address space usage) for each MPI process.
> >
> > In my case this is quite troublesome because my application in 32bit mode is
> > counting on using the whole 4GB address space for the problem set size and
> > associated data.
> > This means a reduction in the size of the problems it can solve.
> > (my application isn't 64bit safe yet, so I need to run in 32bit mode and use
> > the 4GB address space effectively)
> >
> >
> > Is there a way to reduce this overhead, by configuring openmpi to use smaller
> > buffers, or anything else?
> >
> > I do not see this with mpich2.
> >
> > Best regards,
> >
> > --
> > Miguel Sousa Filipe
> >





--
Miguel Sousa Filipe


Re: [OMPI users] OpenMPI-1.1 virtual memory overhead

2006-08-25 Thread George Bosilca
I suspect this is the shared memory used to communicate between processes.
Please run your application adding the flag "--mca btl tcp,self" to the
mpirun command line (before the application name). If the virtual memory
usage goes down, then the 400MB are definitely coming from the shared
memory, and there are ways to limit this amount
(http://www.open-mpi.org/faq/?category=tuning provides a full range of
options).


Otherwise ... we will have to find out where they come from differently.
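
To illustrate the suggested placement of the flag (the binary name and
process count here are hypothetical):

mpirun --mca btl tcp,self -np 2 ./my_app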

  Thanks,
george.

On Fri, 25 Aug 2006, Miguel Figueiredo Mascarenhas Sousa Filipe wrote:


> Hi there,
> I'm using openmpi-1.1 on a linux-amd64 machine and also a linux-32bit x86
> chroot environment on that same machine.
> (distro is gentoo, compilers: gcc-4.1.1 and gcc-3.4.6)
>
> In both cases openmpi-1.1 shows a +/-400MB overhead in virtual memory usage
> (virtual address space usage) for each MPI process.
>
> In my case this is quite troublesome because my application in 32bit mode is
> counting on using the whole 4GB address space for the problem set size and
> associated data.
> This means a reduction in the size of the problems it can solve.
> (my application isn't 64bit safe yet, so I need to run in 32bit mode and use
> the 4GB address space effectively)
>
>
> Is there a way to reduce this overhead, by configuring openmpi to use smaller
> buffers, or anything else?
>
> I do not see this with mpich2.
>
> Best regards,




"We must accept finite disappointment, but we must never lose infinite
hope."
  Martin Luther King



[OMPI users] OpenMPI-1.1 virtual memory overhead

2006-08-25 Thread Miguel Figueiredo Mascarenhas Sousa Filipe

Hi there,
I'm using openmpi-1.1 on a linux-amd64 machine and also a linux-32bit x86
chroot environment on that same machine.
(distro is gentoo, compilers: gcc-4.1.1 and gcc-3.4.6)

In both cases openmpi-1.1 shows a +/-400MB overhead in virtual memory usage
(virtual address space usage) for each MPI process.
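
As a point of reference, per-process virtual address space usage can be read
from /proc on Linux; the PID below is a placeholder for one of the running
MPI processes:

grep VmSize /proc/<pid>/status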

In my case this is quite troublesome because my application in 32bit mode is
counting on using the whole 4GB address space for the problem set size and
associated data.
This means a reduction in the size of the problems it can solve.
(my application isn't 64bit safe yet, so I need to run in 32bit mode and use
the 4GB address space effectively)


Is there a way to reduce this overhead, by configuring openmpi to use smaller
buffers, or anything else?

I do not see this with mpich2.

Best regards,

--
Miguel Sousa Filipe