Re: [OMPI users] Open MPI compiler is slowed down by including unnecessary header files

2007-09-06 Thread Sven Stork
On Thursday 06 September 2007 02:29, Jeff Squyres wrote:
> Unfortunately, <iostream> is there for a specific reason.  The  
> MPI::SEEK_* names are problematic because they clash with the  
> equivalent C constants.  With the tricks that we have to play to make  
> those constants [at least mostly] work in the MPI C++ namespace, we  
> *must* include them.  The comment in mpicxx.h explains:
> 
> // We need to include the header files that define SEEK_* or use them
> // in ways that require them to be #defines so that if the user
> // includes them later, the double inclusion logic in the headers will
> // prevent trouble from occurring.
> // include so that we can smash SEEK_* properly
> #include <stdio.h>
> // include because on Linux, there is one place that assumes SEEK_* is
> // a #define (it's used in an enum).
> #include <iostream>
> 
> Additionally, much of the C++ MPI bindings are implemented as inline  
> functions, meaning that, yes, it does add lots of extra code to be  
> compiled.  Sadly, that's the price we pay for optimization (the fact  
> that they're inlined allows the cost to be zero -- we used to have a  
> paper on the LAM/MPI web site showing specific performance numbers to  
> back up this claim, but I can't find it anymore :-\ [the OMPI C++  
> bindings were derived from the LAM/MPI C++ bindings]).
> 
> You have two options for speeding up C++ builds:
> 
> 1. Disable OMPI's MPI C++ bindings altogether with the --disable-mpi-cxx
> configure flag.  This means that <mpi.h> won't include any of those
> extra C++ header files at all.
> 
> 2. If you're not using the MPI-2 C++ bindings for the IO
> functionality, you can disable the SEEK_* macros (and therefore
> <stdio.h> and <iostream>) with the --disable-mpi-cxx-seek configure
> flag.

Maybe this could be a third option:

3. Just add -DOMPI_SKIP_MPICXX to your compilation flags to skip the inclusion
of mpicxx.h.
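For example, with Aidan's foo.cpp from the quoted mail below, the extra C++
headers should be skipped with something along these lines (illustrative
command line, assuming only the C API is needed):

$ mpic++ -DOMPI_SKIP_MPICXX -DFOO_MPI -c foo.cpp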

-- Sven 

> See "./configure --help" for a full list of configure flags that are  
> available.
> 
> 
> 
> 
> On Sep 4, 2007, at 4:22 PM, Thompson, Aidan P. wrote:
> 
> > This is more a comment than a question. I think the compile time
> > required for large applications that use Open MPI is unnecessarily
> > long. The situation could be greatly improved by streamlining the
> > number of C++ header files that are included. Currently, LAMMPS
> > (lammps.sandia.gov) takes 61 seconds to compile with a dummy MPI
> > library and 262 seconds with Open MPI, a 4x slowdown.
> >
> > I noticed that iostream.h is included by mpicxx.h, for no good reason. To
> > measure the cost of this, I compiled the following source file 1) without
> > any include files, 2) with mpi.h, 3) with iostream.h, and 4) with both:
> >
> > $ more foo.cpp
> > #ifdef FOO_MPI
> > #include "mpi.h"
> > #endif
> >
> > #ifdef FOO_IO
> > #include <iostream>
> > #endif
> >
> > void foo() {};
> >
> > $ time mpic++ -c foo.cpp
> > 0.04 real 0.02 user 0.02 sys
> > $ time mpic++ -DFOO_MPI -c foo.cpp
> > 0.58 real 0.47 user 0.07 sys
> > $ time mpic++ -DFOO_IO -c foo.cpp
> > 0.30 real 0.23 user 0.05 sys
> > $ time mpic++ -DFOO_IO -DFOO_MPI -c foo.cpp
> > 0.56 real 0.47 user 0.07 sys
> >
> > Including mpi.h adds about 0.5 seconds to the compile time and  
> > iostream
> > accounts for about half of that. With optimization, the effect is even
> > greater. When you have hundreds of source files, that really adds up.
> >
> > How about cleaning up your include system?
> >
> > Aidan
> >
> >
> >
> >
> >
> > -- 
> >   Aidan P. Thompson
> >   01435 Multiscale Dynamic Materials Modeling
> >   Sandia National Laboratories
> >   PO Box 5800, MS 1322 Phone: 505-844-9702
> >   Albuquerque, NM 87185    FAX: 505-845-7442
> >   mailto:atho...@sandia.gov
> >
> >
> >
> 
> 
> -- 
> Jeff Squyres
> Cisco Systems
> 
> 


Re: [OMPI users] OpenMPI and Port Range

2007-08-31 Thread Sven Stork
On Friday 31 August 2007 09:07, Gleb Natapov wrote:
> On Fri, Aug 31, 2007 at 08:04:00AM +0100, Simon Hammond wrote:
> > On 31/08/2007, Lev Givon  wrote:
> > > Received from George Bosilca on Thu, Aug 30, 2007 at 07:42:52PM EDT:
> > > > I have a patch for this, but I never felt a real need for it, so I
> > > > never pushed it into the trunk. I'm not completely convinced that we need
> > > > it, except in some really strange situations (read: grids). Why do you
> > > > need a port range? For avoiding firewalls?
> > 
> > We are planning on using OpenMPI as the basis for running MPI jobs
> > across a series of workstations overnight. The workstations are locked
> > down so that only a small number of ports are available for use. If we
> > try to use anything else, it's a disaster.
> > 
> > Unfortunately this is really an organizational policy above anything
> > else, and it's very difficult to get it to change.
> > 
> > 
> As a workaround you can write an application that binds to all ports that
> MPI is not allowed to use, and run it before starting the MPI job.

Another option could be (if that matches your policy) to limit the dynamic port
range that is used by your OS. That way all applications (unless they ask for
a specific port) will get ports from this limited range. If so, the
following link might be interesting for you:

http://www.ncftp.com/ncftpd/doc/misc/ephemeral_ports.html
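
On Linux, for example, the dynamic range can be restricted as follows (a
sketch; the concrete range is only an illustration and has to match whatever
your policy allows):

# echo "50000 50100" > /proc/sys/net/ipv4/ip_local_port_range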

-- Sven 

> --
>   Gleb.
> 


Re: [OMPI users] Process termination problem

2007-08-20 Thread Sven Stork
Instead of doing dirty tricks with the library, you could try to register a
cleanup function with atexit().

Thanks,
  Sven 
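
A minimal sketch of that idea (only an illustration, not a tested solution;
it assumes the cleanup merely has to catch a rank that calls exit() between
MPI_Init and MPI_Finalize):

#include <stdlib.h>
#include <mpi.h>

/* Registered with atexit(); aborts the whole job if this process
 * exits without having called MPI_Finalize. */
static void mpi_cleanup(void)
{
    int finalized = 0;
    MPI_Finalized(&finalized);
    if (!finalized)
        MPI_Abort(MPI_COMM_WORLD, 1);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    atexit(mpi_cleanup);

    /* ... application code that may call exit() on error ... */

    MPI_Finalize();
    return 0;
}

If modifying the application is not an option, the same handler could in
principle be registered from the LD_PRELOADed library instead.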

On Friday 17 August 2007 19:59, Daniel Spångberg wrote:
> Dear George,
> 
> I think that the best way is to call MPI_Abort. However, this forces the  
> user to modify the code, which I already have suggested. But their  
> application is not calling exit directly, I merely wrote the simplest code  
> that demonstrates the problem. Their application is a Fortran program and  
> during file IO, when something bad happens, the fortran runtime (pgi)  
> calls exit (and sometimes _exit for some reason). The file IO is only done  
> in one process. I have told them to try to add ERR=lineno,END=lineno,  
> where the code at lineno calls MPI_Abort. This has not happened yet.  
> Nevertheless, openmpi does not terminate the application when one of  
> processes exits without MPI_Finalize, contrary to the content of mpirun  
> man-page. I have currently "solved" the problem by writing a .so that is  
> LD_PRELOAD:ed, checking whether MPI_Finalize is indeed called between  
> MPI_Init and exit/_exit. I'd rather not keep this "solution" for too long.  
> If it is indeed so that the mpirun man-page is wrong and the code right,  
> I'd rather push the proper error-handling solution.
> 
> Best regards
> Daniel Spångberg
> 
> 
> On Fri, 17 Aug 2007 18:25:17 +0200, George Bosilca   
> wrote:
> 
> > The MPI standard states that the correct way to abort/kill an MPI
> > application is using the MPI_Abort function. Except if you're doing
> > some kind of fault tolerance stuff, there is no reason to end one of
> > your MPI processes via exit.
> >
> >Thanks,
> >  george.
> >
> > On Aug 16, 2007, at 12:04 PM, Daniel Spångberg wrote:
> >
> >> Dear Open-MPI user list members,
> >>
> >> I am currently having a user with an application where one of the
> >> MPI-processes die, but the openmpi-system does not kill the rest of
> >> the
> >> application.
> >>
> >> Since the mpirun man page states the following I would expect it to
> >> take
> >> care of killing the application if a process exits without calling
> >> MPI_Finalize:
> >>
> >> Process Termination / Signal Handling
> >> During  the run of an MPI application, if any rank dies
> >> abnormally
> >> (either exiting before invoking MPI_FINALIZE, or dying as the
> >> result of a signal), mpirun will print out an error message
> >> and
> >> kill the rest of the MPI application.
> >>
> >> The following test program demonstrates the behaviour (program
> >> hangs until
> >> it is killed by the user or batch system):
> >>
> >> #include <stdio.h>
> >> #include <stdlib.h>
> >> #include <unistd.h>
> >> #include <mpi.h>
> >>
> >> #define RANK_DEATH 1
> >>
> >> int main(int argc, char **argv)
> >> {
> >>int rank;
> >>MPI_Init(&argc,&argv);
> >>MPI_Comm_rank(MPI_COMM_WORLD,&rank);
> >>
> >>sleep(10);
> >>if (rank==RANK_DEATH)
> >>  exit(1);
> >>sleep(10);
> >>MPI_Finalize();
> >>return 0;
> >> }
> >>
> >> I have tested this on openmpi 1.2.1 as well as the latest stable
> >> 1.2.3. I
> >> am on Linux x86_64.
> >>
> >> Is this a bug, or are there some flags I can use to force the
> >> mpirun (or
> >> orted, or...) to kill the whole MPI program when this happens?
> >>
> >> If one of the application processes die from a signal (I have
> >> tested SEGV
> >> and FPE) rather than just exiting the whole application is indeed
> >> killed.
> >>
> >> Best regards
> >> Daniel Spångberg
> >
> >
> >
> >
> 
> 
> 



Re: [OMPI users] openMPI on openBSD, anyone?

2007-08-16 Thread Sven Stork
On Wednesday 15 August 2007 18:13, Hor Meng Yoong wrote:
> Hi:
> 
> I want to use an MPI-like solution on openBSD i386 PCs to communicate with a
> central server running Solaris OS. I am wondering whether anyone has used openMPI on
> openBSD. If so, which version? What kind of porting issues were encountered?

There were some build problems with the 1.1 series, and most likely they are still
present in the 1.2 series (I don't know whether anybody has tried to compile 1.2 on
OpenBSD). You can find more details in the following ticket:

https://svn.open-mpi.org/trac/ompi/ticket/393

regards,
 Sven

> Regards
> Hor Meng, Yoong
> 


[OMPI users] using google-perftools for hunting memory leaks

2007-08-06 Thread Sven Stork
Dear all,

while hunting for memory leaks I found the Google performance tools quite
useful. The included memory manager has a feature for checking for memory
leaks. Unlike other tools, you can use this feature without any recompilation
and still get a nice call graph locating the allocation site of the leak (see
attachment). As it might also be interesting for other people, I wanted to
mention it. Here is the link to the homepage:

http://goog-perftools.sourceforge.net
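
As an illustration, a run could look roughly like this (the library path and
binary name are only placeholders; see the heap-checker documentation on the
page above for the exact details):

$ env LD_PRELOAD=/usr/lib/libtcmalloc.so HEAPCHECK=normal ./my_app

At exit the heap checker reports the leaked allocations and prints a pprof
command line that produces a call graph like the attached PDF.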

Cheers,
  Sven


pprof6154.0.pdf
Description: Adobe PDF document


Re: [OMPI users] Technical inquiry

2006-11-06 Thread Sven Stork
Hello Pablo.
On Saturday 04 November 2006 14:04, pgar...@eside.deusto.es wrote:
> 
> Hi, everydoby. Good afternoon.
> 
> I've just configured and installed openmpi-1.1.2 on a kubuntu
> GNU/Linux, and I'm now trying to compile the hello.c example, without
> results.

As George said, you are actually using MPICH. If you installed Open MPI as you said,
you also have to adapt the PATH and LD_LIBRARY_PATH environment variables
(see http://www.open-mpi.org/faq/).
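
For example, assuming Open MPI was installed under /opt/openmpi (an
illustrative prefix, adjust to your installation):

export PATH=/opt/openmpi/bin:$PATH
export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH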

Regards,
 Sven

> > root@kubuntu:/home/livestrong/mpi/test# uname -a
> > Linux kubuntu 2.6.15-23-386 #1 PREEMPT Tue May 23 13:49:40 UTC 2006 
> > i686 GNU/Linux
> 
> Hello.c
> ---
> #include "/usr/lib/mpich-mpd/include/mpi.h"

See George's mail.

> #include <stdio.h>
> int main (int argc, char** argv)
> {
> MPI_Init(&argc, &argv);
> printf("Hello word.\n");
> MPI_Finalize();
> return(0);
> }
> 
> The error that I'm finding is this:
> 
> root@kubuntu:/home/livestrong/mpi/prueba# mpirun -np 2 hello
> 0 - MPI_INIT : MPIRUN chose the wrong device ch_p4; program needs 
> device ch_p4mpd
> /usr/lib/mpich/bin/mpirun.ch_p4: line 243: 16625 Segmentation fault
> "/home/livestrong/mpi/prueba/hello" -p4pg "/home/livestrong/mpi/prueba/PI16545"
> -p4wd "/home/livestrong/mpi/prueba"
> 
> Does anybody know what it can be the problem?
> 
> Regards and thank you very much in advance.
> 
> Pablo.
> 
> PS: I'm sending the ompi_info output and the config.log to you.
> 
> Besides
> 


Re: [OMPI users] OpenMPI-1.1 virtual memory overhead

2006-08-25 Thread Sven Stork
Hello Miguel,

On Friday 25 August 2006 15:40, Miguel Figueiredo Mascarenhas Sousa Filipe 
wrote:
> Hi,
> 
> On 8/25/06, Sven Stork  wrote:
> >
> > Hello Miguel,
> >
> > this is caused by the shared memory mempool. By default this shared memory
> > mapping has a size of 512 MB. You can use the "mpool_sm_size" parameter to
> > reduce its size, e.g.
> >
> > mpirun -mca mpool_sm_size <size> ...
> 
> 
> 
> using
> mpirun -mca mpool_sm_size 0
> is acceptable?
> to what will it fall back? sockets? pipes? tcp? smoke signals?

0 will not work. But if you don't need shared memory communication you can
disable the sm btl like this:

mpirun -mca btl ^sm 
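
For example (an illustrative command line; keep the self btl so a process can
still send messages to itself):

mpirun -mca btl tcp,self -np 2 ./my_app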

Thanks,
Sven

> thank you very much for the fast answer.
> 
> Thanks,
> > Sven
> >
> > On Friday 25 August 2006 15:04, Miguel Figueiredo Mascarenhas Sousa Filipe
> > wrote:
> > > Hi there,
> > > I'm using openmpi-1.1 on a linux-amd64 machine and also a linux-32bit
> > x86
> > > chroot environment on that same machine.
> > > (distro is gentoo, compilers: gcc-4.1.1 and gcc-3.4.6)
> > >
> > > In both cases openmpi-1.1 shows a +/-400MB overhead in virtual memory
> > usage
> > > (virtual address space usage) for each MPI process.
> > >
> > > In my case this is quite troublesome because my application in 32bit
> > mode is
> > > counting on using the whole 4GB address space for the problem set size
> > and
> > > associated data.
> > > This means that I have a reduction in the size of the problems which it
> > can
> > > solve.
> > > (my application isn't 64bit safe yet, so I need to run in 32bit mode, and
> > use
> > > effectively the 4GB address space)
> > >
> > >
> > > Is there a way to tweak this overhead, by configuring openmpi to use
> > smaller
> > > buffers, or anything else ?
> > >
> > > I do not see this with mpich2.
> > >
> > > Best regards,
> > >
> > > --
> > > Miguel Sousa Filipe
> > >
> >
> 
> 
> 
> -- 
> Miguel Sousa Filipe
> 


Re: [OMPI users] OpenMPI-1.1 virtual memory overhead

2006-08-25 Thread Sven Stork
Hello Miguel,

this is caused by the shared memory mempool. By default this shared memory
mapping has a size of 512 MB. You can use the "mpool_sm_size" parameter to
reduce its size, e.g.

mpirun -mca mpool_sm_size <size> ...
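
For instance, to shrink the mapping to 64 MB (this assumes the value is
interpreted in bytes; "ompi_info --param mpool sm" shows the exact semantics
and the default):

mpirun -mca mpool_sm_size 67108864 -np 2 ./my_app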

Thanks,
Sven

On Friday 25 August 2006 15:04, Miguel Figueiredo Mascarenhas Sousa Filipe 
wrote:
> Hi there,
> I'm using openmpi-1.1 on a linux-amd64 machine and also a linux-32bit x86
> chroot environment on that same machine.
> (distro is gentoo, compilers: gcc-4.1.1 and gcc-3.4.6)
> 
> In both cases openmpi-1.1 shows a +/-400MB overhead in virtual memory usage
> (virtual address space usage) for each MPI process.
> 
> In my case this is quite troublesome because my application in 32bit mode is
> counting on using the whole 4GB address space for the problem set size and
> associated data.
> This means that I have a reduction in the size of the problems which it can
> solve.
> (my application isn't 64bit safe yet, so I need to run in 32bit mode, and use
> effectively the 4GB address space)
> 
> 
> Is there a way to tweak this overhead, by configuring openmpi to use smaller
> buffers, or anything else ?
> 
> I do not see this with mpich2.
> 
> Best regards,
> 
> -- 
> Miguel Sousa Filipe
> 


Re: [OMPI users] bug report: wrong reference in mpi.h to mpicxx.h

2006-07-19 Thread Sven Stork
Dear Paul,

this previously posted "tutorial" on how to build ParaView might be useful to
you:

http://www.open-mpi.org/community/lists/users/2006/05/1246.php
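
(A possible workaround, inferred from the paths in your report rather than
taken from that tutorial: either compile with the mpic++ wrapper, which adds
the required include directories itself, or pass the extra include directory
to your build by hand, e.g. -I/home/ph/local/openmpi/include/openmpi, so that
"ompi/mpi/cxx/mpicxx.h" resolves.)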

regards,
Sven

On Wednesday 19 July 2006 14:57, Paul Heinzlreiter wrote:
> Hi all,
> 
> I'm not sure whether this bug has already been reported/fixed (maybe in
> the v1.1.1 pre-release):
> 
> I've compiled and installed Open MPI Version 1.1 (stable), which worked
> well.
> 
> for configuring OpenMPI I used the commandline
> 
> ./configure --prefix=/home/ph/local/openmpi --disable-mpi-f77
> --disable-mpi-f99
> 
> since I don't need Fortran support.
> 
> Compiling and executing a simple MPI test program (in C) with Open MPI
> also worked well.
> 
> After that I tried to compile VTK (http://www.vtk.org) with MPI support
> using OpenMPI.
> 
> The compilation process issued the following error message:
> 
> /home/ph/local/openmpi/include/mpi.h:1757:33: ompi/mpi/cxx/mpicxx.h: No
> such file or directory
> 
> and indeed the location of the file mpicxx.h is
> /home/ph/local/openmpi/include/openmpi/ompi/mpi/cxx/mpicxx.h
> 
> and in mpi.h
> 
> it is stated
> 
> #if !defined(OMPI_SKIP_MPICXX) && OMPI_WANT_CXX_BINDINGS && !OMPI_BUILDING
> #if defined(__cplusplus) || defined(c_plusplus)
> #include "ompi/mpi/cxx/mpicxx.h"
> #endif
> #endif
> 
> so this would refer to the file
> 
> /home/ph/local/openmpi/include/ompi/mpi/cxx/mpicxx.h
> 
> as I see it.
> 
> so there is one subdirectory missing (openmpi) in the reference within
> mpi.h.
> 
> Regards,
> Paul Heinzlreiter
> 
>