Re: [OMPI users] 1.3.1 -rf rankfile behaviour ??

2009-04-14 Thread Geoffroy Pignot
Hi,

I agree that my examples are not very clear. What I want to do is to launch
a multi-executable application (masters-slaves) and benefit from processor
affinity.
Could you show me how to convert this command, using the -rf option (whatever
the affinity is)?

mpirun -n 1 -host r001n001 master.x options1  : -n 1 -host r001n002 master.x
options2 : -n 1 -host r001n001 slave.x options3 : -n 1 -host r001n002
slave.x options4

Thanks for your help

Geoffroy





>
> Message: 2
> Date: Sun, 12 Apr 2009 18:26:35 +0300
> From: Lenny Verkhovsky 
> Subject: Re: [OMPI users] 1.3.1 -rf rankfile behaviour ??
> To: Open MPI Users 
> Message-ID:
><453d39990904120826t2e1d1d33l7bb1fe3de65b5...@mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hi,
>
> The first "crash" is OK, since your rankfile has ranks 0 and 1 defined,
> while n=1, which means only rank 0 is present and can be allocated.
>
> NP must be >= the largest rank in rankfile.
>
> What exactly are you trying to do ?
>
> I tried to recreate your seqv but all I got was
>
> ~/work/svn/ompi/trunk/build_x86-64/install/bin/mpirun --hostfile hostfile.0
> -rf rankfile.0 -n 1 hostname : -rf rankfile.1 -n 1 hostname
> [witch19:30798] mca: base: component_find: paffinity "mca_paffinity_linux"
> uses an MCA interface that is not recognized (component MCA v1.0.0 !=
> supported MCA v2.0.0) -- ignored
> --
> It looks like opal_init failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during opal_init; some of which are due to configuration or
> environment problems. This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
>
>  opal_carto_base_select failed
>  --> Returned value -13 instead of OPAL_SUCCESS
> --
> [witch19:30798] [[INVALID],INVALID] ORTE_ERROR_LOG: Not found in file
> ../../orte/runtime/orte_init.c at line 78
> [witch19:30798] [[INVALID],INVALID] ORTE_ERROR_LOG: Not found in file
> ../../orte/orted/orted_main.c at line 344
> --
> A daemon (pid 11629) died unexpectedly with status 243 while attempting
> to launch so we are aborting.
>
> There may be more information reported by the environment (see above).
>
> This may be because the daemon was unable to find all the needed shared
> libraries on the remote node. You may set your LD_LIBRARY_PATH to have the
> location of the shared libraries on the remote nodes and this will
> automatically be forwarded to the remote nodes.
> --
> --
> mpirun noticed that the job aborted, but has no info as to the process
> that caused that situation.
> --
> mpirun: clean termination accomplished
>
>
> Lenny.
>
>
> On 4/10/09, Geoffroy Pignot  wrote:
> >
> > Hi ,
> >
> > I am currently testing the process affinity capabilities of openmpi and I
> > would like to know if the rankfile behaviour I will describe below is
> normal
> > or not ?
> >
> > cat hostfile.0
> > r011n002 slots=4
> > r011n003 slots=4
> >
> > cat rankfile.0
> > rank 0=r011n002 slot=0
> > rank 1=r011n003 slot=1
> >
> >
> >
> ##
> >
> > mpirun --hostfile hostfile.0 -rf rankfile.0 -n 2  hostname ### OK
> > r011n002
> > r011n003
> >
> >
> >
> ##
> > but
> > mpirun --hostfile hostfile.0 -rf rankfile.0 -n 1 hostname : -n 1 hostname
> > ### CRASHED
> > *
> >
>  --
> > Error, invalid rank (1) in the rankfile (rankfile.0)
> >
> --
> > [r011n002:25129] [[63976,0],0] ORTE_ERROR_LOG: Bad parameter in file
> > rmaps_rank_file.c at line 404
> > [r011n002:25129] [[63976,0],0] ORTE_ERROR_LOG: Bad parameter in file
> > base/rmaps_base_map_job.c at line 87
> > [r011n002:25129] [[63976,0],0] ORTE_ERROR_LOG: Bad parameter in file
> > base/plm_base_launch_support.c at line 77
> > [r011n002:25129] [[63976,0],0] ORTE_ERROR_LOG: Bad parameter in file
> > plm_rsh_module.c at line 985
> >
> --
> > A daemon (pid unknown) died unexpectedly on signal 1  while attempting to
> > launch so we are aborting.
> >
> > There may be more information reported by the environment (see above).
> >
> > This may be because the daemon was unable to find all the needed shared
> > libraries on the remote node. You may set your LD_LIBRARY_PATH

Re: [OMPI users] PGI Fortran pthread support

2009-04-14 Thread Åke Sandgren
On Mon, 2009-04-13 at 16:48 -0600, Orion Poplawski wrote:
> Seeing the following building openmpi 1.3.1 on CentOS 5.3 with PGI pgf90 
> 8.0-5 fortran compiler:
> checking for PTHREAD_MUTEX_ERRORCHECK_NP... yes
> checking for PTHREAD_MUTEX_ERRORCHECK... yes
> checking for working POSIX threads package... no

> Is there any way to get the PGI Fortran compiler to support threads for 
> openmpi?

I recommend adding the attached pthread.h into pgi's internal include
dir.
The pthread.h in newer distros is VERY VERY GCC-centric and when using
any other compiler it very often fails to do the "right" thing.

This pthread.h sets needed GCC-isms before parsing the real pthread.h.

At least we haven't had any problems with getting openmpi and pgi to
work correctly together since.
(I found this problem when building openmpi 1.2.something)

-- 
Ake Sandgren, HPC2N, Umea University, S-90187 Umea, Sweden
Internet: a...@hpc2n.umu.se   Phone: +46 90 7866134 Fax: +46 90 7866126
Mobile: +46 70 7716134 WWW: http://www.hpc2n.umu.se

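/*
 * Wrapper pthread.h (the attachment referred to above): it provides a
 * fallback __builtin_expect macro that simply drops the hint, temporarily
 * defines __USE_GNU and __GNUC__ so that glibc's pthread.h exposes what a
 * GCC build would see, pulls in the real header via #include_next, and
 * then undoes the temporary definitions.
 */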
#if ! defined(__builtin_expect)
# define __builtin_expect(expr, expected) (expr)
#endif

#if ! defined(__USE_GNU)
#define __USE_GNU
#define __PGI_USE_GNU
#endif

#if ! defined(__GNUC__)
#define __GNUC__ 2
#define __PGI_GNUC
#endif

#include_next <pthread.h>

#if defined(__PGI_USE_GNU)
#undef __USE_GNU
#endif

#if defined(__PGI_GNUC)
#undef __GNUC__
#endif


Re: [OMPI users] Problem with running openMPI program

2009-04-14 Thread Ankush Kaul
Finally, after specifying the hostfiles, the cluster is working fine. We
downloaded a few benchmarking programs, but I would like to know if there is
any GUI-based benchmarking software, so that it is easier to demonstrate the
working of our cluster when we display it.
Regards
Ankush


[OMPI users] XLF and 1.3.1

2009-04-14 Thread Jean-Michel Beuken

Hello,

I'm trying to build 1.3.1 under  IBM Power5 + SLES 9.1 + XLF 9.1...

after some searching of the FAQ and Google, here is my configure:

export CC="/opt/ibmcmp/vac/7.0/bin/xlc"
export CXX="/opt/ibmcmp/vacpp/7.0/bin/xlc++"
export CFLAGS="-O2 -q64 -qmaxmem=-1"
#
export F77="/opt/ibmcmp/xlf/9.1/bin/xlf"
export FFLAGS="-O2 -q64 -qmaxmem=-1"
export FC="/opt/ibmcmp/xlf/9.1/bin/xlf90"
export FCFLAGS="-O2 -q64 -qmaxmem=-1"
#
export LDFLAGS="-q64"
#
./configure --prefix=/usr/local/openmpi_1.3.1 \
  --disable-ipv6 \
  --enable-mpi-f77 --enable-mpi-f90 \
  --disable-mpi-profile \
  --without-xgrid \
  --enable-static --disable-shared \
  --disable-heterogeneous \
  --enable-contrib-no-build=libnbc,vt \
  --enable-mca-no-build=maffinity,btl-portals \
  --disable-mpi-cxx --disable-mpi-cxx-seek



there is a problem with "multiple definition"...

any advice?

thanks

jmb

--
make[2]: Entering directory 
`/usr/local/src/openmpi-1.3.1/opal/tools/wrappers'
/bin/sh ../../../libtool --tag=CC   --mode=link 
/opt/ibmcmp/vac/7.0/bin/xlc  -DNDEBUG -O2 -q64 -qmaxmem=-1   
-export-dynamic -q64  -o opal_wrapper opal_wrapper.o 
../../../opal/libopen-pal.la -lnsl -lutil  -lpthread
libtool: link: /opt/ibmcmp/vac/7.0/bin/xlc -DNDEBUG -O2 -q64 -qmaxmem=-1 
-q64 -o opal_wrapper opal_wrapper.o -Wl,--export-dynamic  
../../../opal/.libs/libopen-pal.a -ldl -lnsl -lutil -lpthread
../../../opal/.libs/libopen-pal.a(libltdlc_la-lt__alloc.o)(.opd+0x18): 
In function `argz_next':

: multiple definition of `argz_next'
../../../opal/.libs/libopen-pal.a(libltdlc_la-ltdl.o)(.opd+0x528): first 
defined here
../../../opal/.libs/libopen-pal.a(libltdlc_la-lt__alloc.o)(.text+0x60): 
In function `.argz_next':

: multiple definition of `.argz_next'
../../../opal/.libs/libopen-pal.a(libltdlc_la-ltdl.o)(.text+0x4760): 
first defined here
../../../opal/.libs/libopen-pal.a(libltdlc_la-lt__alloc.o)(.opd+0x30): 
In function `__argz_next':

: multiple definition of `__argz_next'
../../../opal/.libs/libopen-pal.a(libltdlc_la-ltdl.o)(.opd+0x540): first 
defined here
../../../opal/.libs/libopen-pal.a(libltdlc_la-lt__alloc.o)(.text+0x80): 
In function `.__argz_next':

: multiple definition of `.__argz_next'
../../../opal/.libs/libopen-pal.a(libltdlc_la-ltdl.o)(.text+0x4780): 
first defined here
../../../opal/.libs/libopen-pal.a(libltdlc_la-preopen.o)(.opd+0x108): In 
function `argz_next':

: multiple definition of `argz_next'
../../../opal/.libs/libopen-pal.a(libltdlc_la-ltdl.o)(.opd+0x528): first 
defined here
../../../opal/.libs/libopen-pal.a(libltdlc_la-preopen.o)(.text+0x860): 
In function `.argz_next':

: multiple definition of `.argz_next'
../../../opal/.libs/libopen-pal.a(libltdlc_la-ltdl.o)(.text+0x4760): 
first defined here
/usr/bin/ld: Warning: size of symbol `.argz_next' changed from 20 in 
../../../opal/.libs/libopen-pal.a(libltdlc_la-preopen.o) to 60 in 
../../../opal/.libs/libopen-pal.a(libltdlc_la-preopen.o)
../../../opal/.libs/libopen-pal.a(libltdlc_la-preopen.o)(.opd+0x120): In 
function `__argz_next':

: multiple definition of `__argz_next'
../../../opal/.libs/libopen-pal.a(libltdlc_la-ltdl.o)(.opd+0x540): first 
defined here
../../../opal/.libs/libopen-pal.a(libltdlc_la-preopen.o)(.text+0x8a0): 
In function `.__argz_next':

: multiple definition of `.__argz_next'
../../../opal/.libs/libopen-pal.a(libltdlc_la-ltdl.o)(.text+0x4780): 
first defined here
../../../opal/.libs/libopen-pal.a(dlopen.o)(.opd+0x78): In function 
`argz_next':

: multiple definition of `argz_next'
../../../opal/.libs/libopen-pal.a(libltdlc_la-ltdl.o)(.opd+0x528): first 
defined here
../../../opal/.libs/libopen-pal.a(dlopen.o)(.text+0x240): In function 
`.argz_next':

: multiple definition of `.argz_next'
../../../opal/.libs/libopen-pal.a(libltdlc_la-ltdl.o)(.text+0x4760): 
first defined here
../../../opal/.libs/libopen-pal.a(dlopen.o)(.opd+0x90): In function 
`__argz_next':

: multiple definition of `__argz_next'
../../../opal/.libs/libopen-pal.a(libltdlc_la-ltdl.o)(.opd+0x540): first 
defined here
../../../opal/.libs/libopen-pal.a(dlopen.o)(.text+0x280): In function 
`.__argz_next':

: multiple definition of `.__argz_next'
../../../opal/.libs/libopen-pal.a(libltdlc_la-ltdl.o)(.text+0x4780): 
first defined here
../../../opal/.libs/libopen-pal.a(libltdlc_la-lt_error.o)(.opd+0x78): In 
function `argz_next':

: multiple definition of `argz_next'
../../../opal/.libs/libopen-pal.a(libltdlc_la-ltdl.o)(.opd+0x528): first 
defined here
../../../opal/.libs/libopen-pal.a(libltdlc_la-lt_error.o)(.text+0x260): 
In function `.argz_next':

: multiple definition of `.argz_next'
../../../opal/.libs/libopen-pal.a(libltdlc_la-ltdl.o)(.text+0x4760): 
first defined here
../../../opal/.libs/libopen-pal.a(libltdlc_la-lt_error.o)(.opd+0x90): In 
function `__argz_next':

: multiple definition of `__argz_next'
../../../opal/.libs/libopen-pal.a(libltdlc_la-ltdl.o)(.opd+0x540): first 
defined here
../../../opal/.libs/libopen-pal.a

Re: [OMPI users] XLF and 1.3.1

2009-04-14 Thread Nysal Jan
Can you try adding --disable-dlopen to the configure command line
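
I.e., keeping the rest of your options the same, something like:

   ./configure --prefix=/usr/local/openmpi_1.3.1 --disable-dlopen \
       ... (the rest of the original options unchanged)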

--Nysal

On Tue, 2009-04-14 at 10:19 +0200, Jean-Michel Beuken wrote:
> Hello,
> 
> I'm trying to build 1.3.1 under  IBM Power5 + SLES 9.1 + XLF 9.1...
> 
> after some searches on FAQ and Google, my configure :
> 
> export CC="/opt/ibmcmp/vac/7.0/bin/xlc"
> export CXX="/opt/ibmcmp/vacpp/7.0/bin/xlc++"
> export CFLAGS="-O2 -q64 -qmaxmem=-1"
> #
> export F77="/opt/ibmcmp/xlf/9.1/bin/xlf"
> export FFLAGS="-O2 -q64 -qmaxmem=-1"
> export FC="/opt/ibmcmp/xlf/9.1/bin/xlf90"
> export FCFLAGS="-O2 -q64 -qmaxmem=-1"
> #
> export LDFLAGS="-q64"
> #
> ./configure --prefix=/usr/local/openmpi_1.3.1 \
>--disable-ipv6 \
>--enable-mpi-f77 --enable-mpi-f90 \
>--disable-mpi-profile \
>--without-xgrid \
>--enable-static --disable-shared \
>--disable-heterogeneous \
>--enable-contrib-no-build=libnbc,vt \
>--enable-mca-no-build=maffinity,btl-portals \
>--disable-mpi-cxx --disable-mpi-cxx-seek
> 
> 
> 
> there is a problem of "multiple definition"...
> 
> any advices ?
> 
> thanks
> 
> jmb
> 
> --
> make[2]: Entering directory 
> `/usr/local/src/openmpi-1.3.1/opal/tools/wrappers'
> /bin/sh ../../../libtool --tag=CC   --mode=link 
> /opt/ibmcmp/vac/7.0/bin/xlc  -DNDEBUG -O2 -q64 -qmaxmem=-1   
> -export-dynamic -q64  -o opal_wrapper opal_wrapper.o 
> ../../../opal/libopen-pal.la -lnsl -lutil  -lpthread
> libtool: link: /opt/ibmcmp/vac/7.0/bin/xlc -DNDEBUG -O2 -q64 -qmaxmem=-1 
> -q64 -o opal_wrapper opal_wrapper.o -Wl,--export-dynamic  
> ../../../opal/.libs/libopen-pal.a -ldl -lnsl -lutil -lpthread
> ../../../opal/.libs/libopen-pal.a(libltdlc_la-lt__alloc.o)(.opd+0x18): 
> In function `argz_next':
> : multiple definition of `argz_next'
> ../../../opal/.libs/libopen-pal.a(libltdlc_la-ltdl.o)(.opd+0x528): first 
> defined here
> ../../../opal/.libs/libopen-pal.a(libltdlc_la-lt__alloc.o)(.text+0x60): 
> In function `.argz_next':
> : multiple definition of `.argz_next'
> ../../../opal/.libs/libopen-pal.a(libltdlc_la-ltdl.o)(.text+0x4760): 
> first defined here
> ../../../opal/.libs/libopen-pal.a(libltdlc_la-lt__alloc.o)(.opd+0x30): 
> In function `__argz_next':
> : multiple definition of `__argz_next'
> ../../../opal/.libs/libopen-pal.a(libltdlc_la-ltdl.o)(.opd+0x540): first 
> defined here
> ../../../opal/.libs/libopen-pal.a(libltdlc_la-lt__alloc.o)(.text+0x80): 
> In function `.__argz_next':
> : multiple definition of `.__argz_next'
> ../../../opal/.libs/libopen-pal.a(libltdlc_la-ltdl.o)(.text+0x4780): 
> first defined here
> ../../../opal/.libs/libopen-pal.a(libltdlc_la-preopen.o)(.opd+0x108): In 
> function `argz_next':
> : multiple definition of `argz_next'
> ../../../opal/.libs/libopen-pal.a(libltdlc_la-ltdl.o)(.opd+0x528): first 
> defined here
> ../../../opal/.libs/libopen-pal.a(libltdlc_la-preopen.o)(.text+0x860): 
> In function `.argz_next':
> : multiple definition of `.argz_next'
> ../../../opal/.libs/libopen-pal.a(libltdlc_la-ltdl.o)(.text+0x4760): 
> first defined here
> /usr/bin/ld: Warning: size of symbol `.argz_next' changed from 20 in 
> ../../../opal/.libs/libopen-pal.a(libltdlc_la-preopen.o) to 60 in 
> ../../../opal/.libs/libopen-pal.a(libltdlc_la-preopen.o)
> ../../../opal/.libs/libopen-pal.a(libltdlc_la-preopen.o)(.opd+0x120): In 
> function `__argz_next':
> : multiple definition of `__argz_next'
> ../../../opal/.libs/libopen-pal.a(libltdlc_la-ltdl.o)(.opd+0x540): first 
> defined here
> ../../../opal/.libs/libopen-pal.a(libltdlc_la-preopen.o)(.text+0x8a0): 
> In function `.__argz_next':
> : multiple definition of `.__argz_next'
> ../../../opal/.libs/libopen-pal.a(libltdlc_la-ltdl.o)(.text+0x4780): 
> first defined here
> ../../../opal/.libs/libopen-pal.a(dlopen.o)(.opd+0x78): In function 
> `argz_next':
> : multiple definition of `argz_next'
> ../../../opal/.libs/libopen-pal.a(libltdlc_la-ltdl.o)(.opd+0x528): first 
> defined here
> ../../../opal/.libs/libopen-pal.a(dlopen.o)(.text+0x240): In function 
> `.argz_next':
> : multiple definition of `.argz_next'
> ../../../opal/.libs/libopen-pal.a(libltdlc_la-ltdl.o)(.text+0x4760): 
> first defined here
> ../../../opal/.libs/libopen-pal.a(dlopen.o)(.opd+0x90): In function 
> `__argz_next':
> : multiple definition of `__argz_next'
> ../../../opal/.libs/libopen-pal.a(libltdlc_la-ltdl.o)(.opd+0x540): first 
> defined here
> ../../../opal/.libs/libopen-pal.a(dlopen.o)(.text+0x280): In function 
> `.__argz_next':
> : multiple definition of `.__argz_next'
> ../../../opal/.libs/libopen-pal.a(libltdlc_la-ltdl.o)(.text+0x4780): 
> first defined here
> ../../../opal/.libs/libopen-pal.a(libltdlc_la-lt_error.o)(.opd+0x78): In 
> function `argz_next':
> : multiple definition of `argz_next'
> ../../../opal/.libs/libopen-pal.a(libltdlc_la-ltdl.o)(.opd+0x528): first 
> defined here
> ../../../opal/.libs/libopen-pal.a(libltdlc_la-lt_error.o)(.text+0x260): 
> In function `.argz_next':
> : multiple defin

Re: [OMPI users] Problem with running openMPI program

2009-04-14 Thread Jeff Squyres

On Apr 14, 2009, at 2:57 AM, Ankush Kaul wrote:

Finally, after mentioning the hostfiles the cluster is working fine.  
We downloaded few benchmarking softwares but i would like to know if  
there is any GUI based benchmarking software so that its easier to  
demonstrate the working of our cluster while displaying our cluster.



There are a few, but most dump out data that can either be directly  
plotted and/or parsed and then plotted using your favorite plotting  
software.


--
Jeff Squyres
Cisco Systems



Re: [OMPI users] 1.3.1 -rf rankfile behaviour ??

2009-04-14 Thread Ralph Castain
The rankfile cuts across the entire job - it isn't applied on an  
app_context basis. So the ranks in your rankfile must correspond to  
the eventual rank of each process in the cmd line.


Unfortunately, that means you have to count ranks. In your case, you  
only have four, so that makes life easier. Your rankfile would look  
something like this:


rank 0=r001n001 slot=0
rank 1=r001n002 slot=1
rank 2=r001n001 slot=1
rank 3=r001n002 slot=2
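
For example, an untested sketch that combines such a rankfile (saved, say,
as rankfile.app - the name is only an example) with your original command
line would be:

mpirun -rf rankfile.app \
    -n 1 -host r001n001 master.x options1 : \
    -n 1 -host r001n002 master.x options2 : \
    -n 1 -host r001n001 slave.x options3 : \
    -n 1 -host r001n002 slave.x options4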

HTH
Ralph

On Apr 14, 2009, at 12:19 AM, Geoffroy Pignot wrote:


Hi,

I agree that my examples are not very clear. What I want to do is to  
launch a multiexes application (masters-slaves) and benefit from the  
processor affinity.
Could you show me how to convert this command , using -rf option  
(whatever the affinity is)


mpirun -n 1 -host r001n001 master.x options1  : -n 1 -host r001n002  
master.x options2 : -n 1 -host r001n001 slave.x options3 : -n 1 - 
host r001n002 slave.x options4


Thanks for your help

Geoffroy





Message: 2
Date: Sun, 12 Apr 2009 18:26:35 +0300
From: Lenny Verkhovsky 
Subject: Re: [OMPI users] 1.3.1 -rf rankfile behaviour ??
To: Open MPI Users 
Message-ID:
   <453d39990904120826t2e1d1d33l7bb1fe3de65b5...@mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"

Hi,

The first "crash" is OK, since your rankfile has ranks 0 and 1  
defined,

while n=1, which means only rank 0 is present and can be allocated.

NP must be >= the largest rank in rankfile.

What exactly are you trying to do ?

I tried to recreate your seqv but all I got was

~/work/svn/ompi/trunk/build_x86-64/install/bin/mpirun --hostfile  
hostfile.0

-rf rankfile.0 -n 1 hostname : -rf rankfile.1 -n 1 hostname
[witch19:30798] mca: base: component_find: paffinity  
"mca_paffinity_linux"

uses an MCA interface that is not recognized (component MCA v1.0.0 !=
supported MCA v2.0.0) -- ignored
--
It looks like opal_init failed for some reason; your parallel  
process is

likely to abort. There are many reasons that a parallel process can
fail during opal_init; some of which are due to configuration or
environment problems. This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

 opal_carto_base_select failed
 --> Returned value -13 instead of OPAL_SUCCESS
--
[witch19:30798] [[INVALID],INVALID] ORTE_ERROR_LOG: Not found in file
../../orte/runtime/orte_init.c at line 78
[witch19:30798] [[INVALID],INVALID] ORTE_ERROR_LOG: Not found in file
../../orte/orted/orted_main.c at line 344
--
A daemon (pid 11629) died unexpectedly with status 243 while  
attempting

to launch so we are aborting.

There may be more information reported by the environment (see above).

This may be because the daemon was unable to find all the needed  
shared
libraries on the remote node. You may set your LD_LIBRARY_PATH to  
have the

location of the shared libraries on the remote nodes and this will
automatically be forwarded to the remote nodes.
--
--
mpirun noticed that the job aborted, but has no info as to the process
that caused that situation.
--
mpirun: clean termination accomplished


Lenny.


On 4/10/09, Geoffroy Pignot  wrote:
>
> Hi ,
>
> I am currently testing the process affinity capabilities of  
openmpi and I
> would like to know if the rankfile behaviour I will describe below  
is normal

> or not ?
>
> cat hostfile.0
> r011n002 slots=4
> r011n003 slots=4
>
> cat rankfile.0
> rank 0=r011n002 slot=0
> rank 1=r011n003 slot=1
>
>
>  
##

>
> mpirun --hostfile hostfile.0 -rf rankfile.0 -n 2  hostname ### OK
> r011n002
> r011n003
>
>
>  
##

> but
> mpirun --hostfile hostfile.0 -rf rankfile.0 -n 1 hostname : -n 1  
hostname

> ### CRASHED
> *
>   
--

> Error, invalid rank (1) in the rankfile (rankfile.0)
>  
--

> [r011n002:25129] [[63976,0],0] ORTE_ERROR_LOG: Bad parameter in file
> rmaps_rank_file.c at line 404
> [r011n002:25129] [[63976,0],0] ORTE_ERROR_LOG: Bad parameter in file
> base/rmaps_base_map_job.c at line 87
> [r011n002:25129] [[63976,0],0] ORTE_ERROR_LOG: Bad parameter in file
> base/plm_base_launch_support.c at line 77
> [r011n002:25129] [[63976,0],0] ORTE_ERROR_LOG: Bad parameter in file
> plm_rsh_module.c at line 985
>  
---

Re: [OMPI users] 1.3.1 -rf rankfile behaviour ??

2009-04-14 Thread Geoffroy Pignot
I agree with you Ralph, and that's what I expect from Open MPI, but my
second example shows that it's not working:

cat hostfile.0
   r011n002 slots=4
   r011n003 slots=4

 cat rankfile.0
rank 0=r011n002 slot=0
rank 1=r011n003 slot=1

mpirun --hostfile hostfile.0 -rf rankfile.0 -n 1 hostname : -n 1 hostname
### CRASHED

> > Error, invalid rank (1) in the rankfile (rankfile.0)
> > >
> >
> --
> > > [r011n002:25129] [[63976,0],0] ORTE_ERROR_LOG: Bad parameter in file
> > > rmaps_rank_file.c at line 404
> > > [r011n002:25129] [[63976,0],0] ORTE_ERROR_LOG: Bad parameter in file
> > > base/rmaps_base_map_job.c at line 87
> > > [r011n002:25129] [[63976,0],0] ORTE_ERROR_LOG: Bad parameter in file
> > > base/plm_base_launch_support.c at line 77
> > > [r011n002:25129] [[63976,0],0] ORTE_ERROR_LOG: Bad parameter in file
> > > plm_rsh_module.c at line 985
> > >
> >
> --
> > > A daemon (pid unknown) died unexpectedly on signal 1  while
> > attempting to
> > > launch so we are aborting.
> > >
> > > There may be more information reported by the environment (see
> > above).
> > >
> > > This may be because the daemon was unable to find all the needed
> > shared
> > > libraries on the remote node. You may set your LD_LIBRARY_PATH to
> > have the
> > > location of the shared libraries on the remote nodes and this will
> > > automatically be forwarded to the remote nodes.
> > >
> >
> --
> > >
> >
> --
> > > orterun noticed that the job aborted, but has no info as to the
> > process
> > > that caused that situation.
> > >
> >
> --
> > > orterun: clean termination accomplished




>
> Message: 4
> Date: Tue, 14 Apr 2009 06:55:58 -0600
> From: Ralph Castain 
> Subject: Re: [OMPI users] 1.3.1 -rf rankfile behaviour ??
> To: Open MPI Users 
> Message-ID: 
> Content-Type: text/plain; charset="us-ascii"; Format="flowed";
>DelSp="yes"
>
> The rankfile cuts across the entire job - it isn't applied on an
> app_context basis. So the ranks in your rankfile must correspond to
> the eventual rank of each process in the cmd line.
>
> Unfortunately, that means you have to count ranks. In your case, you
> only have four, so that makes life easier. Your rankfile would look
> something like this:
>
> rank 0=r001n001 slot=0
> rank 1=r001n002 slot=1
> rank 2=r001n001 slot=1
> rank 3=r001n002 slot=2
>
> HTH
> Ralph
>
> On Apr 14, 2009, at 12:19 AM, Geoffroy Pignot wrote:
>
> > Hi,
> >
> > I agree that my examples are not very clear. What I want to do is to
> > launch a multiexes application (masters-slaves) and benefit from the
> > processor affinity.
> > Could you show me how to convert this command , using -rf option
> > (whatever the affinity is)
> >
> > mpirun -n 1 -host r001n001 master.x options1  : -n 1 -host r001n002
> > master.x options2 : -n 1 -host r001n001 slave.x options3 : -n 1 -
> > host r001n002 slave.x options4
> >
> > Thanks for your help
> >
> > Geoffroy
> >
> >
> >
> >
> >
> > Message: 2
> > Date: Sun, 12 Apr 2009 18:26:35 +0300
> > From: Lenny Verkhovsky 
> > Subject: Re: [OMPI users] 1.3.1 -rf rankfile behaviour ??
> > To: Open MPI Users 
> > Message-ID:
> ><453d39990904120826t2e1d1d33l7bb1fe3de65b5...@mail.gmail.com>
> > Content-Type: text/plain; charset="iso-8859-1"
> >
> > Hi,
> >
> > The first "crash" is OK, since your rankfile has ranks 0 and 1
> > defined,
> > while n=1, which means only rank 0 is present and can be allocated.
> >
> > NP must be >= the largest rank in rankfile.
> >
> > What exactly are you trying to do ?
> >
> > I tried to recreate your seqv but all I got was
> >
> > ~/work/svn/ompi/trunk/build_x86-64/install/bin/mpirun --hostfile
> > hostfile.0
> > -rf rankfile.0 -n 1 hostname : -rf rankfile.1 -n 1 hostname
> > [witch19:30798] mca: base: component_find: paffinity
> > "mca_paffinity_linux"
> > uses an MCA interface that is not recognized (component MCA v1.0.0 !=
> > supported MCA v2.0.0) -- ignored
> >
> --
> > It looks like opal_init failed for some reason; your parallel
> > process is
> > likely to abort. There are many reasons that a parallel process can
> > fail during opal_init; some of which are due to configuration or
> > environment problems. This failure appears to be an internal failure;
> > here's some additional information (which may only be relevant to an
> > Open MPI developer):
> >
> >  opal_carto_base_select failed
> >  --> Returned value -13 instead of OPAL_SUCCESS
> >
> --
> > [witch19:30798] [[INVALID],INVALID] ORTE_ERROR_LOG: Not found in file
> > ../../orte/runtime/

Re: [OMPI users] Problem with running openMPI program

2009-04-14 Thread Eugene Loh

Ankush Kaul wrote:

Finally, after mentioning the hostfiles the cluster is working fine. 
We downloaded few benchmarking softwares but i would like to know if 
there is any GUI based benchmarking software so that its easier to 
demonstrate the working of our cluster while displaying our cluster.


I'm confused what you're looking for here, but thought I'd venture a 
suggestion.


There are GUI-based performance analysis and tracing tools.  E.g., run a 
program, [[semi-]automatically] collect performance data, run a 
GUI-based analysis tool on the data, visualize what happened on your 
cluster.  Would this suit your purposes?


If so, there are a variety of tools out there you could try.  Some are 
platform-specific or cost money.  Some are widely/freely available.  
Examples of these tools include Intel Trace Analyzer, Jumpshot, Vampir, 
TAU, etc.  I do know that Sun Studio (Performance Analyzer) is available 
via free download on x86 and SPARC and Linux and Solaris and works with 
OMPI.  Possibly the same with Jumpshot.  VampirTrace instrumentation is 
already in OMPI, but then you need to figure out the analysis-tool 
part.  (I think the Vampir GUI tool requires a license, but I'm not 
sure.  Maybe you can convert to TAU, which is probably available for 
free download.)


Anyhow, I don't even know if that sort of thing fits your requirements.  
Just an idea.


Re: [OMPI users] 1.3.1 -rf rankfile behaviour ??

2009-04-14 Thread Ralph Castain

Ah now, I didn't say it -worked-, did I? :-)

Clearly a bug exists in the program. I'll try to take a look at it (if  
Lenny doesn't get to it first), but it won't be until later in the week.


On Apr 14, 2009, at 7:18 AM, Geoffroy Pignot wrote:

I agree with you Ralph , and that 's what I expect from openmpi but  
my second example shows that it's not working


cat hostfile.0
   r011n002 slots=4
   r011n003 slots=4

 cat rankfile.0
rank 0=r011n002 slot=0
rank 1=r011n003 slot=1

mpirun --hostfile hostfile.0 -rf rankfile.0 -n 1 hostname : -n 1  
hostname

### CRASHED

> > Error, invalid rank (1) in the rankfile (rankfile.0)
> >
>  
--
> > [r011n002:25129] [[63976,0],0] ORTE_ERROR_LOG: Bad parameter in  
file

> > rmaps_rank_file.c at line 404
> > [r011n002:25129] [[63976,0],0] ORTE_ERROR_LOG: Bad parameter in  
file

> > base/rmaps_base_map_job.c at line 87
> > [r011n002:25129] [[63976,0],0] ORTE_ERROR_LOG: Bad parameter in  
file

> > base/plm_base_launch_support.c at line 77
> > [r011n002:25129] [[63976,0],0] ORTE_ERROR_LOG: Bad parameter in  
file

> > plm_rsh_module.c at line 985
> >
>  
--

> > A daemon (pid unknown) died unexpectedly on signal 1  while
> attempting to
> > launch so we are aborting.
> >
> > There may be more information reported by the environment (see
> above).
> >
> > This may be because the daemon was unable to find all the needed
> shared
> > libraries on the remote node. You may set your LD_LIBRARY_PATH to
> have the
> > location of the shared libraries on the remote nodes and this will
> > automatically be forwarded to the remote nodes.
> >
>  
--

> >
>  
--

> > orterun noticed that the job aborted, but has no info as to the
> process
> > that caused that situation.
> >
>  
--

> > orterun: clean termination accomplished



Message: 4
Date: Tue, 14 Apr 2009 06:55:58 -0600
From: Ralph Castain 
Subject: Re: [OMPI users] 1.3.1 -rf rankfile behaviour ??
To: Open MPI Users 
Message-ID: 
Content-Type: text/plain; charset="us-ascii"; Format="flowed";
   DelSp="yes"

The rankfile cuts across the entire job - it isn't applied on an
app_context basis. So the ranks in your rankfile must correspond to
the eventual rank of each process in the cmd line.

Unfortunately, that means you have to count ranks. In your case, you
only have four, so that makes life easier. Your rankfile would look
something like this:

rank 0=r001n001 slot=0
rank 1=r001n002 slot=1
rank 2=r001n001 slot=1
rank 3=r001n002 slot=2

HTH
Ralph

On Apr 14, 2009, at 12:19 AM, Geoffroy Pignot wrote:

> Hi,
>
> I agree that my examples are not very clear. What I want to do is to
> launch a multiexes application (masters-slaves) and benefit from the
> processor affinity.
> Could you show me how to convert this command , using -rf option
> (whatever the affinity is)
>
> mpirun -n 1 -host r001n001 master.x options1  : -n 1 -host r001n002
> master.x options2 : -n 1 -host r001n001 slave.x options3 : -n 1 -
> host r001n002 slave.x options4
>
> Thanks for your help
>
> Geoffroy
>
>
>
>
>
> Message: 2
> Date: Sun, 12 Apr 2009 18:26:35 +0300
> From: Lenny Verkhovsky 
> Subject: Re: [OMPI users] 1.3.1 -rf rankfile behaviour ??
> To: Open MPI Users 
> Message-ID:
><453d39990904120826t2e1d1d33l7bb1fe3de65b5...@mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hi,
>
> The first "crash" is OK, since your rankfile has ranks 0 and 1
> defined,
> while n=1, which means only rank 0 is present and can be allocated.
>
> NP must be >= the largest rank in rankfile.
>
> What exactly are you trying to do ?
>
> I tried to recreate your seqv but all I got was
>
> ~/work/svn/ompi/trunk/build_x86-64/install/bin/mpirun --hostfile
> hostfile.0
> -rf rankfile.0 -n 1 hostname : -rf rankfile.1 -n 1 hostname
> [witch19:30798] mca: base: component_find: paffinity
> "mca_paffinity_linux"
> uses an MCA interface that is not recognized (component MCA  
v1.0.0 !=

> supported MCA v2.0.0) -- ignored
>  
--

> It looks like opal_init failed for some reason; your parallel
> process is
> likely to abort. There are many reasons that a parallel process can
> fail during opal_init; some of which are due to configuration or
> environment problems. This failure appears to be an internal  
failure;

> here's some additional information (which may only be relevant to an
> Open MPI developer):
>
>  opal_carto_base_select failed
>  --> Returned value -13 instead of OPAL_SUCCESS
>  
--
> [witch19:30798] [[INVALID],INVALID] ORTE_ERROR_LOG: Not fou

Re: [OMPI users] PGI Fortran pthread support

2009-04-14 Thread Prentice Bisbal
Orion,

I have no trouble getting thread support during configure with PGI 8.0-3

Are there any other compilers in your path before the PGI compilers?
Even if the PGI compilers come first, try specifying the PGI compilers
explicitly with these environment variables (bash syntax shown):

export CC=pgcc
export CXX=pgCC
export F77=pgf77
export FC=pgf90

also check the value of CPPFLAGS and LDFLAGS, and make sure they are
correct for your PGI compilers.

--
Prentice

Orion Poplawski wrote:
> Seeing the following building openmpi 1.3.1 on CentOS 5.3 with PGI pgf90
> 8.0-5 fortran compiler:
> 
> checking if C compiler and POSIX threads work with -Kthread... no
> checking if C compiler and POSIX threads work with -kthread... no
> checking if C compiler and POSIX threads work with -pthread... yes
> checking if C++ compiler and POSIX threads work with -Kthread... no
> checking if C++ compiler and POSIX threads work with -kthread... no
> checking if C++ compiler and POSIX threads work with -pthread... yes
> checking if F77 compiler and POSIX threads work with -Kthread... no
> checking if F77 compiler and POSIX threads work with -kthread... no
> checking if F77 compiler and POSIX threads work with -pthread... no
> checking if F77 compiler and POSIX threads work with -pthreads... no
> checking if F77 compiler and POSIX threads work with -mt... no
> checking if F77 compiler and POSIX threads work with -mthreads... no
> checking if F77 compiler and POSIX threads work with -lpthreads... no
> checking if F77 compiler and POSIX threads work with -llthread... no
> checking if F77 compiler and POSIX threads work with -lpthread... no
> checking for PTHREAD_MUTEX_ERRORCHECK_NP... yes
> checking for PTHREAD_MUTEX_ERRORCHECK... yes
> checking for working POSIX threads package... no
> checking if C compiler and Solaris threads work... no
> checking if C++ compiler and Solaris threads work... no
> checking if F77 compiler and Solaris threads work... no
> checking for working Solaris threads package... no
> checking for type of thread support... none found
> 



[OMPI users] openmpi 1.3.1 : mpirun status is 0 after receiving TERM signal

2009-04-14 Thread Geoffroy Pignot
Hi,

I am not sure it's a bug, but I think we expect something else when we kill
a process - by the way, the signal propagation works well.
I read an explanation in a previous thread (
http://www.open-mpi.org/community/lists/users/2009/03/8514.php ).

It's not important, but it could contribute to making Open MPI better!

Geoffroy


Re: [OMPI users] openmpi 1.3.1 : mpirun status is 0 after receiving TERM signal

2009-04-14 Thread Jeff Squyres

I believe that this is fixed in 1.3.2.

On Apr 14, 2009, at 10:32 AM, Geoffroy Pignot wrote:


Hi,

I am not sure it's a bug but I think we wait for something else when  
we kill a proccess - by the way , the signal propagation works well.
I read an explanation on a previous thread - ( http://www.open-mpi.org/community/lists/users/2009/03/8514.php 
 ) . .


It's not important but it could contribute to make openmpi better !!

Geoffroy



--
Jeff Squyres
Cisco Systems



Re: [OMPI users] shared libraries issue compiling 1.3.1/intel 10.1.022

2009-04-14 Thread Jeff Squyres

On Apr 13, 2009, at 12:07 PM, Francesco Pietra wrote:


I knew that but have considered it again. I wonder whether the info at
the end of this mail suggests how to operate from the viewpoint of
openmpi in compiling a code.

In trying to compile openmpi-1.3.1 on Debian amd64 lenny, the Intel
10.1.022 compilers do not see their library libimf.so, which is on the Unix
path as required by your reference. A mixed gcc/g++/ifort compilation only
succeeded on a Tyan S2895 board, not on the four-socket Supermicro boards,
which are the ones I need.



I'm not sure what you're saying here.  Compiling OMPI shouldn't be  
influenced on which hardware you have -- it should be a factor of your  
OS's and compilers...?



The problem was solved with gcc, g++ and gfortran. The openmpi-1.3.1
examples run correctly and Amber10 sander.MPI could be built without problems.

What remains unresolved - along similar lines - is the compilation of
Amber9 sander.MPI, which I need. Installing bison satisfied the
requirement for yacc, and serial compilation passed.



I think you need to contact the Amber9 maintainers for help with this;  
we unfortunately have no visibility into their software, although I  
will say this:



gfortran -c -O0 -fno-second-underscore -march=nocona  -ffree-form  -o
evb_init.o _evb_init.f
Error: Can't open included file 'mpif-common.h'



This looks like a broken Open MPI installation and/or not using  
mpif77 / mpif90 to compile the application.  mpif-common.h should be  
installed properly such that it can be found via mpif77 / mpif90.
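
(As a quick sanity check, "mpif90 --showme" prints the underlying compiler
and the -I/-L flags the wrapper adds; mpif-common.h should be in the include
directory listed there.)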


Contact the Amber maintainers and ask them why they're not using  
mpif77 / mpif90 to compile their application.  Or, if they're not  
interested, see if you can fix the Amber9 build process to use  
mpif77 / mpif90.  Without knowing anything about Amber9, that's my  
best guess as to how to make it compile properly.


--
Jeff Squyres
Cisco Systems



Re: [OMPI users] help: seg fault when freeing communicator

2009-04-14 Thread Jeff Squyres
In this case, I think we would need a little more information such as  
your application itself.  Is there any chance you can make a small  
reproducer of the application that we can easily study and reproduce  
the problem?


Have you tried running your application through a memory-checking  
debugger, perchance?
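
(For example, valgrind: something along the lines of

   mpirun -np 27 valgrind --leak-check=yes ./your_program <args>

where ./your_program stands in for your executable; expect the run to be
much slower under valgrind.)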



On Apr 13, 2009, at 10:01 AM, Graham Mark wrote:




This has me stumped. My code seg faults (sometimes) while
it's attempting to free a communicator--at least, that's what the
stack trace indicates, and that's what Totalview also shows.

This happens when I run the program with 27 processes. If I run with  
8,

the program finishes without error. (The program requires that the
number of
processes be a perfect cube.) It happens on two different machines.

The program reads input files and creates a 1-D circular MPI topology
in order to pass input data round robin to all processes. When that is
done, each process does some computation and writes out a file. Then
the program finishes. The seg fault occurs when the communicator
associated with the topology is supposedly being freed as the program
ends.

The openmpi help web page lists information that should be included in
a help request. I'm attaching all of that that I could find: my
command to run the program, the stack trace, the outputs of
'ompi_info', 'limit', 'ibv_devinfo', 'ifconfig', 'uname' and values of
my
PATH and LD_LIBRARY_PATH.

Thanks for your help.

Graham Mark










--
Jeff Squyres
Cisco Systems



Re: [OMPI users] PGI Fortran pthread support

2009-04-14 Thread Orion Poplawski

Prentice Bisbal wrote:

Orion,

I have no trouble getting thread support during configure with PGI 8.0-3


I'm mixing the pgf and gcc compilers which causes the trouble.

Here is the config.log entry for the F77 test:

configure:65969: checking if F77 compiler and POSIX threads work as is
configure:66066: gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 
-fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 
-mtune=generic -fPIC -finline-functions -fno-strict-aliasing -I. -c 
conftest.c

conftest.c: In function 'pthreadtest_':
conftest.c:12: warning: null argument where non-null required (argument 3)
conftest.c:14: warning: null argument where non-null required (argument 1)
conftest.c:16: warning: null argument where non-null required (argument 1)
conftest.c:16: warning: null argument where non-null required (argument 3)
configure:66073: $? = 0
configure:66083: pgf95 -fastsse -fPIC conftestf.f conftest.o -o conftest 
-Wl,-z,noexecstack -lnsl -lutil  -lm

conftestf.f:
conftest.o:(.data.DW.ref.__gcc_personality_v0[DW.ref.__gcc_personality_v0]+0x0): 
undefined reference to `__gcc_personality_v0'


Looks like I need to link with -lgcc_eh somehow.

--
Orion Poplawski
Technical Manager 303-415-9701 x222
NWRA/CoRA DivisionFAX: 303-415-9702
3380 Mitchell Lane  or...@cora.nwra.com
Boulder, CO 80301  http://www.cora.nwra.com


Re: [OMPI users] PGI Fortran pthread support

2009-04-14 Thread Orion Poplawski

Orion Poplawski wrote:


Looks like I need link to -lgcc_eh some how.



./configure LIBS=-lgcc_eh ...

did the trick.

checking if F77 compiler and POSIX threads work as is... yes
checking if C compiler and POSIX threads work with -Kthread... no
checking if C compiler and POSIX threads work with -kthread... no
checking if C compiler and POSIX threads work with -pthread... yes
checking if C++ compiler and POSIX threads work with -Kthread... no
checking if C++ compiler and POSIX threads work with -kthread... no
checking if C++ compiler and POSIX threads work with -pthread... yes
checking for PTHREAD_MUTEX_ERRORCHECK_NP... yes
checking for PTHREAD_MUTEX_ERRORCHECK... yes
checking for working POSIX threads package... yes

--
Orion Poplawski
Technical Manager 303-415-9701 x222
NWRA/CoRA DivisionFAX: 303-415-9702
3380 Mitchell Lane  or...@cora.nwra.com
Boulder, CO 80301  http://www.cora.nwra.com


Re: [OMPI users] PGI Fortran pthread support

2009-04-14 Thread Gus Correa

Hi Orion, Prentice, list

I had a related problem recently,
building OpenMPI with gcc, g++ and pgf90 8.0-4 on CentOS 5.2.
Configure would complete, but not make.

See this thread for a workaround:

http://www.open-mpi.org/community/lists/users/2009/04/8724.php

Gus Correa
-
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
-

Orion Poplawski wrote:

Prentice Bisbal wrote:

Orion,

I have no trouble getting thread support during configure with PGI 8.0-3


I'm mixing the pgf and gcc compilers which causes the trouble.

Here is the config.log entry for the F77 test:

configure:65969: checking if F77 compiler and POSIX threads work as is
configure:66066: gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 
-fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 
-mtune=generic -fPIC -finline-functions -fno-strict-aliasing -I. -c 
conftest.c

conftest.c: In function 'pthreadtest_':
conftest.c:12: warning: null argument where non-null required (argument 3)
conftest.c:14: warning: null argument where non-null required (argument 1)
conftest.c:16: warning: null argument where non-null required (argument 1)
conftest.c:16: warning: null argument where non-null required (argument 3)
configure:66073: $? = 0
configure:66083: pgf95 -fastsse -fPIC conftestf.f conftest.o -o conftest 
-Wl,-z,noexecstack -lnsl -lutil  -lm

conftestf.f:
conftest.o:(.data.DW.ref.__gcc_personality_v0[DW.ref.__gcc_personality_v0]+0x0): 
undefined reference to `__gcc_personality_v0'


Looks like I need link to -lgcc_eh some how.





Re: [OMPI users] PGI Fortran pthread support

2009-04-14 Thread Orion Poplawski

Orion Poplawski wrote:

./configure LIBS=-lgcc_eh ...

did the trick.


Spoke too soon.  This leads to:

/bin/sh ../../../libtool   --mode=link pgf90 -I../../../ompi/include 
-I../../../ompi/include -I. -I. -I../../../ompi/mpi/f90  -fastsse -fPIC 
 -export-dynamic -Wl,-z,noexecstack  -o libmpi_f90.la -rpath 
/opt/openmpi/1.3.1-pgf-64/lib mpi.lo mpi_sizeof.lo 
mpi_comm_spawn_multiple_f90.lo mpi_testall_f90.lo mpi_testsome_f90.lo 
mpi_waitall_f90.lo mpi_waitsome_f90.lo mpi_wtick_f90.lo mpi_wtime_f90.lo 
  ../../../ompi/libmpi.la -lnsl -lutil -lgcc_eh -lm
libtool: link: pgf90 -shared  -fpic -Mnomain  .libs/mpi.o 
.libs/mpi_sizeof.o .libs/mpi_comm_spawn_multiple_f90.o 
.libs/mpi_testall_f90.o .libs/mpi_testsome_f90.o .libs/mpi_waitall_f90.o 
.libs/mpi_waitsome_f90.o .libs/mpi_wtick_f90.o .libs/mpi_wtime_f90.o 
-Wl,-rpath 
-Wl,/scratch/orion/redhat/openmpi-pgf-1.3.1/openmpi-1.3.1/ompi/.libs 
-Wl,-rpath 
-Wl,/scratch/orion/redhat/openmpi-pgf-1.3.1/openmpi-1.3.1/orte/.libs 
-Wl,-rpath 
-Wl,/scratch/orion/redhat/openmpi-pgf-1.3.1/openmpi-1.3.1/opal/.libs 
-Wl,-rpath -Wl,/opt/openmpi/1.3.1-pgf-64/lib 
-L/scratch/orion/redhat/openmpi-pgf-1.3.1/openmpi-1.3.1/orte/.libs 
-L/scratch/orion/redhat/openmpi-pgf-1.3.1/openmpi-1.3.1/opal/.libs 
../../../ompi/.libs/libmpi.so 
/scratch/orion/redhat/openmpi-pgf-1.3.1/openmpi-1.3.1/orte/.libs/libopen-rte.so 
/scratch/orion/redhat/openmpi-pgf-1.3.1/openmpi-1.3.1/opal/.libs/libopen-pal.so 
-ldl -lnsl -lutil -lgcc_eh -lm  -Wl,-z -Wl,noexecstack   -pthread 
-Wl,-soname -Wl,libmpi_f90.so.0 -o .libs/libmpi_f90.so.0.0.0

pgf90-Error-Unknown switch: -pthread

Looks like libtool is adding -pthread because it sees that you use 
-pthread to link C programs and assumes that all linkers use it.


--
Orion Poplawski
Technical Manager 303-415-9701 x222
NWRA/CoRA DivisionFAX: 303-415-9702
3380 Mitchell Lane  or...@cora.nwra.com
Boulder, CO 80301  http://www.cora.nwra.com


Re: [OMPI users] PGI Fortran pthread support

2009-04-14 Thread Orion Poplawski

Orion Poplawski wrote:
Looks like libtool is adding -pthread because it sees that you use 
-pthread to link C programs and assumes that all linkers use it.




Sorry, it inherits it from libmpi.la.  I hate libtool.

--
Orion Poplawski
Technical Manager 303-415-9701 x222
NWRA/CoRA DivisionFAX: 303-415-9702
3380 Mitchell Lane  or...@cora.nwra.com
Boulder, CO 80301  http://www.cora.nwra.com


Re: [OMPI users] PGI Fortran pthread support

2009-04-14 Thread Gus Correa

Hi Orion

That's exactly what happened to me.
Configured OK, failed on make because of "-pthread".
See my message from a minute ago, and this thread,
for a workaround suggested by Jeff Squyres,
of stripping off "-pthread" from the pgf90 flags:

http://www.open-mpi.org/community/lists/users/2009/04/8724.php

There is a little script in the above message to do the job.

I hope it helps.

Gus Correa
-
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
-

Orion Poplawski wrote:

Orion Poplawski wrote:

./configure LIBS=-lgcc_eh ...

did the trick.


Spoke too soon.  This leads to:

/bin/sh ../../../libtool   --mode=link pgf90 -I../../../ompi/include 
-I../../../ompi/include -I. -I. -I../../../ompi/mpi/f90  -fastsse -fPIC 
 -export-dynamic -Wl,-z,noexecstack  -o libmpi_f90.la -rpath 
/opt/openmpi/1.3.1-pgf-64/lib mpi.lo mpi_sizeof.lo 
mpi_comm_spawn_multiple_f90.lo mpi_testall_f90.lo mpi_testsome_f90.lo 
mpi_waitall_f90.lo mpi_waitsome_f90.lo mpi_wtick_f90.lo mpi_wtime_f90.lo 
  ../../../ompi/libmpi.la -lnsl -lutil -lgcc_eh -lm
libtool: link: pgf90 -shared  -fpic -Mnomain  .libs/mpi.o 
.libs/mpi_sizeof.o .libs/mpi_comm_spawn_multiple_f90.o 
.libs/mpi_testall_f90.o .libs/mpi_testsome_f90.o .libs/mpi_waitall_f90.o 
.libs/mpi_waitsome_f90.o .libs/mpi_wtick_f90.o .libs/mpi_wtime_f90.o 
-Wl,-rpath 
-Wl,/scratch/orion/redhat/openmpi-pgf-1.3.1/openmpi-1.3.1/ompi/.libs 
-Wl,-rpath 
-Wl,/scratch/orion/redhat/openmpi-pgf-1.3.1/openmpi-1.3.1/orte/.libs 
-Wl,-rpath 
-Wl,/scratch/orion/redhat/openmpi-pgf-1.3.1/openmpi-1.3.1/opal/.libs 
-Wl,-rpath -Wl,/opt/openmpi/1.3.1-pgf-64/lib 
-L/scratch/orion/redhat/openmpi-pgf-1.3.1/openmpi-1.3.1/orte/.libs 
-L/scratch/orion/redhat/openmpi-pgf-1.3.1/openmpi-1.3.1/opal/.libs 
../../../ompi/.libs/libmpi.so 
/scratch/orion/redhat/openmpi-pgf-1.3.1/openmpi-1.3.1/orte/.libs/libopen-rte.so 
/scratch/orion/redhat/openmpi-pgf-1.3.1/openmpi-1.3.1/opal/.libs/libopen-pal.so 
-ldl -lnsl -lutil -lgcc_eh -lm  -Wl,-z -Wl,noexecstack   -pthread 
-Wl,-soname -Wl,libmpi_f90.so.0 -o .libs/libmpi_f90.so.0.0.0

pgf90-Error-Unknown switch: -pthread

Looks like libtool is adding -pthread because it sees that you use 
-pthread to link C programs and assumes that all linkers use it.






Re: [OMPI users] PGI Fortran pthread support

2009-04-14 Thread Orion Poplawski

Gus Correa wrote:

Hi Orion, Prentice, list

I had a related problem recently,
building OpenMPI with gcc, g++ and pgf90 8.0-4 on CentOS 5.2.
Configure would complete, but not make.

See this thread for a workaround:

http://www.open-mpi.org/community/lists/users/2009/04/8724.php

Gus Correa


Thanks, that explains the build failures.

--
Orion Poplawski
Technical Manager 303-415-9701 x222
NWRA/CoRA DivisionFAX: 303-415-9702
3380 Mitchell Lane  or...@cora.nwra.com
Boulder, CO 80301  http://www.cora.nwra.com


Re: [OMPI users] PGI Fortran pthread support

2009-04-14 Thread Jeff Squyres

On Apr 14, 2009, at 11:28 AM, Orion Poplawski wrote:


Sorry, it inherits it from libmpi.la.  I hate libtool.




To be fair, Libtool actually does a pretty darn good job at a very  
complex job.  :-)


These corner cases are pretty obscure (mixing one vendor's fortran  
compiler with another vendor's C compiler) and admittedly are not  
handled properly by Libtool.  But one has to ask: what exactly *is*  
the Right Thing to do in this scenario in a general case?  It's a  
pretty hard problem to solve...


FWIW, I did post this issue to the Libtool bug list.

--
Jeff Squyres
Cisco Systems



Re: [OMPI users] PGI Fortran pthread support

2009-04-14 Thread Orion Poplawski

Gus Correa wrote:

Hi Orion, Prentice, list

I had a related problem recently,
building OpenMPI with gcc, g++ and pgf90 8.0-4 on CentOS 5.2.
Configure would complete, but not make.



An easier solution is to set FC to "pgf90 -noswitcherror". It does not
appear to interfere with any configure tests.
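
For example, an untested sketch for a mixed gcc + PGI Fortran build:

   ./configure CC=gcc CXX=g++ F77=pgf77 FC="pgf90 -noswitcherror" ...

with the rest of the usual options; F77 may need the same treatment.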


--
Orion Poplawski
Technical Manager 303-415-9701 x222
NWRA/CoRA DivisionFAX: 303-415-9702
3380 Mitchell Lane  or...@cora.nwra.com
Boulder, CO 80301  http://www.cora.nwra.com


Re: [OMPI users] PGI Fortran pthread support

2009-04-14 Thread Gus Correa

Orion Poplawski wrote:

Gus Correa wrote:

Hi Orion, Prentice, list

I had a related problem recently,
building OpenMPI with gcc, g++ and pgf90 8.0-4 on CentOS 5.2.
Configure would complete, but not make.



Easier solution is to set FC to "pgf90 -noswitcherror".  Does not appear 
to interfere with any configure tests.




Thank you, Orion!

That will also solve the problem,
and is certainly neater than stripping off the "-pthread" flag
with a fake compiler script.

I didn't know about "-noswitcherror"; there are so many switches,
not all of them very clear, and the number keeps growing ...

The only problem I can think of would be if you misspelled another switch,
one that you really wanted to use.
The pgf90 man page says it will accept the misspelling
with a warning message.
So, to be safe with "-noswitcherror",
one has to grep the make log for PGI warning messages, I suppose,
which is no big deal anyway.

Gus Correa
-
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
-


Re: [OMPI users] all2all algorithms

2009-04-14 Thread Jeff Squyres

George can speak more definitively about this.

In general, our "tuned" coll component (plugin) does exactly these  
kinds of determinations to figure out which algorithm to use at  
runtime.  Not only are communicator process counts involved, but also  
size of message is considered.  I count 5 different all2all algorithms  
in our tuned module (but George will have to speak about how each one  
is chosen).  U. Tennessee published some papers on their method; they  
basically hard-coded minimized selection tables based on oodles of  
runs and empirical data.


If you'd like to look at the code, it's here in the tree:


https://svn.open-mpi.org/source/xref/ompi_1.3/ompi/mca/coll/tuned/coll_tuned_alltoall.c
(v1.3 release branch)


https://svn.open-mpi.org/source/xref/ompi-trunk/ompi/mca/coll/tuned/coll_tuned_alltoall.c
(development trunk)

I don't think there's much difference between the two, but they can  
drift since v1.3 is the release branch and the trunk is active  
development.
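
(If you want to experiment by hand, the tuned component's choices can also
be overridden at run time via MCA parameters; if I remember the names
correctly, something along the lines of

   mpirun --mca coll_tuned_use_dynamic_rules 1 \
          --mca coll_tuned_alltoall_algorithm 2 ...

forces a particular alltoall algorithm.  "ompi_info --param coll tuned"
lists the exact parameter names and the algorithm numbering for your build.)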



On Apr 12, 2009, at 3:20 PM, Tom Rosmond wrote:

I am curious about the algorithm(s) used in the OpenMPI  
implementations

of the all2all and all2allv.  As many of you know, there are alternate
algorithms for all2all type operations, such as that of Plimpton, et  
al
(2006), that basically exchange latency costs for bandwidth costs,  
which

pays big dividends for large processor numbers, e.g. 100's or 1000's.
Does OpenMPI, or any other MPI distributions, test for processor count
and switch to such an all2all algorithm at some point?  I realize the
switchover point would be very much a function of the architecture,  
and

so could be a risky decision in some cases.  Nevertheless, has it been
considered?




--
Jeff Squyres
Cisco Systems



Re: [OMPI users] Debugging memory use of Open MPI

2009-04-14 Thread Shaun Jackman

Hi Eugene,

Eugene Loh wrote:
At 2500 bytes, all messages will presumably be sent "eagerly" -- without 
waiting for the receiver to indicate that it's ready to receive that 
particular message.  This would suggest congestion, if any, is on the 
receiver side.  Some kind of congestion could, I suppose, still occur 
and back up on the sender side.


Can anyone chime in as to what the message size limit is for an 
`eager' transmission?


On the other hand, I assume the memory imbalance we're talking about is 
rather severe.  Much more than 2500 bytes to be noticeable, I would 
think.  Is that really the situation you're imagining?


The memory imbalance is drastic. I'm expecting 2 GB of memory use per
process. The well-behaved processes (13/16) use the expected amount of
memory; the remaining (3/16) misbehaving processes use more than twice
as much memory. The specifics vary from run to run of course. So, yes,
there are gigabytes of unexpected memory use to track down.


There are tracing tools to look at this sort of thing.  The only one I 
have much familiarity with is Sun Studio / Sun HPC ClusterTools.  Free 
download, available on Solaris or Linux, SPARC or x64, plays with OMPI.  
You can see a timeline with message lines on it to give you an idea if 
messages are being received/completed long after they were sent.  
Another interesting view is constructing a plot vs time of how many 
messages are in-flight at any moment (including as a function of 
receiver).  Lots of similar tools out there... VampirTrace (tracing side 
only, need to analyze the data), Jumpshot, etc.  Again, though, there's 
a question in my mind if you're really backing up 1000s or more of 
messages.  (I'm assuming the memory imbalances are at least Mbytes.)


I'll check out Sun HPC ClusterTools. Thanks for the tip.

Assuming the problem is congestion and that messages are backing up, 
is there an accepted method of dealing with this situation? It seems 
to me the general approach would be


if (number of outstanding messages > high water mark)
    wait until (number of outstanding messages < low water mark)

where I suppose the `number of outstanding messages' is defined as the 
number of messages that have been sent and not yet received by the 
other side. Is there a way to get this number from MPI without having 
to code it at the application level?
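
For illustration, a minimal sender-side sketch of that idea in C follows
(the function name and water marks are made up, and it only counts sends
that have not yet completed locally, which for eagerly sent messages is
not quite the same as "received by the other side"):

#include <mpi.h>

#define HIGH_WATER 64          /* made-up thresholds */
#define LOW_WATER  16

static MPI_Request reqs[HIGH_WATER];
static int n_outstanding = 0;

/* Post a nonblocking send, draining first if too many earlier sends
 * are still outstanding on this rank. */
static void throttled_isend(const void *buf, int count, MPI_Datatype type,
                            int dest, int tag, MPI_Comm comm)
{
    if (n_outstanding >= HIGH_WATER) {
        /* Fall back below the low-water mark before posting more. */
        while (n_outstanding > LOW_WATER) {
            int idx;
            MPI_Waitany(n_outstanding, reqs, &idx, MPI_STATUS_IGNORE);
            reqs[idx] = reqs[--n_outstanding];   /* compact the array */
        }
    }
    MPI_Isend((void *)buf, count, type, dest, tag, comm,
              &reqs[n_outstanding++]);
}

/* Before MPI_Finalize, the caller still has to complete whatever is
 * left: MPI_Waitall(n_outstanding, reqs, MPI_STATUSES_IGNORE); */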


Thanks,
Shaun


Re: [OMPI users] Debugging memory use of Open MPI

2009-04-14 Thread Eugene Loh

Shaun Jackman wrote:


Eugene Loh wrote:

At 2500 bytes, all messages will presumably be sent "eagerly" -- 
without waiting for the receiver to indicate that it's ready to 
receive that particular message.  This would suggest congestion, if 
any, is on the receiver side.  Some kind of congestion could, I 
suppose, still occur and back up on the sender side.


Can anyone chime in as to what the message size limit is for an 
`eager' transmission?


ompi_info -a | grep eager
depends on the BTL.  E.g., sm=4K but tcp is 64K.  self is 128K.

On the other hand, I assume the memory imbalance we're talking about 
is rather severe.  Much more than 2500 bytes to be noticeable, I 
would think.  Is that really the situation you're imagining?


The memory imbalance is drastic. I'm expecting 2 GB of memory use per 
process. The heaving processes (13/16) use the expected amount of 
memory; the remainder (3/16) misbehaving processes use more than twice 
as much memory. The specifics vary from run to run of course. So, yes, 
there is gigs of unexpected memory use to track down.


Umm, how big of a message imbalance do you think you might have?  (The 
inflection in my voice doesn't come out well in e-mail.)  Anyhow, that 
sounds like, um, "lots" of 2500-byte messages.


Re: [OMPI users] Debugging memory use of Open MPI

2009-04-14 Thread Ralph Castain


On Apr 14, 2009, at 12:02 PM, Shaun Jackman wrote:


Hi Eugene,

Eugene Loh wrote:
At 2500 bytes, all messages will presumably be sent "eagerly" --  
without waiting for the receiver to indicate that it's ready to  
receive that particular message.  This would suggest congestion, if  
any, is on the receiver side.  Some kind of congestion could, I  
suppose, still occur and back up on the sender side.


Can anyone chime in as to what the message size limit is for an  
`eager' transmission?


On the other hand, I assume the memory imbalance we're talking  
about is rather severe.  Much more than 2500 bytes to be  
noticeable, I would think.  Is that really the situation you're  
imagining?


The memory imbalance is drastic. I'm expecting 2 GB of memory use  
per process. The well-behaved processes (13/16) use the expected amount  
of memory; the remaining (3/16) misbehaving processes use more than  
twice as much memory. The specifics vary from run to run, of course.  
So, yes, there are gigabytes of unexpected memory use to track down.


There are tracing tools to look at this sort of thing.  The only  
one I have much familiarity with is Sun Studio / Sun HPC  
ClusterTools.  Free download, available on Solaris or Linux, SPARC  
or x64, plays with OMPI.  You can see a timeline with message lines  
on it to give you an idea if messages are being received/completed  
long after they were sent.  Another interesting view is  
constructing a plot vs time of how many messages are in-flight at  
any moment (including as a function of receiver).  Lots of similar  
tools out there... VampirTrace (tracing side only, need to analyze  
the data), Jumpshot, etc.  Again, though, there's a question in my  
mind if you're really backing up 1000s or more of messages.  (I'm  
assuming the memory imbalances are at least Mbytes.)


I'll check out Sun HPC ClusterTools. Thanks for the tip.

Assuming the problem is congestion and that messages are backing up,  
is there an accepted method of dealing with this situation? It seems  
to me the general approach would be


if (number of outstanding messages > high water mark)
   wait until (number of outstanding messages < low water mark)

where I suppose the `number of outstanding messages' is defined as  
the number of messages that have been sent and not yet received by  
the other side. Is there a way to get this number from MPI without  
having to code it at the application level?




It isn't quite that simple. The problem is that these are typically  
"unexpected" messages - i.e., some processes are running faster than  
this one, so this one keeps falling behind, which means it has to  
"stockpile" messages for later processing.


It is impossible to predict who is going to send the next unexpected  
message, so attempting to say "wait" means sending a broadcast to all  
procs - a very expensive operation, especially since it can be any  
number of procs that feel overloaded.


We had the same problem when working with collectives, where memory  
was being overwhelmed by stockpiled messages. The solution (available  
in the 1.3 series) in that case was to use the "sync" collective  
system. This monitors the number of times a collective is being  
executed that can cause this type of problem, and then inserts an  
MPI_Barrier to allow time for the processes to "drain" all pending  
messages. You can control how frequently this happens, and whether the  
barrier occurs before or after the specified number of operations.


If you are using collectives, or can reframe the algorithm so you do,  
you might give that  a try - it has solved similar problems here. If  
it helps, then you should "tune" it by increasing the provided number  
(thus decreasing the frequency of the inserted barrier) until you find  
a value that works for you - this will minimize performance impact on  
your job caused by the inserted barriers.
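
For example (the MCA parameter name here is from memory for the 1.3  
series -- please verify it with "ompi_info --param coll sync" on your  
install):

  mpirun --mca coll_sync_barrier_after 1000 -np 16 ./your_app

would insert an MPI_Barrier after every 1000th such collective; raising  
that number reduces the overhead until you find the largest value that  
still keeps memory bounded.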


If you are not using collectives and/or cannot do so, then perhaps we  
need to consider a similar approach for simple send/recv operations.  
It would probably have to be done inside the MPI library, but may be  
hard to implement. The collective works because we know everyone has  
to be in it. That isn't true for send/recv, so the barrier approach  
won't work there - we would need some other method of stopping procs  
to allow things to catch up.


Not sure what that would be offhand, but perhaps some other wiser  
head will think of something!


HTH
Ralph



Thanks,
Shaun




Re: [OMPI users] Debugging memory use of Open MPI

2009-04-14 Thread Eugene Loh



On Apr 14, 2009, at 12:02 PM, Shaun Jackman wrote:


Assuming the problem is congestion and that messages are backing up, ...


I'd check this assumption first before going too far down that path.  
You might be able to instrument your code to spit out sends and 
receives.  VampirTrace (and PERUSE) instrumentation is already in OMPI, 
but any of these instrumentation approaches require that you then 
analyze the data you generate... to see how many messages get caught "in 
flight" at any time.  Again, there are the various tools I mentioned 
earlier.  If I understand correctly, the problem you're looking for is 
*millions* of messages backing up (in order to induce memory imbalances 
of Gbytes).  Should be easy to spot.


Maybe the real tool to use is some memory-tracing tool.  I don't know 
much about these.  Sun Studio?  Valgrind?  Sorry, but I'm really 
clueless about what tools to use there.


Re: [OMPI users] shared libraries issue compiling 1.3.1/intel 10.1.022

2009-04-14 Thread Francesco Pietra
mpirun -x LD_LIBRARY_PATH -host tya64 connectivity_c

complained about libimf.so (not found), just the same as without "-x
LD_LIBRARY_PATH" (I also tried giving the full path, with the same
error).

while

# dpkg --search libimf.so
/opt/intel/fce/10.1.022/lib/libimf.so
/opt/intel/fce/10.1.022/lib/libimf.so

All above on a tyan S2895 with opteron (debian amd64 lenny). On the
same motherboard and OS, a cross compilation gcc g++ ifort was
successful as to the connectivity (and hello) tests.
-

On a Supermicro 4-socket opteron (same OS) even the cross compilation
failed. In contrast, a gcc g++ gfortran compilation was successful as
to the connectivity (and hello) tests; however, gfortran is not capable
of compiling the faster code of the suite I am interested in (Amber10).
--
I came across what follows

"dynamic linkage is also a headache in that the mechanisms
used to find shared libraries during dynamic loading are not all that robust
on Linux systems running MPICH or other MPI packages
  for the compilers that use compiler shared
libraries (ifort, pathscale), we use LD_LIBRARY_PATH during
configuration to set an -rpath
linkage option, which is reliably available in the executable."

Does that mean adding as a flag

-rpath=LD_LIBRARY_PATH

when compiling both openmpi and amber? I can't find examples as to the
correct syntax.
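
(For reference, the usual spelling with GNU ld is -Wl,-rpath,<dir>
passed in LDFLAGS at configure time.  Taking the library directory from
the dpkg output above -- treat this as an untested illustration:

  ./configure CC=icc CXX=icpc F77=ifort FC=ifort \
      LDFLAGS="-Wl,-rpath,/opt/intel/fce/10.1.022/lib" ...

so that the resulting executables, including orted, record where to
find libimf.so without depending on LD_LIBRARY_PATH at run time.)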

thanks
francesco



On Fri, Apr 10, 2009 at 6:27 PM, Mostyn Lewis  wrote:
> If you want to find libimf.so, which is a shared INTEL library,
> pass the library path with a -x on mpirun
>
> mpirun  -x LD_LIBRARY_PATH 
>
> DM
>
>
> On Fri, 10 Apr 2009, Francesco Pietra wrote:
>
>> Hi Gus:
>>
>> If you feel that the observations below are not relevant to openmpi,
>> please disregard the message. You have already kindly devoted so much
>> time to my problems.
>>
>> The "limits.h" issue is solved with 10.1.022 intel compilers: as I
>> felt, the problem was with the pre-10.1.021 version of the intel C++
>> and ifort compilers, a subtle bug observed also by gentoo people (web
>> intel). There remains an orted issue.
>>
>> The openmpi 1.3.1 installation was able to compile connectivity_c.c
>> and hello_c.c, though, running mpirun (output below between ===):
>>
>> =
>> /usr/local/bin/mpirun -host -n 4 connectivity_c 2>&1 | tee
>> connectivity.out
>> /usr/local/bin/orted: error while loading shared libraries: libimf.so:
>> cannot open shared object file: No such file or directory
>> --
>> A daemon (pid 8472) died unexpectedly with status 127 while attempting
>> to launch so we are aborting.
>>
>> There may be more information reported by the environment (see above).
>>
>> This may be because the daemon was unable to find all the needed shared
>> libraries on the remote node. You may set your LD_LIBRARY_PATH to have the
>> location of the shared libraries on the remote nodes and this will
>> automatically be forwarded to the remote nodes.
>> --
>> --
>> mpirun noticed that the job aborted, but has no info as to the process
>> that caused that situation.
>> --
>> mpirun: clean termination accomplished
>> =
>>
>> At this point, Amber10 serial compiled nicely (all intel, like
>> openmpi), but parallel compilation, as expected, returned the same
>> problem above:
>>
>> =
>> export TESTsander=/usr/local/amber10/exe/sander.MPI; make
>> test.sander.BASIC
>> make[1]: Entering directory `/usr/local/amber10/test'
>> cd cytosine && ./Run.cytosine
>> orted: error while loading shared libraries: libimf.so: cannot open
>> shared object file: No such file or directory
>> --
>> A daemon (pid 8371) died unexpectedly with status 127 while attempting
>> to launch so we are aborting.
>>
>> There may be more information reported by the environment (see above).
>>
>> This may be because the daemon was unable to find all the needed shared
>> libraries on the remote node. You may set your LD_LIBRARY_PATH to have the
>> location of the shared libraries on the remote nodes and this will
>> automatically be forwarded to the remote nodes.
>> --
>> --
>> mpirun noticed that the job aborted, but has no info as to the process
>> that caused that situation.
>> --
>> mpirun: clean termination accomplished
>>
>>  ./Run.cytosine:  Program error
>> make[1]: *** [test.sander.BASIC] Error 1
>> make[1]: Leaving directory `/usr/local/amber10/test'
>> make: *** [test.sander.BASIC.MP

Re: [OMPI users] XLF and 1.3.1

2009-04-14 Thread Jean-Michel Beuken

ok !  thank you Nysal

Can you try adding --disable-dlopen to the configure command line

--Nysal

On Tue, 2009-04-14 at 10:19 +0200, Jean-Michel Beuken wrote:

there is a problem of "multiple definition"...

any advice?


it's resolved the problem of  "multiple definition"...
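
(For the archives, that amounts to a configure line along the lines of
the following; the XL compiler names are just the usual ones and may
differ on your system:

  ./configure CC=xlc CXX=xlC F77=xlf FC=xlf90 --disable-dlopen ...

)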

regards

jmb


Re: [OMPI users] shared libraries issue compiling 1.3.1/intel 10.1.022

2009-04-14 Thread Ralph Castain
The -x option only applies to your application processes - it is never  
applied to the OMPI processes such as the OMPI daemons (orteds). If  
you built OMPI with the Intel library, then trying to pass the path to  
libimf via -x will fail - your application processes will get that  
library path, but not the orted.


A clearer error message has been added to 1.3.2.

What you need to do here is add the path to your intel libraries to  
LD_LIBRARY_PATH in the .cshrc (or whatever shell you are using) on  
your compute nodes. Alternatively, if the libraries are in the same  
place on the node where mpirun is executed, you can simply set  
LD_LIBRARY_PATH in your .cshrc there and it will be propagated.
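
For example, using the library directory that dpkg reported earlier in  
this thread (adjust the path to your setup, and prepend your existing  
value if the variable is already set):

  # csh/tcsh, in ~/.cshrc on the compute nodes
  setenv LD_LIBRARY_PATH /opt/intel/fce/10.1.022/lib

  # bash, in ~/.bashrc
  export LD_LIBRARY_PATH=/opt/intel/fce/10.1.022/lib

That way orted picks the path up from the login environment, whereas  
-x only reaches the application processes.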


Ralph

On Apr 14, 2009, at 12:53 PM, Francesco Pietra wrote:


mpirun -x LD_LIBRARY_PATH -host tya64 connectivity_c

complained about libimf.so (not found), just the same as without "-x
LD_LIBRARY_PATH" (tried to give the full path to the PATH with same
error)

while

# dpkg --search libimf.so
/opt/intel/fce/10.1.022/lib/libimf.so
/opt/intel/fce/10.1.022/lib/libimf.so

All above on a tyan S2895 with opteron (debian amd64 lenny). On the
same motherboard and OS, a cross compilation gcc g++ ifort was
successful as to the connectivity (and hello) tests.
-

On a Supermicro 4-socket opteron (same OS) even the cross compilation
failed. In contrast, a gcc g++ gfortran compilation was successful as
to the connectivity (and hello) tests; however, gfortran is not capable
of compiling the faster code of the suite I am interested in (Amber10).
--
I came across what follows

"dynamic linkage is also a headache in that the mechanisms
used to find shared libraries during dynamic loading are not all  
that robust

on Linux systems running MPICH or other MPI packages
  for the compilers that use compiler shared
libraries (ifort, pathscale), we use LD_LIBRARY_PATH during
configuration to set an -rpath
linkage option, which is reliably available in the executable."

Does that mean adding as a flag

-rpath=LD_LIBRARY_PATH

when compiling both openmpi and amber? I can't find examples as to the
correct syntax.

thanks
francesco



On Fri, Apr 10, 2009 at 6:27 PM, Mostyn Lewis   
wrote:

If you want to find libimf.so, which is a shared INTEL library,
pass the library path with a -x on mpirun

mpirun  -x LD_LIBRARY_PATH 

DM


On Fri, 10 Apr 2009, Francesco Pietra wrote:


Hi Gus:

If you feel that the observations below are not relevant to openmpi,
please disregard the message. You have already kindly devoted so  
much

time to my problems.

The "limits.h" issue is solved with 10.1.022 intel compilers: as I
felt, the problem was with the pre-10.1.021 version of the intel C++
and ifort compilers, a subtle bug observed also by gentoo people  
(web

intel). There remains an orted issue.

The openmpi 1.3.1 installation was able to compile connectivity_c.c
and hello_c.c, though, running mpirun (output below between ===):

=
/usr/local/bin/mpirun -host -n 4 connectivity_c 2>&1 | tee
connectivity.out
/usr/local/bin/orted: error while loading shared libraries:  
libimf.so:

cannot open shared object file: No such file or directory
--
A daemon (pid 8472) died unexpectedly with status 127 while  
attempting

to launch so we are aborting.

There may be more information reported by the environment (see  
above).


This may be because the daemon was unable to find all the needed  
shared
libraries on the remote node. You may set your LD_LIBRARY_PATH to  
have the

location of the shared libraries on the remote nodes and this will
automatically be forwarded to the remote nodes.
--
--
mpirun noticed that the job aborted, but has no info as to the  
process

that caused that situation.
--
mpirun: clean termination accomplished
=

At this point, Amber10 serial compiled nicely (all intel, like
openmpi), but parallel compilation, as expected, returned the same
problem above:

=
export TESTsander=/usr/local/amber10/exe/sander.MPI; make
test.sander.BASIC
make[1]: Entering directory `/usr/local/amber10/test'
cd cytosine && ./Run.cytosine
orted: error while loading shared libraries: libimf.so: cannot open
shared object file: No such file or directory
--
A daemon (pid 8371) died unexpectedly with status 127 while  
attempting

to launch so we are aborting.

There may be more information reported by the environment (see  
above).


This may be because the daemon was unable to find all the needed  
shared
libraries on the remote node. You may set your LD_LIBRARY_PATH to  
have the

location of the shared libraries 

Re: [OMPI users] Debugging memory use of Open MPI

2009-04-14 Thread Shaun Jackman

Eugene Loh wrote:

ompi_info -a | grep eager
depends on the BTL.  E.g., sm=4K but tcp is 64K.  self is 128K.


Thanks, Eugene.

On the other hand, I assume the memory imbalance we're talking about 
is rather severe.  Much more than 2500 bytes to be noticeable, I 
would think.  Is that really the situation you're imagining?
The memory imbalance is drastic. I'm expecting 2 GB of memory use per 
process. The well-behaved processes (13/16) use the expected amount of 
memory; the remaining (3/16) misbehaving processes use more than twice 
as much memory. The specifics vary from run to run, of course. So, yes, 
there are gigabytes of unexpected memory use to track down.


Umm, how big of a message imbalance do you think you might have?  (The 
inflection in my voice doesn't come out well in e-mail.)  Anyhow, that 
sounds like, um, "lots" of 2500-byte messages.


The message imbalance could be very large. Each process is running 
pretty close to its memory capacity. If a backlog of messages causes a 
buffer to grow to the point where the process starts swapping, it will 
very quickly fall very far behind. There are roughly a billion 25-byte 
operations being sent in total, or tens of millions of MPI_Send messages 
(at 100 operations per MPI_Send).
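
(Roughly: 10^9 operations at 100 operations per MPI_Send is about 10^7 
messages of ~2500 bytes each, on the order of 25 GB of payload across 
the whole job -- so even a few percent of those piling up unexpectedly 
on one rank would account for gigabytes.)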


Cheers,
Shaun


[OMPI users] Problem with MPI_File_read()

2009-04-14 Thread Jovana Knezevic
Hello everyone!

I have a problem using MPI_File_read() in C. The simple code below,
which tries to read an integer, prints the wrong result to standard
output (825307441 instead of 1). I tried this function with the
'MPI_CHAR' datatype and it works. Probably I'm not using it properly
for MPI_INT, but I can't find what the problem could be anywhere in the
literature, so I would really appreciate it if any of you could quickly
check out the code below and maybe give me some advice, or tell me
what's wrong with it.

Thanks a lot in advance.

Regards,
Jovana Knezevic


#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

void
read_file (MPI_File *infile)
{
  MPI_Status status;
  int *buf;
  int i;
  buf = (int *)malloc( 5 * sizeof(int) );

  for(i=0; i<5; i++)
buf[i]=0;


  MPI_File_read(*infile, buf, 1, MPI_INT, &status);
  printf("%d\n", buf[0]);
}


int
main (int argc, char **argv)
{
  MPI_File infile1;
  int procID, nproc;

 MPI_Init (&argc, &argv);
 MPI_Comm_rank (MPI_COMM_WORLD, &procID);
 MPI_Comm_size (MPI_COMM_WORLD, &nproc);


 printf("begin\n");
 MPI_File_open(MPI_COMM_WORLD,"first.dat"
  ,MPI_MODE_RDONLY,MPI_INFO_NULL,&infile1);

 if(procID==0) {
   printf("proc0\n");
   read_file(&infile1);
 }
 else
   {
 printf("proc1\n");
   }
 MPI_File_close(&infile1);
 printf("end\n");


 MPI_Finalize();

 return EXIT_SUCCESS;

}


Re: [OMPI users] Problem with MPI_File_read()

2009-04-14 Thread Shaun Jackman

Hi Jovana,

825307441 is 0x31313131 in base 16 (hexadecimal), which is the string
`1111' in ASCII. MPI_File_read reads in binary values (not ASCII), just 
as the standard functions read(2) and fread(3) do.


So, your program is fine; however, your data file (first.dat) is not.
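
For example, a first.dat that your test program would read back as the
integer 1 can be written with a few lines of plain C (assuming the same
machine, so that byte order and sizeof(int) match):

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("first.dat", "wb");
    int   values[5] = { 1, 2, 3, 4, 5 };  /* stored as raw binary ints */

    if (f == NULL)
        return 1;
    fwrite(values, sizeof(int), 5, f);    /* 20 bytes, not ASCII text */
    fclose(f);
    return 0;
}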

Cheers,
Shaun

Jovana Knezevic wrote:

Hello everyone!

I have a problem using MPI_File_read() in C. The simple code below,
which tries to read an integer, prints the wrong result to standard
output (825307441 instead of 1). I tried this function with the
'MPI_CHAR' datatype and it works. Probably I'm not using it properly
for MPI_INT, but I can't find what the problem could be anywhere in the
literature, so I would really appreciate it if any of you could quickly
check out the code below and maybe give me some advice, or tell me
what's wrong with it.

Thanks a lot in advance.

Regards,
Jovana Knezevic


#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

void
read_file (MPI_File *infile)
{
  MPI_Status status;
  int *buf;
  int i;
  buf = (int *)malloc( 5 * sizeof(int) );

  for(i=0; i<5; i++)
buf[i]=0;


  MPI_File_read(*infile, buf, 1, MPI_INT, &status);
  printf("%d\n", buf[0]);
}


int
main (int argc, char **argv)
{
  MPI_File infile1;
  int procID, nproc;

 MPI_Init (&argc, &argv);
 MPI_Comm_rank (MPI_COMM_WORLD, &procID);
 MPI_Comm_size (MPI_COMM_WORLD, &nproc);


 printf("begin\n");
 MPI_File_open(MPI_COMM_WORLD,"first.dat"
  ,MPI_MODE_RDONLY,MPI_INFO_NULL,&infile1);

 if(procID==0) {
   printf("proc0\n");
   read_file(&infile1);
 }
 else
   {
 printf("proc1\n");
   }
 MPI_File_close(&infile1);
 printf("end\n");


 MPI_Finalize();

 return EXIT_SUCCESS;

}



Re: [OMPI users] Debugging memory use of Open MPI

2009-04-14 Thread Eugene Loh

Shaun Jackman wrote:


Eugene Loh wrote:

On the other hand, I assume the memory imbalance we're talking 
about is rather severe.  Much more than 2500 bytes to be 
noticeable, I would think.  Is that really the situation you're 
imagining?


The memory imbalance is drastic. I'm expecting 2 GB of memory use 
per process. The well-behaved processes (13/16) use the expected amount 
of memory; the remaining (3/16) misbehaving processes use more than 
twice as much memory. The specifics vary from run to run, of course. 
So, yes, there are gigabytes of unexpected memory use to track down.


Umm, how big of a message imbalance do you think you might have?  
(The inflection in my voice doesn't come out well in e-mail.)  
Anyhow, that sounds like, um, "lots" of 2500-byte messages.


The message imbalance could be very large. Each process is running 
pretty close to its memory capacity. If a backlog of messages causes a 
buffer to grow to the point where the process starts swapping, it will 
very quickly fall very far behind. There are roughly a billion 25-byte 
operations being sent in total, or tens of millions of MPI_Send messages 
(at 100 operations per MPI_Send).


Okay.  Attached is a "little" note I wrote up illustrating memory 
profiling with Sun tools.  (It's "big" because I ended up including a 
few screenshots.)  The program has a bunch of one-way message traffic 
and some user-code memory allocation.  I then rerun with the receiver 
sleeping before jumping into action.  The messages back up and OMPI ends 
up allocating a bunch of memory.  The tools show you who (user or OMPI) 
is allocating how much memory and how big of a message backlog develops 
and how the sender starts stalling out (which is a good thing!).  
Anyhow, a useful exercise for me and hopefully helpful for you.


memory-profiling.tar.gz
Description: CPIO file


Re: [OMPI users] Debugging memory use of Open MPI

2009-04-14 Thread Shaun Jackman

Eugene Loh wrote:
Okay.  Attached is a "little" note I wrote up illustrating memory 
profiling with Sun tools.  (It's "big" because I ended up including a 
few screenshots.)  The program has a bunch of one-way message traffic 
and some user-code memory allocation.  I then rerun with the receiver 
sleeping before jumping into action.  The messages back up and OMPI ends 
up allocating a bunch of memory.  The tools show you who (user or OMPI) 
is allocating how much memory and how big of a message backlog develops 
and how the sender starts stalling out (which is a good thing!).  
Anyhow, a useful exercise for me and hopefully helpful for you.


Wow. Thanks, Eugene. I definitely have to look into the Sun HPC 
ClusterTools. It looks as though it could be very informative.


What's the purpose of the 400 MB that MPI_Init has allocated?

The figure of in-flight messages vs time when the receiver sleeps is 
particularly interesting. The sender appears to stop sending and block 
once there are 30'000 in-flight messages. Has Open MPI detected the 
situation of congestion and begun waiting for the receiver to catch 
up? Or is it something simpler, such as the underlying write(2) call 
to the TCP socket blocking? If it's the first case, perhaps I could 
tune this threshold to behave better for my application.


Cheers,
Shaun


Re: [OMPI users] Debugging memory use of Open MPI

2009-04-14 Thread Chris Gottbrath


Shaun,

These all look like fine suggestions.

Another tool you should consider using for this problem or others like
it in the future is TotalView. It seems like there
are two related questions in your current troubleshooting scenario:

1. is the memory being used where you think it is?

2. is there really an imbalance between send/receives that is clogging
the unexpected queue?

I'd fire up the application under TotalView with memory debugging
enabled (one of the checkboxes
that will be right there when you start the debugger).

Once you have run to the point where you are seeing the memory
imbalance (you don't have to wait for it to get "bad"; it can just be
"noticeable"), stop all the processes by clicking stop.

Then open the memory debugging window from the "debug" menu item.

Then check the "memory statistics" view to make sure that you know
which MPI process
it is that is using more memory than the others.

Is the difference in the "heap memory"? I'm guessing it will be, but I
suppose there is always the possibility I'm
wrong so it is good to check. The memory statistics view should show
different kinds of memory.

Then select the process that is using more memory (we can call it the
process of interest) and run a "heap status" report.
This should tell you "where" your memory usage is coming from in your
program. You should get stack backtraces for all
the allocations. Depending on the magnitude of the memory usage, it may
"pop right out" in the numbers or you might have to dig a bit. I'm not
sure exactly what the backtrace of the kind of memory allocation you
are talking about would look like.

One great way to pick up on more "subtle" allocations is to compare
the memory usage of a process that is behaving correctly
and the process that is behaving incorrectly.

You can do that by selecting two processes and doing a "memory
comparison" -- that will basically filter out of view all the
allocations that are "the same (in terms of backtrace)" between the two
processes. If you have several hundred extra allocations from the
OpenMPI runtime on one of the processes, they should be easier to find
in the difference view. If the two processes have other differences
you'll get a longer list, but if you know your code you'll hopefully be
able to quickly eliminate the ones that are "expected differences".

It sounds like you have a strong working hypothesis. However, it might
be useful to run a memory leak check on the process of interest, as
that is another common way to get a process that starts taking up a
lot of extra memory. If your working hypothesis is correct, your
process of interest should come back "clean" in terms of leaks.

Another technique that TotalView will give you the ability to bring to
bear is inspection of the MPI message queues. This can be done, again,
while the processes are stopped once the memory imbalance is
"noticeable". Click on the tools menu and select "message queue graph".
That should bring up a graphical display of the state of the MPI
message queues in all of your MPI processes. If your hypothesis is
correct there should be an extremely large number of unexpected
messages shown for your process of interest.

One of the nice things about this view, when compared to the MPI
tracing tools mentioned previously, is that it will only show you the
messages
which are in the queues at the point in time where you paused all the
MPI tasks -- which may be a lot of messages, but it is likely to be
many orders of magnitude lower than the number of MPI messages
displayed on the trace.

TV is commercial but a 15 day evaluation license can be obtained here

http://www.totalviewtech.com/download/index.html

5 minute Videos on Memory debugging and MPI debugging (that go over
some, but probably not all of the things that I discussed above) are
available here

http://www.totalviewtech.com/support/videos.html#0

Don't hesitate to contact me if you want help; the folks at
supp...@totalviewtech.com can also help and are available during a
product evaluation.

Oh, and I should mention that there is a free version of TotalView  
available for students. :)

Cheers,
Chris

Chris Gottbrath, 508-652-7735 or 774-270-3155
Director of Product Management, TotalView Technologies  
chris.gottbr...@totalviewtech.com
--
Learn how to radically simplify your debugging:
http://www.totalviewtech.com/support/white_papers.html?id=163

On Apr 14, 2009, at 4:54 PM, Eugene Loh wrote:

> Shaun Jackman wrote:
>
>> Eugene Loh wrote:
>>
> On the other hand, I assume the memory imbalance we're talking
> about is rather severe.  Much more than 2500 bytes to be
> noticeable, I would think.  Is that really the situation you're
> imagining?

 The memory imbalance is drastic. I'm expecting 2 GB of memory use
 per process. The well-behaved processes (13/16) use the expected
 amount of memory; the remaining (3/16) misbehaving processes use
 more than twice as mu

[OMPI users] Problem with MPI_File_read() (2)

2009-04-14 Thread Jovana Knezevic
>
>  Hi Jovana,
>
>  825307441 is 0x31313131 in base 16 (hexadecimal), which is the string
>  `1111' in ASCII. MPI_File_read reads in binary values (not ASCII) just
>  as the standard functions read(2) and fread(3) do.
>
>  So, your program is fine; however, your data file (first.dat) is not.
>
>  Cheers,
>  Shaun
>

Thank you very much, Shaun! Ok, now I realise it's really stupid that
I was trying so hard to get the result that I wanted :-)
Well, it seems it's not a problem if I'm just reading with
MPI_File_read and writing with MPI_File_write, but if I try to do some
calculations with the data I read, it doesn't work... Do you maybe
have some idea how one can deal with this? (I have an input file for
my project - a much larger code than the sample I gave last time -
consisting of integers, doubles, characters and so on.) Maybe it's a
silly question, but can I somehow convert my input file into something
that works? :-) Any ideas would help.
Thanks again.

Cheers,
Jovana


Re: [OMPI users] Debugging memory use of Open MPI

2009-04-14 Thread Eugene Loh

Shaun Jackman wrote:

Wow. Thanks, Eugene. I definitely have to look into the Sun HPC 
ClusterTools. It looks as though it could be very informative.


Great.  And, I didn't mean to slight TotalView.  I'm just not familiar 
with it.



What's the purpose of the 400 MB that MPI_Init has allocated?


It's for... um, I don't know.  Let's see...

About a third of it appears to be
vt_open() -> VTThrd_open() -> VTGen_open
which I'm guessing is due to the VampirTrace instrumentation (maybe 
allocating the buffers into which the MPI tracing data is collected).  
It seems to go away if one doesn't collect message-tracing data.


Somehow, I can't see further into the library.  Hmm.  It does seem like 
a bunch.  The shared-memory area (which MPI_Init allocates for on-node 
message passing) is much smaller.  The remaining roughly 130 
Mbyte/process seems to be independent of the number of processes.


An interesting exercise for the reader.

The figure of in-flight messages vs time when the receiver sleeps is 
particularly interesting. The sender appears to stop sending and block 
once there are 30'000 in-flight messages. Has Open MPI detected the 
situation of congestion and begun waiting for the receiver to catch 
up? Or is it something simpler, such as the underlying write(2) call 
to the TCP socket blocking? If it's the first case, perhaps I could 
tune this threshold to behave better for my application.


This particular case is for two on-node processes.  So, no TCP is 
involved.  There appear to be about 55K allocations, which looks like 
the 85K peak minus the 30K at which the sender stalls.  So, maybe some 
resource got exhausted at that point.  Dunno.


Anyhow, this may be starting to get into more detail than you (or I) 
need to understand to address your problem.  It *is* interesting stuff, 
though.


Re: [OMPI users] Problem with MPI_File_read() (2)

2009-04-14 Thread Jeff Squyres
In general, files written by MPI_File_write (and friends) are only  
guaranteed to be readable by MPI_File_read (and friends).  So if you  
have an ASCII input file, or even a binary input file, you might need  
to read it in with traditional/unix file read functions and then write  
it out with MPI_File_write.  Then your parallel application will be  
able to use the various MPI_File_* functions to read the file at run- 
time.  Hence, there's no real generic "input file" -> "MPI file"  
convertor; you'll need to write your own that is specific to your data.


Make sense?
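
A stripped-down, single-process convertor might look something like  
this (the file names and the all-integers input format are just  
placeholders for illustration):

/* Illustrative only: turn a whitespace-separated ASCII file of integers
 * into a file that MPI_File_read(..., MPI_INT, ...) can consume.
 * Run it as a single process (e.g., mpirun -np 1 ./convert). */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    FILE    *in;
    MPI_File out;
    int      value;

    MPI_Init(&argc, &argv);

    in = fopen("input.txt", "r");                /* the ASCII source */
    if (in == NULL) {
        MPI_Finalize();
        return 1;
    }
    MPI_File_open(MPI_COMM_SELF, "first.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &out);

    while (fscanf(in, "%d", &value) == 1)        /* parse each integer */
        MPI_File_write(out, &value, 1, MPI_INT, MPI_STATUS_IGNORE);

    MPI_File_close(&out);
    fclose(in);
    MPI_Finalize();
    return 0;
}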


On Apr 14, 2009, at 6:50 PM, Jovana Knezevic wrote:


>
>  Hi Jovana,
>
>  825307441 is 0x31313131 in base 16 (hexadecimal), which is the string
>  `1111' in ASCII. MPI_File_read reads in binary values (not ASCII) just
>  as the standard functions read(2) and fread(3) do.
>
>  So, your program is fine; however, your data file (first.dat) is  
not.

>
>  Cheers,
>  Shaun
>

Thank you very much, Shaun! Ok, now I realise it's really stupid that
I was trying so hard to get the result that I wanted :-)
Well, it seems it's not a problem if I'm just reading with
MPI_File_read and writing with MPI_File_write, but if I try to do some
calculations with the data I read, it doesn't work... Do you maybe
have some idea how one can deal with this? (I have an input file for
my project - a much larger code than the sample I gave last time -
consisting of integers, doubles, characters and so on.) Maybe it's a
silly question, but can I somehow convert my input file into something
that works? :-) Any ideas would help.
Thanks again.

Cheers,
Jovana



--
Jeff Squyres
Cisco Systems



Re: [OMPI users] Debugging memory use of Open MPI

2009-04-14 Thread Chris Gottbrath

Eugene,

On Apr 14, 2009, at 7:10 PM, Eugene Loh wrote:
> Shaun Jackman wrote:
>
>> Wow. Thanks, Eugene. I definitely have to look into the Sun HPC  
>> ClusterTools. It looks as though it could be very informative.
>
> Great.  And, I didn't mean to slight TotalView.  I'm just not  
> familiar with it.
>


No slight taken -- at least by this "TotalView Guy".  Took a look at  
your document. Cool stuff.

Tracing is a different technique than interactive debugging and  
provides a lot of information. There are certainly some
problems that can only be looked at with tracing-type techniques.

Cheers,
Chris


Chris Gottbrath, 508-652-7735 or 774-270-3155
Director of Product Management, TotalView Technologies  
chris.gottbr...@totalviewtech.com
--
Learn how to radically simplify your debugging:
http://www.totalviewtech.com/support/white_papers.html?id=163




