Re: [OMPI users] ierr vs ierror in F90 mpi module

2013-04-24 Thread Jeff Squyres (jsquyres)
Can you try v1.7.1? We did a major Fortran revamp in the 1.7.x series to bring it up to speed with MPI-3 Fortran stuff (at least mostly). I mention MPI-3 because the name-based parameter passing stuff wasn't guaranteed until MPI-3. I think 1.7.x should have gotten all the name-based parameter

Re: [OMPI users] LDFLAGS & static compilation & linking

2013-04-24 Thread Jeff Squyres (jsquyres)
Sorry for the huge latency in reply. I assume that you know that static linking is not for the meek -- there are many twists and turns and pitfalls (e.g., http://www.open-mpi.org/faq/?category=openfabrics#ib-static-mpi-apps). Did you also try --disable-dlopen? That will disable OMPI's use of l

[OMPI users] ierr vs ierror in F90 mpi module

2013-04-24 Thread W Spector
Hi, The MPI standard specifies 'ierror' as the name of the final argument in most Fortran MPI calls. However, the Open MPI f90 module defines it as 'ierr'. This breaks code that wants to use keyword=value syntax in its calls. I just checked the latest 1.6.4 release and it is still brok

Re: [OMPI users] QLogic HCA random crash after prolonged use

2013-04-24 Thread Elken, Tom
> > Intel acquired the InfiniBand assets of QLogic > > about a year ago. These SDR HCAs are no longer supported, but should > > still work. [Tom] I guess the more important part of what I wrote is that "These SDR HCAs are no longer supported" :) > > Do you mean they should work with the l

Re: [OMPI users] QLogic HCA random crash after prolonged use

2013-04-24 Thread Ralph Castain
On Apr 24, 2013, at 8:58 AM, Dave Love wrote: > "Elken, Tom" writes: > >>> I have seen it recommended to use psm instead of openib for QLogic cards. >> [Tom] >> Yes. PSM will perform better and be more stable when running OpenMPI >> than using verbs. > > But unfortunately you won't be able

Re: [OMPI users] QLogic HCA random crash after prolonged use

2013-04-24 Thread Dave Love
"Elken, Tom" writes: >> I have seen it recommended to use psm instead of openib for QLogic cards. > [Tom] > Yes. PSM will perform better and be more stable when running OpenMPI > than using verbs. But unfortunately you won't be able to checkpoint. > Intel has acquired the InfiniBand assets of

Re: [OMPI users] Using Boost::Thread for multithreading within OpenMPI processes

2013-04-24 Thread Thomas Watson
Thanks Jeff! That's very helpful. Cheers! Jacky On Wed, Apr 24, 2013 at 10:56 AM, Jeff Squyres (jsquyres) < jsquy...@cisco.com> wrote: > On Apr 24, 2013, at 10:24 AM, Thomas Watson > wrote: > > > I still have a couple of questions to ask: > > > > 1. In both MPI_THREAD_FUNNELED and MPI_THREAD_

Re: [OMPI users] Using Boost::Thread for multithreading within OpenMPI processes

2013-04-24 Thread Jeff Squyres (jsquyres)
On Apr 24, 2013, at 10:24 AM, Thomas Watson wrote: > I still have a couple of questions to ask: > > 1. In both MPI_THREAD_FUNNELED and MPI_THREAD_SERIALIZED modes, the MPI calls > are serialized at only one thread (in the former case, only the rank main > thread can make MPI calls, while in th
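For reference, the usual distinction behind this question: MPI_THREAD_FUNNELED means only the thread that initialized MPI may make MPI calls, while MPI_THREAD_SERIALIZED lets any thread call MPI as long as no two do so at the same time. A minimal C sketch (not taken from the thread; assumes an MPI C compiler wrapper such as mpicc) of requesting FUNNELED and checking what the library actually granted:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;

    /* FUNNELED: only the thread that initialized MPI (the "main"
       thread) may make MPI calls; other threads may run but must not
       touch MPI. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    if (provided < MPI_THREAD_FUNNELED) {
        fprintf(stderr, "MPI library only provides thread level %d\n",
                provided);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Worker threads would be spawned here; under FUNNELED they must
       hand any communication back to this thread.  Under SERIALIZED
       any thread could call MPI, but never two at once. */

    MPI_Finalize();
    return 0;
}

Note that MPI_Init_thread may legitimately return a lower level than the one requested, which is why checking 'provided' matters.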

Re: [OMPI users] OpenMPI at scale on Cray XK7

2013-04-24 Thread Nathan Hjelm
On Wed, Apr 24, 2013 at 05:01:43PM +0400, Derbunovich Andrei wrote: > Thank you to everybody for suggestions and comments. > > I used a relatively small number of nodes (4400). It looks like the > main issue is that I didn't disable dynamic component opening in my > Open MPI build while kee

Re: [OMPI users] Using Boost::Thread for multithreading within OpenMPI processes

2013-04-24 Thread Thomas Watson
Hi Nick, Thanks for your detailed info. In my case, I expect to spawn multiple threads from each MPI process. I could use MPI_THREAD_FUNNELED or MPI_THREAD_SERIALIZED to do so - I think MPI_THREAD_MULTIPLE is not supported on InfiniBand, which I am using. Currently, I use OpenMPI + Boost::Thread -
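A sketch of the SERIALIZED pattern mentioned above (an illustration under stated assumptions, not Jacky's actual code: plain pthreads stand in for Boost::Thread, and the worker/reporting scheme is invented for the example). The point is simply that every MPI call made by any thread is guarded by one process-wide mutex, which sidesteps the MPI_THREAD_MULTIPLE support question raised above:

#include <mpi.h>
#include <pthread.h>
#include <stdio.h>

#define NWORKERS 4

static pthread_mutex_t mpi_lock = PTHREAD_MUTEX_INITIALIZER;
static int rank;

/* Workers on non-root ranks each report one result to rank 0.  The
   mutex ensures at most one thread per process is inside MPI at a
   time, which is all MPI_THREAD_SERIALIZED allows us to rely on. */
static void *worker(void *arg)
{
    int id = (int)(long)arg;
    int result = rank * 100 + id;          /* stand-in for real work */

    pthread_mutex_lock(&mpi_lock);
    MPI_Send(&result, 1, MPI_INT, 0, id, MPI_COMM_WORLD);
    pthread_mutex_unlock(&mpi_lock);
    return NULL;
}

int main(int argc, char **argv)
{
    int provided, size;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_SERIALIZED, &provided);
    if (provided < MPI_THREAD_SERIALIZED)
        MPI_Abort(MPI_COMM_WORLD, 1);

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank != 0) {
        /* Non-root ranks spawn worker threads that talk to MPI. */
        pthread_t threads[NWORKERS];
        for (long i = 0; i < NWORKERS; i++)
            pthread_create(&threads[i], NULL, worker, (void *)i);
        for (int i = 0; i < NWORKERS; i++)
            pthread_join(threads[i], NULL);
    } else {
        /* Rank 0's main thread collects everything; it spawns no
           workers, so no other thread contends for MPI in this
           process. */
        for (int n = 0; n < (size - 1) * NWORKERS; n++) {
            int value;
            MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("got %d\n", value);
        }
    }

    MPI_Finalize();
    return 0;
}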

Re: [OMPI users] OpenMPI at scale on Cray XK7

2013-04-24 Thread Ralph Castain
On Apr 24, 2013, at 6:01 AM, Derbunovich Andrei wrote: > Thank you to everybody for suggestions and comments. > > I used a relatively small number of nodes (4400). It looks like the > main issue is that I didn't disable dynamic component opening in my > Open MPI build while keeping MPI in

Re: [OMPI users] OpenMPI at scale on Cray XK7

2013-04-24 Thread Derbunovich Andrei
Thank you to everybody for suggestions and comments. I used a relatively small number of nodes (4400). It looks like the main issue is that I didn't disable dynamic component opening in my Open MPI build while keeping the MPI installation directory on a network file system. Oh my god! I didn't che

[OMPI users] Open MPI 1.7.1 and nonblocking bcast questions

2013-04-24 Thread Christoph Niethammer
Hello, Currently I am investigating the new nonblocking collectives introduced in MPI-3, which are implemented in Open MPI 1.7.1. As a first try I took MPI_Ibcast. According to the MPI-3 spec, my understanding is that MPI_Ibcast + MPI_Wait should be equivalent to MPI_Bcast - except that the a
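For anyone reproducing this, the equivalence being tested looks roughly like the following (a minimal C sketch assuming an MPI-3 library such as Open MPI 1.7.1 and the mpicc wrapper; not Christoph's actual test case):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, buf[4] = {0, 0, 0, 0};
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) { buf[0] = 1; buf[1] = 2; buf[2] = 3; buf[3] = 4; }

    /* Nonblocking broadcast: the data is only guaranteed to have
       arrived once the corresponding MPI_Wait (or a successful
       MPI_Test) completes. */
    MPI_Ibcast(buf, 4, MPI_INT, 0, MPI_COMM_WORLD, &req);
    /* ...independent computation could overlap here... */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    /* Blocking form that should yield the same buffer contents:
       MPI_Bcast(buf, 4, MPI_INT, 0, MPI_COMM_WORLD); */

    printf("rank %d: buf[0]=%d\n", rank, buf[0]);
    MPI_Finalize();
    return 0;
}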

Re: [OMPI users] OpenMPI at scale on Cray XK7

2013-04-24 Thread Ralph Castain
On Apr 23, 2013, at 8:45 PM, Mike Clark wrote: > Hi, > > Just to follow up on this. We have managed to get OpenMPI to run at large > scale; to do so we had to use aprun instead of openmpi's mpirun > command. In general, using direct launch will be faster than going through mpirun. However,