Can you try v1.7.1?
We did a major Fortran revamp in the 1.7.x series to bring it up to speed with
MPI-3 Fortran stuff (at least mostly). I mention MPI-3 because the name-based
parameter passing stuff wasn't guaranteed until MPI-3. I think 1.7.x should
have gotten all the name-based parameter
Sorry for the huge latency in reply.
I assume that you know that static linking is not for the meek -- there are
many twists and turns and pitfalls (e.g.,
http://www.open-mpi.org/faq/?category=openfabrics#ib-static-mpi-apps).
Did you also try --disable-dlopen? That will disable OMPI's use of l
Hi,
The MPI Standard specifies 'ierror' as the name of the final argument in
most Fortran MPI calls. However, the Open MPI f90 module declares it as
'ierr'. This breaks things for anyone who wants to use keyword=value
syntax in their calls.
I just checked the latest 1.6.4 release and it is still broken.
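For reference, this is the kind of call site that breaks: a minimal sketch,
assuming an mpi module whose dummy arguments use the standard names (the
program itself is just a placeholder):

    program keyword_check
      use mpi
      implicit none
      integer :: rank, ierr

      ! Positional arguments work regardless of the dummy-argument names.
      call MPI_Init(ierr)
      ! Keyword arguments compile only if the module really declares the
      ! final dummy argument as 'ierror', as the standard specifies.
      call MPI_Comm_rank(comm=MPI_COMM_WORLD, rank=rank, ierror=ierr)
      call MPI_Finalize(ierr)
    end program keyword_check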
> > Intel has acquired the InfiniBand assets of QLogic
> > about a year ago. These SDR HCAs are no longer supported, but should
> > still work.
[Tom]
I guess the more important part of what I wrote is that " These SDR HCAs are no
longer supported" :)
>
> Do you mean they should work with the l
On Apr 24, 2013, at 8:58 AM, Dave Love wrote:
> "Elken, Tom" writes:
>
>>> I have seen it recommended to use psm instead of openib for QLogic cards.
>> [Tom]
>> Yes. PSM will perform better and be more stable when running OpenMPI
>> than using verbs.
>
> But unfortunately you won't be able
"Elken, Tom" writes:
>> I have seen it recommended to use psm instead of openib for QLogic cards.
> [Tom]
> Yes. PSM will perform better and be more stable when running OpenMPI
> than using verbs.
But unfortunately you won't be able to checkpoint.
> Intel has acquired the InfiniBand assets of
Thanks Jeff! That's very helpful.
Cheers!
Jacky
On Wed, Apr 24, 2013 at 10:56 AM, Jeff Squyres (jsquyres) <jsquy...@cisco.com> wrote:
> On Apr 24, 2013, at 10:24 AM, Thomas Watson wrote:
>
> > I still have a couple of questions to ask:
> >
> > 1. In both MPI_THREAD_FUNNELED and MPI_THREAD_
On Apr 24, 2013, at 10:24 AM, Thomas Watson wrote:
> I still have a couple of questions to ask:
>
> 1. In both MPI_THREAD_FUNNELED and MPI_THREAD_SERIALIZED modes, the MPI calls
> are serialized so that only one thread makes them at a time (in the former
> case, only the rank's main thread can make MPI calls, while in th
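For reference, a minimal sketch of requesting one of these thread levels and
checking what the library actually granted (everything beyond the MPI routines
and constants is just a placeholder):

    program thread_levels
      use mpi
      implicit none
      integer :: provided, ierr

      ! MPI_THREAD_FUNNELED: the process may be multi-threaded, but only
      ! the thread that called MPI_Init_thread may make MPI calls.
      call MPI_Init_thread(MPI_THREAD_FUNNELED, provided, ierr)

      ! The thread-level constants are ordered, so a simple comparison
      ! shows whether the library granted at least what was requested.
      if (provided < MPI_THREAD_FUNNELED) then
        print *, 'requested thread level not available; provided =', provided
      end if

      call MPI_Finalize(ierr)
    end program thread_levels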
On Wed, Apr 24, 2013 at 05:01:43PM +0400, Derbunovich Andrei wrote:
> Thank you to everybody for the suggestions and comments.
>
> I used a relatively small number of nodes (4400). It looks like the
> main issue is that I didn't disable dynamic component opening in my
> Open MPI build while kee
Hi Nick,
Thanks for your detailed info. In my case, I expect to spawn multiple
threads from each MPI process. I could use MPI_THREAD_FUNNELED
or MPI_THREAD_SERIALIZED to do so - I think MPI_THREAD_MULTIPLE is not
supported on InfiniBand, which I am using. Currently, I use OpenMPI +
Boost::Thread -
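To illustrate the MPI_THREAD_SERIALIZED contract (sketched here with OpenMP
threads rather than Boost::Thread, and with placeholder names throughout): any
thread may make MPI calls, but the application must ensure that no two of
those calls are in progress at the same time, for example by guarding them
with a critical section.

    subroutine serialized_sends(values, n, comm)
      use mpi
      implicit none
      integer, intent(in) :: n, comm
      integer, intent(in) :: values(n)
      integer :: i, ierr

      !$omp parallel do private(ierr)
      do i = 1, n
        !$omp critical (mpi_calls)
        ! Only one thread at a time is inside this region, which is
        ! exactly what MPI_THREAD_SERIALIZED requires of the caller.
        call MPI_Send(values(i), 1, MPI_INTEGER, 0, i, comm, ierr)
        !$omp end critical (mpi_calls)
      end do
      !$omp end parallel do
    end subroutine serialized_sends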
On Apr 24, 2013, at 6:01 AM, Derbunovich Andrei wrote:
> Thank you to everybody for the suggestions and comments.
>
> I used a relatively small number of nodes (4400). It looks like the
> main issue is that I didn't disable dynamic component opening in my
> Open MPI build while keeping the MPI in
Thank you to everybody for the suggestions and comments.
I used a relatively small number of nodes (4400). It looks like the
main issue is that I didn't disable dynamic component opening in my
Open MPI build while keeping the MPI installation directory on a
network file system. Oh my god!
I didn't che
Hello,
Currently I am investigating the new nonblocking collectives introduced in
MPI-3 which are implemented in Open MPI 1.7.1. As a first try I took
MPI_Ibcast.
According to the MPI-3 spec, my understanding is that MPI_Ibcast + MPI_Wait
should be equivalent to MPI_Bcast, except that the a
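For reference, a minimal sketch of the equivalence being described (buffer
size and contents are just placeholders):

    program ibcast_demo
      use mpi
      implicit none
      integer, parameter :: n = 1024
      integer :: buf(n), request, ierr, rank

      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
      if (rank == 0) buf = 42

      ! Start the nonblocking broadcast ...
      call MPI_Ibcast(buf, n, MPI_INTEGER, 0, MPI_COMM_WORLD, request, ierr)
      ! ... and complete it.  After MPI_Wait returns, buf should hold the
      ! same data as after a plain MPI_Bcast with the same arguments.
      call MPI_Wait(request, MPI_STATUS_IGNORE, ierr)

      call MPI_Finalize(ierr)
    end program ibcast_demo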
On Apr 23, 2013, at 8:45 PM, Mike Clark wrote:
> Hi,
>
> Just to follow up on this. We have managed to get OpenMPI to run at large
> scale, to do so we had to use aprun instead of using openmpi's mpirun
> command.
In general, using direct launch will be faster than going thru mpirun. However,