The two files have a slightly different format and completely different
meaning. The hostfile specifies how many slots are on a node. The rankfile
specifies a rank and what node/slot it is to be mapped onto.
Rankfiles can use relative node indexing and refer to nodes received from a
resource manager.
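For illustration, a minimal pair (node names and slot counts are made up):

  hostfile:
    node0 slots=2
    node1 slots=2

  rankfile:
    rank 0=node0 slot=0
    rank 1=node0 slot=1
    rank 2=node1 slot=0
    rank 3=node1 slot=1

With relative indexing, "rank 0=+n0 slot=0" means "slot 0 on the first node
the resource manager handed us", without naming the node in advance.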
A few addendums in no particular order...
1. The ompi/ tree is the MPI layer. It's the top layer in the stack.
It uses ORTE and OPAL for various things.
2. The PML (point-to-point messaging layer) is the stuff right behind
MPI_SEND, MPI_RECV, and friends (see the sketch below). We have two main
PMLs: OB1 and CM.
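A minimal sketch of the calls that sit directly on top of the PML (plain
MPI C; assumes a launch with at least two ranks):

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int rank, value = 42;

      MPI_Init(&argc, &argv);                /* brings up OPAL, ORTE, then the MPI layer */
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      if (rank == 0) {
          /* this call is handed to whichever PML was selected (e.g., OB1) */
          MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
      } else if (rank == 1) {
          MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
          printf("rank 1 received %d\n", value);
      }

      MPI_Finalize();
      return 0;
  }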
Is the rcache/rb component (on the trunk) defunct?
If so, can we remove it?
--
Jeff Squyres
Cisco Systems
In order to use "mpirun --rankfile", I also need to specify
hosts/hostlist. But that information is redundant with what I provide
in the rankfile. So, from a user's point of view, this strikes me as
broken. Yes? Should I file a ticket, or am I missing something here
about this functionality?
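For concreteness, the redundancy looks like this (file and host names
hypothetical):

  mpirun -np 2 --host node0,node1 --rankfile myranks ./a.out

where myranks already names the same nodes:

  rank 0=node0 slot=0
  rank 1=node1 slot=0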
Hi Ralph,
Thanks for the response. And yes, this gives me a good starting point. Thanks.
Leo.P
From: Ralph Castain
To: Open MPI Developers
Sent: Thursday, 18 June, 2009 9:26:46 PM
Subject: Re: [OMPI devel] some question about OMPI communication infrastructure
On Jun 18, 2009, at 11:25 AM, Sylvain Jeaugey wrote:
My problem seems to be related to library generation through RPM, not to
1.3.2 or to the patch.
I'm not sure I understand -- is there something we need to fix in our
SRPM?
--
Jeff Squyres
Cisco Systems
Ok, never mind.
My problem seems to be related to library generation through RPM, not to
1.3.2 or to the patch.
Sylvain
On Thu, 18 Jun 2009, Sylvain Jeaugey wrote:
Hi all,
Until Open MPI 1.3 (maybe 1.3.1), I used to find it convenient to be able to
move a library from its "normal" place (either /usr or /opt) to somewhere
else (i.e. my NFS home account) to be able to try things only on my account.
FWIW, using OPAL_PREFIX seems to work for me on the trunk and the head
of the v1.3 branch...?
On Jun 18, 2009, at 4:55 AM, Sylvain Jeaugey wrote:
Hi all,
Until Open MPI 1.3 (maybe 1.3.1), I used to find it convenient to be able
to move a library from its "normal" place (either /usr or /opt) to
somewhere else (i.e. my NFS home account) to be able to try things only on
my account.
On Jun 17, 2009, at 9:27 AM, Leo P. wrote:
I found the Open MPI community filled with cooperative and helpful
people, and would like to thank them through this email [Nik,
Eugene, Ralph, Mitchel and others].
Thanks! We try.
Also, I would like to suggest one or maybe two things.
1. First of
Paul H. Hargrove wrote:
Jeff Squyres wrote:
[snip]
Erm -- that's weird. So when you extract the tarballs,
atomic-amd64-linux.s is non-empty (as it should be), but after a
failed build, its file length is 0?
Notice that during the build process, we symlink atomic-amd64-linux.s
to atomic-asm.S.
Hi Leo
The MPI communication code is contained in the ompi/mca/btl code area. The
BTLs (Byte Transfer Layer components) actually move the message data. Each
BTL is responsible for opening its own connections - ORTE has nothing to
do with it, except to transport the out-of-band (OOB) messages that support
creating those connections.
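A quick way to see which BTLs are in a given installation, and to pick one
at run time (output abridged; the component list depends on how the build
was configured):

  $ ompi_info | grep "MCA btl"
       MCA btl: self (...)
       MCA btl: sm (...)
       MCA btl: tcp (...)

  $ mpirun --mca btl tcp,self -np 2 ./a.out   # restrict data movement to the TCP BTL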
Hi Everyone,
I wanted to ask some questions about things I am having trouble understanding.
1. As far as I understand the MPI_INIT function, I assumed MPI_INIT
typically procures the resources required, including the sockets. But now,
as I understand from the documentation, open
Hi all,
Until Open MPI 1.3 (maybe 1.3.1), I used to find it convenient to be able
to move a library from its "normal" place (either /usr or /opt) to
somewhere else (i.e. my NFS home account) to be able to try things only on
my account.
So, I used to set OPAL_PREFIX to the root of the Open MPI installation.
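A sketch of that relocation workflow (paths hypothetical):

  # tree was originally configured with --prefix=/opt/openmpi
  cp -a /opt/openmpi $HOME/openmpi

  export OPAL_PREFIX=$HOME/openmpi
  export PATH=$OPAL_PREFIX/bin:$PATH
  export LD_LIBRARY_PATH=$OPAL_PREFIX/lib:$LD_LIBRARY_PATH

  mpirun -np 2 ./a.out    # paths should now resolve under $HOME/openmpi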