On Aug 29, 2005, at 9:17 PM, Brad Penoff wrote:
>> PML: Pretty much the same as it was described in the paper. Its
>> interface is basically MPI semantics (i.e., it sits right under
>> MPI_SEND and the rest).
>>
>> BTL: Byte Transfer Layer; it's the next generation of the PTL. The
>> BTL is much simpler than the PTL, and removes all vestiges of any
>> MPI semantics that still lived in the PTL. It's a very simple
>> byte-mover layer, intended to make it quite easy to implement new
>> network interfaces.
> I was curious about what you meant by the removal of MPI semantics.
> Do you mean it simply has no notion of tags, ranks, etc.? In other
> words, does it simply put the data into some sort of format that the
> PML can then operate on with its own state machine?
I don't recall the details (it's been quite a while since I looked at
the PTL), but there was some semblance of MPI semantics that crept
down into the PTL interface itself. The BTL has none of that -- it's
purely a byte mover.
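
To make "purely a byte mover" concrete, here's a rough sketch of the
shape such an interface takes. The names and signatures below are made
up for illustration -- this is not the actual BTL API -- but the point
is that nothing MPI-ish (tags, ranks, communicators) appears anywhere:

    /* Illustrative sketch of a "pure byte mover" interface; names and
     * signatures are hypothetical, not the actual Open MPI BTL API.
     * The layer only knows about endpoints and opaque byte buffers. */

    #include <stddef.h>

    struct btl_endpoint;           /* opaque handle to a remote peer */

    struct btl_module {
        /* Send 'len' opaque bytes to a peer; completion is signaled
         * via the callback.  Any matching or ordering of messages is
         * someone else's (i.e., the PML's) problem. */
        int (*btl_send)(struct btl_module *btl,
                        struct btl_endpoint *peer,
                        const void *bytes, size_t len,
                        void (*complete_cb)(void *ctx), void *ctx);

        size_t btl_max_frag_size;  /* largest single send supported */
    };
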
> Also, say you had some underlying protocol that allowed unordered
> delivery of data (so not fully ordered like TCP); which "layer" would
> the notion of "order" be handled in? I'm guessing the PML would need
> some sort of sequence number attached to each message; is that right?
Correct. That was in the PML in the 2nd generation stuff and is still
in the PML in the 3rd generation stuff.
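
For illustration, the kind of bookkeeping that implies might look
something like this -- hypothetical header and names, not our actual
match header:

    /* Hypothetical sketch of PML-level ordering over an unordered
     * byte mover.  The PML stamps each fragment with a per-peer
     * sequence number and only delivers fragments in order. */

    #include <stdint.h>
    #include <stdbool.h>

    struct frag_header {
        uint16_t seq;           /* per-peer sequence number */
        /* ... tag, source rank, communicator id, etc. also here ... */
    };

    struct peer_state {
        uint16_t next_expected; /* next sequence number to deliver */
    };

    /* Returns true if the fragment is the next in order and can be
     * processed now; otherwise the caller queues it on an
     * out-of-order list until its 'seq' comes up. */
    static bool in_order(struct peer_state *peer,
                         const struct frag_header *hdr)
    {
        if (hdr->seq != peer->next_expected) {
            return false;
        }
        peer->next_expected++;  /* 16-bit wraparound is intentional */
        return true;
    }
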
>> BML: BTL Management Layer; this used to be part of the PML, but we
>> recently split it off into its own framework. It's mainly the
>> utility gorp of managing multiple BTL modules in a single process.
>> This was done because, when working with the next generation of
>> collectives, MPI-2 IO, and MPI-2 one-sided operations, we want the
>> ability to use the PML (which the collectives do today, for example)
>> or to dive right down and use the BTLs directly (i.e., cut out a
>> little latency).
> In the cases where the BML is required, does it cost extra memcpys?
Not to my knowledge. Galen -- can you fill in the details on this and
the rest of Brad's questions?
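
To illustrate why the indirection alone shouldn't imply copies:
conceptually, the BML is just a per-peer dispatch table over the BTL
modules, so the payload pointer passes straight through. Again, a
made-up sketch, not the actual BML interface:

    /* Hypothetical sketch of BML-style dispatch.  The BML tracks
     * which BTL modules can reach a peer and forwards each send to
     * one of them; the buffer pointer is handed down untouched. */

    #include <stddef.h>

    struct btl_endpoint;                 /* opaque peer handle */

    struct btl_module {                  /* as in the BTL sketch above */
        int (*btl_send)(struct btl_module *btl,
                        struct btl_endpoint *peer,
                        const void *bytes, size_t len,
                        void (*complete_cb)(void *ctx), void *ctx);
    };

    struct bml_peer {
        struct btl_module  **btls;       /* BTLs that reach this peer */
        struct btl_endpoint **endpoints; /* matching per-BTL endpoints */
        size_t nbtls;
        size_t next;                     /* round-robin cursor */
    };

    /* Pick a BTL (round-robin here; a real scheduler might weight by
     * bandwidth) and hand the same buffer straight down to it. */
    static int bml_send(struct bml_peer *peer, const void *bytes,
                        size_t len, void (*cb)(void *), void *ctx)
    {
        size_t i = peer->next++ % peer->nbtls;
        return peer->btls[i]->btl_send(peer->btls[i],
                                       peer->endpoints[i],
                                       bytes, len, cb, ctx);
    }
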
Thanks!
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/