On Jun 26, 2007, at 6:08 PM, Tim Prins wrote:
Some time ago you were working on moving the modex out of the pml and cleaning it up a bit. Is this work still ongoing? The reason I ask is that I am currently working on integrating the RSL, and would rather build on the new code rather than th
George Bosilca wrote:
Karol,
We (the folks at UTK) implemented a SCTP BTL. It's not yet in the
trunk, but it will get there shortly. Instead of starting from
scratch, it might be a good idea to start directly from there.
Thanks for the reply. This BTL would definitely be worth taking a look at
Gleb,
Simplifying the code and getting better performance is always a good approach (at least from my perspective). However, your patch still dispatches the messages over the BTLs in a round-robin fashion, which doesn't look to me like the best approach. How about merging your patch and mine
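(George's patch is not quoted in this excerpt, so purely as an illustration of the distinction being discussed: a round-robin split hands every BTL the same share of a message, whereas a weighted split sizes each fragment by the BTL's relative bandwidth. All names below are invented; this is not the actual PML/BML scheduling code.)

/* Toy illustration only -- not Open MPI's scheduler; all names invented. */
#include <stddef.h>

struct toy_btl {
    double weight;              /* relative bandwidth share, sums to 1.0 */
};

/* Round-robin: every BTL gets the same fragment size, fast or slow. */
static size_t rr_share(size_t msg_size, size_t num_btls)
{
    return msg_size / num_btls;
}

/* Weighted: each BTL gets a share proportional to its bandwidth. */
static size_t weighted_share(size_t msg_size, const struct toy_btl *btl)
{
    return (size_t)(msg_size * btl->weight);
}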
Karol,
We (the folks at UTK) implemented a SCTP BTL. It's not yet in the
trunk, but it will get there shortly. Instead of starting from
scratch, it might be a good idea to start directly from there.
To answer your question, the TCP BTL uses a copy of the original iovec. After each write, t
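(For what it's worth, the usual pattern for keeping a copy of the iovec and advancing it after a partial writev() looks roughly like the sketch below; this is a generic illustration, not the actual btl_tcp code.)

#include <sys/types.h>
#include <sys/uio.h>
#include <errno.h>
#include <stdbool.h>

/* Advance a *copy* of the iovec array past the bytes already written,
 * so the next writev() resumes exactly where the previous one stopped. */
static void advance_iov(struct iovec **iov, int *iovcnt, size_t written)
{
    while (*iovcnt > 0 && written >= (*iov)->iov_len) {
        written -= (*iov)->iov_len;
        (*iov)++;
        (*iovcnt)--;
    }
    if (*iovcnt > 0) {
        (*iov)->iov_base = (char *)(*iov)->iov_base + written;
        (*iov)->iov_len -= written;
    }
}

/* Returns true once everything is written, false on a hard error or when
 * the socket would block (the caller retries later from the saved iovec). */
static bool write_from_iov_copy(int fd, struct iovec *iov, int iovcnt)
{
    while (iovcnt > 0) {
        ssize_t n = writev(fd, iov, iovcnt);
        if (n < 0) {
            if (errno == EINTR)
                continue;
            return false;
        }
        advance_iov(&iov, &iovcnt, (size_t)n);
    }
    return true;
}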
Hello Peter,
in 1.2.2 the allgatherv is called from the coll basic component, and is implemented as a gatherv followed by a broadcast. The broadcast is executed with a single element of MPI_TYPE_INDEXED. The decision function in coll tuned makes the mistake of using a segmented broadcast algorithm for th
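(As a rough sketch of what that gatherv-plus-broadcast scheme looks like, assuming rank 0 as root and with error handling stripped; this is an illustration of the described algorithm, not the coll basic source.)

#include <mpi.h>

/* Sketch: allgatherv as a gatherv to rank 0 followed by a broadcast of the
 * whole result described as one MPI_Type_indexed element. */
static int allgatherv_sketch(void *sbuf, int scount, MPI_Datatype sdtype,
                             void *rbuf, int *rcounts, int *disps,
                             MPI_Datatype rdtype, MPI_Comm comm)
{
    int size, err;
    MPI_Datatype full;

    MPI_Comm_size(comm, &size);

    /* Step 1: collect every rank's contribution on rank 0. */
    err = MPI_Gatherv(sbuf, scount, sdtype,
                      rbuf, rcounts, disps, rdtype, 0, comm);
    if (MPI_SUCCESS != err) return err;

    /* Step 2: describe the (possibly non-contiguous) result with a single
     * indexed datatype and broadcast it as one element. */
    MPI_Type_indexed(size, rcounts, disps, rdtype, &full);
    MPI_Type_commit(&full);
    err = MPI_Bcast(rbuf, 1, full, 0, comm);
    MPI_Type_free(&full);
    return err;
}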
Hello... I'm a student at the University of British Columbia working on
creating an SCTP BTL for Open MPI. I have a simple implementation
working that uses SCTP's one-to-one style sockets for sending messages.
The same writev()/readv() calls that are used in the TCP BTL are used in
this new BTL.
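(For context, a one-to-one style SCTP socket is created just like a TCP socket, only with IPPROTO_SCTP, which is why the existing writev()/readv() path carries over unchanged. A minimal blocking sketch, not the BTL code itself:)

#include <sys/socket.h>
#include <netinet/in.h>     /* IPPROTO_SCTP */
#include <unistd.h>

/* Open a one-to-one (TCP-like) SCTP connection to a peer; a real BTL would
 * make this non-blocking and event driven. */
static int sctp_one_to_one_connect(const struct sockaddr_in *peer)
{
    int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_SCTP);
    if (fd < 0)
        return -1;
    if (connect(fd, (const struct sockaddr *)peer, sizeof(*peer)) < 0) {
        close(fd);
        return -1;
    }
    /* From here on, writev()/readv() behave just as on a TCP socket. */
    return fd;
}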
On Fri, Jun 22, 2007 at 04:52:45PM -0400, Jeff Squyres wrote:
> On Jun 20, 2007, at 8:29 AM, Jeff Squyres wrote:
>
> >1. btl_*_min_send_size is used to decide when to stop striping a
> >message across multiple BTLs. Is there a reason that we don't
> >just use eager_limit for this value? It
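(The question boils down to whether the scheduler needs a separate cutoff below which a message is no longer split across BTLs, or whether the existing eager limit could serve that role. A conceptual illustration only, not the actual r2/BML code:)

#include <stddef.h>

/* Conceptual: pick the next fragment size for one BTL, and stop striping
 * once the remainder drops below the cutoff (min_send_size today; the
 * question above is whether eager_limit could be reused instead). */
static size_t next_fragment(size_t remaining, size_t cutoff, size_t max_send)
{
    if (remaining <= cutoff)
        return remaining;           /* too small to keep striping */
    return remaining < max_send ? remaining : max_send;
}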
Hello all,
I temporarily worked around my earlier problem by using synchronous communication and shifting the initialization into the first call of a collective operation. Nevertheless, I found a performance bug in btl_openib. When I execute the attached sendrecv.c on 4 (or more) nodes of
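(The attachment is not reproduced in this excerpt; a minimal ping-pong of the kind described, which is my reconstruction and not the actual sendrecv.c, would look roughly like this:)

#include <mpi.h>
#include <stdio.h>
#include <string.h>

#define NITER   1000
#define MSGSIZE (64 * 1024)

/* Pairs of ranks (0-1, 2-3, ...) bounce a fixed-size message back and forth
 * and report the average round-trip time. */
int main(int argc, char **argv)
{
    static char buf[MSGSIZE];
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    memset(buf, 0, sizeof(buf));

    int peer = rank ^ 1;
    double t0 = MPI_Wtime();
    for (int i = 0; i < NITER && peer < size; i++) {
        if (rank & 1) {
            MPI_Recv(buf, MSGSIZE, MPI_CHAR, peer, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSGSIZE, MPI_CHAR, peer, 0, MPI_COMM_WORLD);
        } else {
            MPI_Send(buf, MSGSIZE, MPI_CHAR, peer, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSGSIZE, MPI_CHAR, peer, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }
    }
    if (0 == rank)
        printf("avg round trip: %g us\n", (MPI_Wtime() - t0) * 1e6 / NITER);

    MPI_Finalize();
    return 0;
}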
I assume you mean something like mca_coll_foo_init_query() for your
initialization function. And I'm guessing you're exchanging some sort
of address information for your network here?
What I actually did in my collective component was to use the PML's modex (module exchange) facility, defined in om
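(A rough sketch of that modex pattern follows; the header path, function names and signatures are my recollection of the 1.2-era tree and may differ in other versions, and the payload struct is invented.)

#include <stdint.h>
#include <stddef.h>
#include "ompi/mca/pml/base/pml_base_module_exchange.h"   /* assumed location */

struct my_coll_addr { uint32_t ip; uint16_t port; };       /* invented payload */

/* Each process publishes its address blob during component init ... */
static int publish_my_address(mca_base_component_t *comp,
                              const struct my_coll_addr *addr)
{
    return mca_pml_base_modex_send(comp, addr, sizeof(*addr));
}

/* ... and later pulls a peer's blob back out of the modex. */
static int lookup_peer_address(mca_base_component_t *comp, ompi_proc_t *proc,
                               struct my_coll_addr **addr_out)
{
    size_t size;
    return mca_pml_base_modex_recv(comp, proc, (void **)addr_out, &size);
}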
Hello,
When running the Intel MPI Benchmark (IMB) on our cluster (Sun X2200M2 nodes, Voltaire DDRx InfiniBand, OFED-1.1) we see rather strange (i.e., unreasonably bad) performance for the Allgatherv part of the IMB when using OpenMPI-1.2.2. The performance figures reported by the IMB are provided
Hello,
I'm working on a collective component and need point-to-point communication during module initialization. As the BTL is initialized prior to the collectives, I tried to use send and recv the way MPI_Send/MPI_Recv do:
err = MCA_PML_CALL(send(buf, size, MPI_CHAR, to_id,
COLL_SCI_TAG, MCA_P
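(The call above is cut off in this excerpt. Under the assumption that the signatures match the 1.2-era ompi/mca/pml/pml.h, a complete send/recv pair of that form would look something like the fragment below; this is my reconstruction, not the poster's actual code, and from_id is an invented variable for the receive side.)

/* Assumed completion of the pattern above; standard-mode send plus a
 * matching receive through the PML macro. */
err = MCA_PML_CALL(send(buf, size, MPI_CHAR, to_id, COLL_SCI_TAG,
                        MCA_PML_BASE_SEND_STANDARD, comm));
if (OMPI_SUCCESS != err) { /* bail out of module init */ }

err = MCA_PML_CALL(recv(buf, size, MPI_CHAR, from_id, COLL_SCI_TAG,
                        comm, MPI_STATUS_IGNORE));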