Yes and no re the dependency. Without async_modex, the cutoff will reduce your
memory footprint but won't result in any launch performance benefit.
Likewise, turning on async_modex without being over the cutoff won't do you
any good, as you'll immediately demand all the modex data.
So they are kinda related.
On Feb 4, 2016, at 9:18 AM, Ralph Castain wrote:
>
> +1, with an addition and modification:
>
> * add the async_modex on by default
> * make the change in master and let it "stew" for a while before moving to
> 2.0. I believe only Cisco has been running MTT against that setup so far.
It's been
Hi Durga
As an alternative, you could implement a libfabric provider for your
network. In theory, if you can implement the reliable datagram endpoint
type on your network and a tag matching mechanism, you could then just use
the ofi mtl and not have to do much, if anything, in Open MPI or MPICH, etc.
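A rough consumer-side sketch of that requirement (the FI_VERSION number and the exact capability bits below are illustrative assumptions, not the ofi mtl's precise hint set): ask libfabric for a reliable-datagram endpoint with tagged messaging and see whether any provider, yours included, can satisfy it.

#include <stdio.h>
#include <rdma/fabric.h>

int main(void)
{
    /* hints describing what a tag-matching MTL wants from a provider */
    struct fi_info *hints = fi_allocinfo();
    struct fi_info *info = NULL;

    hints->ep_attr->type = FI_EP_RDM;             /* reliable datagram endpoints */
    hints->caps = FI_TAGGED | FI_SEND | FI_RECV;  /* provider-side tag matching  */

    if (0 == fi_getinfo(FI_VERSION(1, 5), NULL, NULL, 0, hints, &info)) {
        for (struct fi_info *p = info; p != NULL; p = p->next)
            printf("usable provider: %s\n", p->fabric_attr->prov_name);
        fi_freeinfo(info);
    } else {
        printf("no provider offers FI_EP_RDM + FI_TAGGED on this node\n");
    }
    fi_freeinfo(hints);
    return 0;
}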
+1, with an addition and modification:
* add the async_modex on by default
* make the change in master and let it "stew" for a while before moving to
2.0. I believe only Cisco has been running MTT against that setup so far.
On Thu, Feb 4, 2016 at 6:04 AM, Gilles Gouaillardet <
gilles.gouaillar...
+1
should we also enable sparse groups by default?
(or at least on master, and then v2.x later)
Cheers,
Gilles
On Thursday, February 4, 2016, Joshua Ladd wrote:
> +1
>
>
> On Wed, Feb 3, 2016 at 9:54 PM, Jeff Squyres (jsquyres) <jsquy...@cisco.com> wrote:
>
>> WHAT: Decrease default value of mpi_add_procs_cutoff from 1024 to 32
+1
On Wed, Feb 3, 2016 at 9:54 PM, Jeff Squyres (jsquyres)
wrote:
> WHAT: Decrease default value of mpi_add_procs_cutoff from 1024 to 32
>
> WHY: The "partial add procs" behavior is supposed to be a key feature of
> v2.0.0
>
> WHERE: ompi/mpi/runtime/ompi_mpi_params.c
>
> TIMEOUT: Next Tuesday
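If you want to check which default your own build ends up with, a small sketch using the MPI_T control-variable interface can read it back at run time; this assumes Open MPI exposes the parameter as a cvar literally named "mpi_add_procs_cutoff" and that it is an unsigned int.

#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    int provided, ncvar;
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
    MPI_T_cvar_get_num(&ncvar);

    for (int i = 0; i < ncvar; i++) {
        char name[256], desc[256];
        int name_len = sizeof(name), desc_len = sizeof(desc);
        int verbosity, binding, scope, count;
        MPI_Datatype dtype;
        MPI_T_enum enumtype;

        MPI_T_cvar_get_info(i, name, &name_len, &verbosity, &dtype,
                            &enumtype, desc, &desc_len, &binding, &scope);
        if (0 == strcmp(name, "mpi_add_procs_cutoff")) {
            MPI_T_cvar_handle handle;
            unsigned int value = 0;   /* assumed to be an unsigned int cvar */
            MPI_T_cvar_handle_alloc(i, NULL, &handle, &count);
            MPI_T_cvar_read(handle, &value);
            printf("mpi_add_procs_cutoff = %u\n", value);
            MPI_T_cvar_handle_free(&handle);
        }
    }
    MPI_T_finalize();
    return 0;
}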
+1 on what Gilles said. :-)
Check out this part of the v1.10 README file:
https://github.com/open-mpi/ompi-release/blob/v1.10/README#L585-L625
Basically:
- PML is the back-end to functions like MPI_Send and MPI_Recv.
- The ob1 PML uses BTL plugins in a many-of-many relationship to potentially use multiple networks at the same time (see the toy sketch below).
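To make that layering concrete, here is a toy model; the names toy_btl_t and toy_pml_send are invented for illustration and are not Open MPI code. An upper "PML"-style layer decides which lower "BTL"-style transport moves the bytes for a given peer.

#include <stdio.h>

typedef struct {
    const char *name;
    int  (*reaches)(int peer);                       /* can this BTL reach the peer? */
    void (*put_bytes)(int peer, const void *b, size_t n);
} toy_btl_t;

static int  shm_reaches(int peer) { return 0 == peer; }     /* pretend peer 0 is local */
static void shm_put(int peer, const void *b, size_t n)
{ (void)b; printf("shm -> peer %d: %zu bytes\n", peer, n); }

static int  tcp_reaches(int peer) { (void)peer; return 1; } /* tcp reaches everyone */
static void tcp_put(int peer, const void *b, size_t n)
{ (void)b; printf("tcp -> peer %d: %zu bytes\n", peer, n); }

static toy_btl_t btls[] = { { "shm", shm_reaches, shm_put },
                            { "tcp", tcp_reaches, tcp_put } };

/* the "PML": pick a usable transport for this peer and hand it the bytes */
static void toy_pml_send(int peer, const void *buf, size_t len)
{
    for (size_t i = 0; i < sizeof(btls) / sizeof(btls[0]); i++) {
        if (btls[i].reaches(peer)) {
            btls[i].put_bytes(peer, buf, len);
            return;
        }
    }
}

int main(void)
{
    const char msg[] = "hello";
    toy_pml_send(0, msg, sizeof(msg));   /* lands on "shm" */
    toy_pml_send(3, msg, sizeof(msg));   /* falls back to "tcp" */
    return 0;
}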
+1 on what Gilles said. A little more detail:
1. You can simply write your own "MPI_Bcast" and interpose your version before
Open MPI's version. E.g.:
-
$ cat your_program.c
#include <mpi.h>

int MPI_Bcast(void *buffer, int count, MPI_Datatype datatype,
              int root, MPI_Comm comm)
{
    /* do the broadcast your own way here, or fall back to Open MPI's
       implementation through the PMPI entry point */
    return PMPI_Bcast(buffer, count, datatype, root, comm);
}
Hi,
it is difficult to answer such a generic request.
MPI symbols (MPI_Bcast, ...) are defined as weak symbols, so the simplest
option is to redefine them and implement them the way you like. You are
always able to invoke PMPI_Bcast if you want to invoke the Open MPI
implementation.
a more ompi-
Hello
Using a new network interface and its ad-hoc routing algorithms, I
would like to try my own custom implementation of some collective
communication patterns (MPI_Bcast, MPI_Alltoall, ...) without expanding
those collective communications as a series of point-to-point ones based
on a given
Durga,
did you confuse PML and MTL?
Basically, a BTL (Byte Transfer Layer) is used with "primitive"
interconnects that can only send bytes.
(e.g. if you need to transmit a tagged message, it is up to you to
send/recv the tag and manually match the tag on the receiver side so you
can put the
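A toy sketch of that last point (the wire_hdr_t layout and helper names are invented, not anything from the Open MPI source): when the transport only moves raw bytes, the layer above has to carry the tag in its own header and match it on the receive side before the payload can be placed in the right user buffer.

#include <stdint.h>
#include <string.h>
#include <stdio.h>

typedef struct {
    int32_t tag;       /* MPI-level tag, carried explicitly on the wire */
    int32_t len;       /* payload length in bytes */
} wire_hdr_t;

/* sender side: glue header + payload into one byte stream */
static size_t pack_tagged(void *wire, int32_t tag, const void *payload, int32_t len)
{
    wire_hdr_t hdr = { tag, len };
    memcpy(wire, &hdr, sizeof(hdr));
    memcpy((char *)wire + sizeof(hdr), payload, (size_t)len);
    return sizeof(hdr) + (size_t)len;
}

/* receiver side: only deliver if the tag matches what we are waiting for */
static int unpack_if_match(const void *wire, int32_t wanted_tag,
                           void *out, int32_t out_len)
{
    wire_hdr_t hdr;
    memcpy(&hdr, wire, sizeof(hdr));
    if (hdr.tag != wanted_tag || hdr.len > out_len)
        return 0;                       /* leave it queued as "unexpected" */
    memcpy(out, (const char *)wire + sizeof(hdr), (size_t)hdr.len);
    return 1;
}

int main(void)
{
    char wire[64], out[32] = "";
    pack_tagged(wire, 42, "payload", 8);
    printf("tag 7 matched:  %d\n", unpack_if_match(wire, 7, out, sizeof(out)));
    printf("tag 42 matched: %d (%s)\n", unpack_if_match(wire, 42, out, sizeof(out)), out);
    return 0;
}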
Hi developers
I am trying to add support for a new (proprietary) RDMA-capable fabric
to Open MPI and have the following question:
As I understand it, some networks are implemented as a PML framework and
some are implemented as a BTL framework. It seems there is even
overlap, as Myrinet seems to exist