+1 on what Gilles said.  A little more detail:

1. You can simply write your own "MPI_Bcast" and interpose your version before 
Open MPI's version.  E.g.,:

-----
$ cat your_program.c
#include <mpi.h>

int MPI_Bcast(void *buffer, int count, MPI_Datatype datatype,
              int root, MPI_Comm comm)
{
    // Whatever you want your Bcast to do
    return MPI_SUCCESS;
}

int main(int argc, char* argv[])
{
    // Illustrative arguments; use whatever buffer/count/type you need
    int value = 0;

    MPI_Init(NULL, NULL);
    // This call now resolves to your MPI_Bcast above, not Open MPI's
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}
-----
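
To build and run the example (assuming the usual mpicc wrapper and mpirun 
launcher):

-----
$ mpicc your_program.c -o your_program
$ mpirun -np 4 ./your_program
-----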

If you need to call MPI functions inside your MPI_Bcast, call them with "PMPI" 
instead of "MPI".  E.g., call "PMPI_Send(...)" instead of "MPI_Send(...)".  
This guarantees that the back-end Open MPI versions of those functions will be 
called instead of your versions (if you end up overriding more than MPI_Bcast, 
for example).
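
For example, here's a minimal sketch of what an interposed MPI_Bcast that falls 
back on PMPI point-to-point calls might look like (the linear 
root-sends-to-everyone algorithm and the tag value are purely illustrative):

-----
#include <mpi.h>

int MPI_Bcast(void *buffer, int count, MPI_Datatype datatype,
              int root, MPI_Comm comm)
{
    int rank, size, ret;

    // Use PMPI_* so Open MPI's versions run, not any of our overrides
    PMPI_Comm_rank(comm, &rank);
    PMPI_Comm_size(comm, &size);

    if (rank == root) {
        // Naive linear broadcast: root sends to every other rank
        for (int i = 0; i < size; ++i) {
            if (i == root) continue;
            ret = PMPI_Send(buffer, count, datatype, i, 1234, comm);
            if (MPI_SUCCESS != ret) return ret;
        }
    } else {
        ret = PMPI_Recv(buffer, count, datatype, root, 1234, comm,
                        MPI_STATUS_IGNORE);
        if (MPI_SUCCESS != ret) return ret;
    }

    return MPI_SUCCESS;
}
-----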

I showed a trivial example above where everything is in one file -- but you can 
also do more complicated examples where you group all your MPI_* function 
overrides in a library that you link before/to the left of the actual Open MPI 
library on the command line.
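
As a rough sketch (the file and library names here are hypothetical, and this 
assumes the usual mpicc wrapper compiler):

-----
$ mpicc -c my_coll_overrides.c
$ ar rcs libmycoll.a my_coll_overrides.o
$ mpicc your_program.c -L. -lmycoll -o your_program
-----

Because the wrapper appends the Open MPI libraries at the end of the link 
line, your overrides appear first and therefore win.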

2. As Gilles mentioned, you can write your own Open MPI collectives component.  
This will have the back-end Open MPI infrastructure call your routine(s) when 
MPI_Bcast (and friends) are invoked by the application.

Option #2 is a bit more complex than option #1.  If you're just looking to test 
some algorithms and generally play around a little, option #1 is probably what 
you want to do.


> On Feb 4, 2016, at 5:42 AM, Gilles Gouaillardet 
> <[email protected]> wrote:
> 
> Hi,
> 
> it is difficult to answer such a generic request.
> 
> MPI symbols (MPI_Bcast, ...) are defined as weak symbols, so the simplest 
> option is to redefine them and implement them the way you like. You are always 
> able to invoke PMPI_Bcast if you want to invoke the Open MPI implementation.
> 
> A more OMPI-ish way is to create your own collective module.
> For example, the default module is in ompi/mca/coll/tuned.
> 
> Cheers,
> 
> Gilles
> 
> On Thursday, February 4, 2016, <[email protected]> wrote:
> Hello
> 
> Using a new network interface and its ad-hoc routing algorithms, I would like 
> to try my own custom implementation of some collective communication 
> patterns (MPI_Bcast, MPI_Alltoall, ...) without expanding those collective 
> communications as a series of point-to-point ones based on a given predefined 
> process topology.
> 
> In addition, my routing methods might require additional parameters beyond 
> the basic destination lists obtained from that topology and the kind of 
> collective communication considered.
> 
> How would I do that?
> 
> In which component should I modify something?
> 
> 
> Regards
> 


-- 
Jeff Squyres
[email protected]
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/
