Khalid,

The decision is rechecked every time we create a new communicator. So you could 
force the algorithm to whatever you think is best (using the environment 
variables you mentioned), then create a communicator, use it, and free it once 
you’re done.
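
Roughly, and only as an untested sketch of that pattern: the OMPI_MCA_* names 
below are the environment-variable forms of the parameters you mentioned, the 
algorithm id is a placeholder (check "ompi_info --param coll tuned" for the 
mapping), and whether the tuned component actually re-reads the environment 
after MPI_Init is exactly the part you would have to verify.

#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int buf[4] = {0, 1, 2, 3};

    /* force the broadcast algorithm via the environment-variable forms
       of the coll_tuned MCA parameters; "6" is only a placeholder id */
    setenv("OMPI_MCA_coll_tuned_use_dynamic_rules", "1", 1);
    setenv("OMPI_MCA_coll_tuned_bcast_algorithm", "6", 1);

    /* the algorithm decision is (re)made when a communicator is created */
    MPI_Comm tuned_comm;
    MPI_Comm_dup(MPI_COMM_WORLD, &tuned_comm);

    MPI_Bcast(buf, 4, MPI_INT, 0, tuned_comm);

    /* free the communicator once you're done with it */
    MPI_Comm_free(&tuned_comm);

    MPI_Finalize();
    return 0;
}

The same parameters can of course be passed on the command line 
(mpirun --mca coll_tuned_use_dynamic_rules 1 --mca coll_tuned_bcast_algorithm <id>), 
but then they apply to every communicator in the job.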

I have no idea what you’re trying to achieve, but be aware that creating a 
communicator carries a non-trivial cost, so the overhead of this approach might 
outweigh its benefits.

  George.

> On Mar 10, 2015, at 10:31, Gilles Gouaillardet 
> <gilles.gouaillar...@gmail.com> wrote:
> 
> Khalid,
> 
> I am not aware of such a mechanism.
> 
> /* there might be a way to use the MPI_T_* mechanisms to force the algorithm,
> and I will let other folks comment on that */
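> 
> something along these lines might be worth a try (untested; the control
> variable names are assumed to match the MCA parameter names, and whether
> they are still writable after MPI_Init in Open MPI is exactly the open
> question, so check every return code):
> 
>     int provided, idx, count, one = 1, algo = 6;   /* 6: placeholder id */
>     MPI_T_cvar_handle handle;
> 
>     MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
> 
>     /* enable dynamic rules */
>     MPI_T_cvar_get_index("coll_tuned_use_dynamic_rules", &idx);
>     MPI_T_cvar_handle_alloc(idx, NULL, &handle, &count);
>     MPI_T_cvar_write(handle, &one);
>     MPI_T_cvar_handle_free(&handle);
> 
>     /* force the bcast algorithm */
>     MPI_T_cvar_get_index("coll_tuned_bcast_algorithm", &idx);
>     MPI_T_cvar_handle_alloc(idx, NULL, &handle, &count);
>     MPI_T_cvar_write(handle, &algo);
>     MPI_T_cvar_handle_free(&handle);
> 
>     /* the new value would only affect communicators created afterwards */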
> 
> you definitely cannot invoke ompi_coll_tuned_bcast_intra_binomial directly
> (abstraction violation, non-portable, and you would be missing some of the parameters)
> 
> out of curiosity, what do you have in mind for (some_condition)?
> /* since it seems implied that (some_condition) is independent of the 
> communicator size and the message size */
> 
> Cheers,
> 
> Gilles
> 
> On Tue, Mar 10, 2015 at 10:04 PM, Khalid Hasanov <xali...@gmail.com> wrote:
> Hello,
> 
> I would like to know if Open MPI provides some kind of mechanism to select
> the algorithm for a collective such as MPI broadcast at run time, depending on
> some logic. For example, I would like to use something like this:
> 
> if (some_condition)  ompi_binomial_broadcast(...);
> else   ompi_pipeline_broadcast(...);
> 
> I know it is possible to use a fixed algorithm via coll_tuned_use_dynamic_rules, 
> or to define custom selection rules using coll_tuned_dynamic_rules_filename. But 
> I think that is not suitable in this situation, as the dynamic rules are based 
> mainly on the message size, segment size and communicator size.
> 
> Another option could be using Open MPI internal APIs like
> 
> ompi_coll_tuned_bcast_intra_binomial(buf, count, dtype, root, comm, module,
>                                      segsize);
> 
> But that depends heavily on Open MPI internals, as it uses mca_coll_base_module_t.
> 
> Is there any better option (except using my own implementation of the 
> collectives)?
> 
> Any suggestion is highly appreciated.
> 
> Thanks
> 
> Regards,
> Khalid
> 