Let me clarify one thing,

When I said "there are q-1 groups that can communicate in parallel at the
same time" I meant that this is possible at any particular time. So at the
beginning we have q-1 groups that could communicate in parallel, then
another set of q-1 groups, and so on, until all groups are exhausted. My hope
is that the speedup is such that the total number of broadcasts, i.e.
[q^(k-1)]*(q-1)*k,
can be executed in the time of only [q^(k-1)]*k broadcasts.
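
To make the intent concrete, here is a minimal sketch of what I have in mind
for a single round (the names Round, run_round and group_roots are made up
for illustration; the communicators are the per-group ones I already create,
and equal block sizes are assumed just to keep the sketch short):

    #include <mpi.h>
    #include <vector>

    // Hypothetical bookkeeping standing in for my actual grouping scheme:
    // for every round, each rank knows which of the pre-built group
    // communicators it belongs to (MPI_COMM_NULL if it is idle that round).
    struct Round {
        MPI_Comm group_comm;            // this rank's group in this round
        std::vector<int> group_roots;   // ranks (within group_comm) that broadcast
    };

    // One shuffling round: every group runs its k broadcasts on its own
    // sub-communicator, so the q-1 disjoint groups of the round do not
    // have to wait for one another.
    void run_round(const Round& r, std::vector<char>& my_block)
    {
        if (r.group_comm == MPI_COMM_NULL)   // this rank sits out this round
            return;

        int my_group_rank;
        MPI_Comm_rank(r.group_comm, &my_group_rank);

        // Buffer for the other members' blocks (equal sizes assumed here).
        std::vector<char> recv(my_block.size());

        for (int root : r.group_roots) {
            char* buf = (root == my_group_rank) ? my_block.data() : recv.data();
            MPI_Bcast(buf, static_cast<int>(my_block.size()), MPI_CHAR,
                      root, r.group_comm);
            // ... a receiver would consume recv here ...
        }
    }

Since the q-1 groups of a round share no nodes, none of their broadcasts have
to wait for another group's broadcast, and that is where I hope the factor of
q-1 comes from.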

Cheers,
Kostas.

On Tue, Oct 31, 2017 at 10:42 PM, Konstantinos Konstantinidis <
kostas1...@gmail.com> wrote:

> Assume that we have K=q*k nodes (slaves) where q,k are positive integers
> >= 2.
>
> Based on the scheme that I am currently using I create [q^(k-1)]*(q-1)
> groups (along with their communicators). Each group consists of k nodes and
> within each group exactly k broadcasts take place (each node broadcasts
> something to the rest of them). So in total [q^(k-1)]*(q-1)*k MPI
> broadcasts take place. Let me skip the details of the above scheme.
>
> Now, theoretically, I figured out that there are q-1 groups that can
> communicate in parallel at the same time, i.e. groups that have no common
> nodes, and I would like to use that to speed up the shuffling. I have seen here
> https://stackoverflow.com/questions/11372012/mpi-several-broadcast-at-the-same-time
> that this is possible in MPI.
>
> In my case it's more complicated since q,k are parameters of the problem
> and change between different experiments. Following the 2nd method that is
> proposed there, and assuming that we have only 3 groups within which some
> communication takes place, one can simply do:
>
> if (my rank belongs to group 1) {
>     comm1.Bcast(..., ..., ..., rootId);
> } else if (my rank belongs to group 2) {
>     comm2.Bcast(..., ..., ..., rootId);
> } else if (my rank belongs to group 3) {
>     comm3.Bcast(..., ..., ..., rootId);
> }
>
> where comm1, comm2, comm3 are the corresponding sub-communicators that
> contain only the members of each group.
>
> But how can I generalize the above idea to an arbitrary number of groups, or
> should I perhaps do something else?
>
> The code is in C++ and the MPI installation is described in the attached file.
>
> Regards,
> Kostas
>
>
