I'm looking at replacing/modernizing an old application-specific MPI layer
that was written around the turn of the century (that sounds odd). A major
part of it is a mesh subdomain "halo exchange" for domain decomposition.
When I dug into the implementation I was a little surprised to see that it
used point-to-point communication with MPI_Isend/MPI_Recv, issuing the
sends in rank-randomized order (for better performance?), rather than
MPI_Alltoallv, which I think would have been a more straightforward
alternative (but my MPI understanding is limited).  Some questions:
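
For context, my mental model of the existing exchange is roughly the
untested sketch below; the buffer, count, and neighbor-rank names are
placeholders for the application's own arrays, and the real code also
randomizes the order of the send loop:

  #include <mpi.h>
  #include <stdlib.h>

  /* Untested sketch of the existing point-to-point halo exchange
   * (placeholder names; the real code randomizes the send order). */
  void halo_exchange_p2p(MPI_Comm comm, int nnbrs, const int *nbr_rank,
                         double **send_buf, const int *send_count,
                         double **recv_buf, const int *recv_count)
  {
      MPI_Request *reqs = malloc(2 * nnbrs * sizeof *reqs);
      /* post one receive and one send per neighboring subdomain */
      for (int i = 0; i < nnbrs; i++)
          MPI_Irecv(recv_buf[i], recv_count[i], MPI_DOUBLE,
                    nbr_rank[i], 0, comm, &reqs[i]);
      for (int i = 0; i < nnbrs; i++)
          MPI_Isend(send_buf[i], send_count[i], MPI_DOUBLE,
                    nbr_rank[i], 0, comm, &reqs[nnbrs + i]);
      MPI_Waitall(2 * nnbrs, reqs, MPI_STATUSES_IGNORE);
      free(reqs);
  }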

1) It seems that defining a virtual topology and using
MPI_Neighbor_alltoallv is now a perfect match for this problem. Is there
any reason today not to prefer this over individual sends/receives? What I
have in mind is sketched below.
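
Here is an untested sketch of the replacement I'm imagining (placeholder
names again; the halo pattern is symmetric, so the same neighbor list is
used for sources and destinations, and the buffers are packed contiguously
with displacement arrays):

  #include <mpi.h>

  /* Untested sketch: build a distributed graph topology over the halo
   * neighbors and do the exchange with MPI_Neighbor_alltoallv. */
  void halo_exchange_neighbor(MPI_Comm comm, int nnbrs, const int *nbr_rank,
                              const double *send_buf, const int *send_count,
                              const int *send_displ,
                              double *recv_buf, const int *recv_count,
                              const int *recv_displ)
  {
      MPI_Comm graph_comm;
      MPI_Dist_graph_create_adjacent(comm,
                                     nnbrs, nbr_rank, MPI_UNWEIGHTED,
                                     nnbrs, nbr_rank, MPI_UNWEIGHTED,
                                     MPI_INFO_NULL,
                                     /* reorder = 0 for now; question 2
                                      * below is about what 1 implies */ 0,
                                     &graph_comm);
      MPI_Neighbor_alltoallv(send_buf, send_count, send_displ, MPI_DOUBLE,
                             recv_buf, recv_count, recv_displ, MPI_DOUBLE,
                             graph_comm);
      MPI_Comm_free(&graph_comm);
  }

(In real use I'd create the graph communicator once at setup and reuse it
for every exchange rather than rebuilding it each time as shown here.)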

2) I'm baffled about what I'm supposed to do with the possible reordering
of ranks that MPI_Dist_graph_create_adjacent does.  I understand the
benefit of matching the communication pattern between ranks to the
underlying hardware topology; however, the processes are already pinned (?)
to specific cores, so I'm not sure what the relevance of the assigned rank
is -- it's just a label, no?  Or am I expected to migrate my data; e.g., if
old-comm rank p becomes new-comm rank q, am I supposed to migrate the data
from old rank p to old rank q before using the new communicator?
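
To make the question concrete, the only way I can see to even notice the
reordering is something like the untested sketch below (comparing my rank
in the old and new communicators); what I don't understand is what I'm
then meant to do about it:

  #include <mpi.h>
  #include <stdio.h>

  /* Untested sketch: detect whether reorder=1 gave this process a new
   * rank label in the graph communicator. */
  void check_reorder(MPI_Comm old_comm, MPI_Comm graph_comm)
  {
      int old_rank, new_rank;
      MPI_Comm_rank(old_comm, &old_rank);
      MPI_Comm_rank(graph_comm, &new_rank);
      if (old_rank != new_rank)
          printf("was old-comm rank %d, now graph-comm rank %d -- "
                 "do I redistribute my subdomain?\n", old_rank, new_rank);
  }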

Thanks!