If you can avoid them, it is better to avoid them. However, it is always better to use MPI_Alltoall than to code your own all-to-all with point-to-point messages, and some algorithms *need* an all-to-all communication. What you should understand by "avoid all-to-all" is not "avoid MPI_Alltoall", but "choose a mathematical algorithm that does not need an all-to-all".
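To illustrate why the library call usually beats a hand-rolled loop of point-to-point sends: an MPI implementation can schedule the exchanges so that every round is contention-free. A sketch, in plain Python rather than real MPI, of one such schedule (pairwise exchange via XOR partners, which assumes a power-of-two number of ranks — the function name and structure here are illustrative, not Open MPI's actual internals):

```python
# Toy simulation of a pairwise-exchange all-to-all schedule (no real MPI).
# buffers[i][j] holds the data rank i wants to send to rank j.
def pairwise_alltoall(buffers):
    """Return recv where recv[j][i] is the data rank j received from rank i."""
    p = len(buffers)
    recv = [[None] * p for _ in range(p)]
    for rank in range(p):
        recv[rank][rank] = buffers[rank][rank]  # local copy, no message needed
    # p-1 rounds; in round `step` every rank exchanges with exactly one
    # partner (rank XOR step), so no rank is oversubscribed in any round.
    for step in range(1, p):
        for rank in range(p):
            peer = rank ^ step  # requires p to be a power of two
            recv[peer][rank] = buffers[rank][peer]
    return recv

p = 8
bufs = [[f"{i}->{j}" for j in range(p)] for i in range(p)]
out = pairwise_alltoall(bufs)
# every rank ends up with the piece addressed to it from every other rank
assert all(out[j][i] == f"{i}->{j}" for i in range(p) for j in range(p))
```

The total message count is still p*(p-1), but organizing it into p-1 one-partner-per-rank rounds is the kind of thing a tuned MPI_Alltoall does for you and a naive send/recv loop typically does not.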

The algorithmic complexity of AllReduce is the same as that of AlltoAll.
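For context on how an allreduce is typically structured: one common algorithm is recursive doubling, which completes in log2(p) exchange steps for a power-of-two rank count. A toy simulation (plain Python, not real MPI; the function name is mine, and real implementations choose among several algorithms depending on message size and rank count):

```python
# Toy simulation of recursive-doubling allreduce (sum reduction).
# Assumes the number of "ranks" is a power of two.
def recursive_doubling_allreduce(values):
    """After log2(p) steps, every rank holds the sum of all values."""
    p = len(values)
    vals = list(values)
    steps = 0
    dist = 1
    while dist < p:
        # In each step, rank exchanges its partial sum with rank XOR dist,
        # and both combine the two partials.
        new_vals = [vals[rank] + vals[rank ^ dist] for rank in range(p)]
        vals = new_vals
        dist *= 2
        steps += 1
    return vals, steps

vals, steps = recursive_doubling_allreduce([1, 2, 3, 4, 5, 6, 7, 8])
assert vals == [36] * 8  # every rank holds the full sum
assert steps == 3        # log2(8) exchange steps
```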

Aurelien

Le 12 mars 08 à 17:01, Brock Palen a écrit :

I have always been told that calls like MPI_Barrier(), MPI_Allreduce(),
and MPI_Alltoall() should be avoided.

I understand MPI_Alltoall(), as it performs n*(n-1) sends and thus grows
very quickly.  MPI_Barrier() is very latency sensitive and is generally
not needed in most cases where I have seen it used.
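(The quadratic growth mentioned above is easy to see numerically; a trivial sketch, with a hypothetical helper name:)

```python
def p2p_messages(n):
    # naive all-to-all: every one of n ranks sends to every other rank
    return n * (n - 1)

assert p2p_messages(16) == 240
assert p2p_messages(32) == 992
# doubling the rank count roughly quadruples the total message count
assert p2p_messages(32) / p2p_messages(16) > 4
```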

But why MPI_Allreduce()?
What other functions should generally be avoided?

Sorry this is kinda off topic for the list :-)

Brock Palen
Center for Advanced Computing
bro...@umich.edu
(734)936-1985


_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users
