On Wed, Sep 23, 2020 at 3:03 PM Junchao Zhang wrote:
> DMPlex has MPI_Type_create_struct(). But for matrices and vectors, we
> only use MPIU_SCALAR.
>
We create small datatypes for pairs of things and such. The Plex usage is
also for a very small type. Most of this is done to save multiple
reductions.
In petsc, we always pack non-contiguous data before calling MPI, since most
indices are irregular. Using MPI_Type_indexed() etc. probably does not
provide any benefit.
The only place I can think of that could benefit is regularly blocked or strided data.
The Ohio mvapich people are working on getting better performance out of MPI
datatypes. I notice that there are 5 million lines in the petsc source that
reference MPI datatypes. So, just as a wild guess: optimizations on MPI
datatypes seem to be beneficial mostly if you're sending blocks of at least
a certain size.