On Wed, Dec 1, 2021 at 9:32 AM Barry Smith wrote:
>
>   PETSc uses Elemental to perform such operations.
>
> PetscErrorCode MatMatMultNumeric_Elemental(Mat A,Mat B,Mat C)
> {
>   Mat_Elemental   *a = (Mat_Elemental*)A->data;
>   Mat_Elemental   *b = (Mat_Elemental*)B->data;
>   Mat_Elemental   *c = (Mat_Elemental*)C->data;
>   PetscElemScalar one = 1,zero = 0;
>
>   PetscFunctionBegin;
>   El::Gemm(El::NORMAL,El::NORMAL,one,*a->emat,*b->emat,zero,*c->emat);
>   C->assembled = PETSC_TRUE;
>   PetscFunctionReturn(0);
> }
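For anyone who wants to exercise the code path Barry quotes, below is a minimal sketch (an editor's addition, not part of the thread) that drives a parallel dense product through the MATELEMENTAL type. It assumes a PETSc build configured with Elemental (e.g. --download-elemental); the size n and the diagonal fill are placeholder choices.

#include <petscmat.h>

int main(int argc,char **argv)
{
  Mat            A,B,C;
  PetscInt       n = 128,i;
  PetscScalar    v;
  PetscMPIInt    rank;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc,&argv,NULL,NULL);if (ierr) return ierr;
  MPI_Comm_rank(PETSC_COMM_WORLD,&rank);
  ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
  ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,n,n);CHKERRQ(ierr);
  ierr = MatSetType(A,MATELEMENTAL);CHKERRQ(ierr); /* parallel dense, Elemental layout */
  ierr = MatSetUp(A);CHKERRQ(ierr);
  ierr = MatZeroEntries(A);CHKERRQ(ierr);
  if (!rank) { /* toy diagonal; ADD_VALUES entries are shipped to owners at assembly */
    for (i=0; i<n; i++) {
      v    = (PetscScalar)(i+1);
      ierr = MatSetValues(A,1,&i,1,&i,&v,ADD_VALUES);CHKERRQ(ierr);
    }
  }
  ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatDuplicate(A,MAT_COPY_VALUES,&B);CHKERRQ(ierr);
  /* C = A*B; the numeric stage is the MatMatMultNumeric_Elemental() quoted above */
  ierr = MatMatMult(A,B,MAT_INITIAL_MATRIX,PETSC_DEFAULT,&C);CHKERRQ(ierr);
  ierr = MatDestroy(&C);CHKERRQ(ierr);
  ierr = MatDestroy(&B);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}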
Hello,

I am interested in the communication scheme PETSc uses for the
multiplication of dense, parallel distributed matrices in MatMatMult. Is
it based on collective communication or on single calls to
MPI_Send/MPI_Recv, and is it done in a blocking or a non-blocking way? How
do you make sure th
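For readers skimming the archive, the distinction the question draws can be made concrete with a toy MPI fragment (an editor's sketch, deliberately not PETSc's actual scheme): a collective such as MPI_Bcast involves every rank in the communicator and completes before returning, while non-blocking point-to-point calls (MPI_Isend/MPI_Irecv) return immediately, so local computation can overlap the transfer until MPI_Waitall.

#include <mpi.h>

int main(int argc,char **argv)
{
  int         rank,size,next,prev;
  double      panel[4] = {0},recvbuf[4];
  MPI_Request reqs[2];

  MPI_Init(&argc,&argv);
  MPI_Comm_rank(MPI_COMM_WORLD,&rank);
  MPI_Comm_size(MPI_COMM_WORLD,&size);

  /* Style 1: collective, blocking; every rank holds the panel afterwards */
  MPI_Bcast(panel,4,MPI_DOUBLE,0,MPI_COMM_WORLD);

  /* Style 2: non-blocking point-to-point; a ring shift of the panel that
     can be overlapped with a local GEMM on blocks already in hand */
  next = (rank+1)%size; prev = (rank+size-1)%size;
  MPI_Isend(panel,4,MPI_DOUBLE,next,0,MPI_COMM_WORLD,&reqs[0]);
  MPI_Irecv(recvbuf,4,MPI_DOUBLE,prev,0,MPI_COMM_WORLD,&reqs[1]);
  /* ... local computation could run here ... */
  MPI_Waitall(2,reqs,MPI_STATUSES_IGNORE);

  MPI_Finalize();
  return 0;
}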
MatMatMult(X,Y,...,...,Z) where X is MPIDENSE and Y is MPIAIJ seems to
work, but when X is MPIAIJ and Y is MPIDENSE it doesn't and says it is
not supported. There seem to be all the permutations in the source:

MatMatMult_MPIAIJ_MPIDense
MatMatMult_MPIDense_MPIAIJ
MatMatMult_MPIAIJ_MPIAIJ
MatMatMult_MPI
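One way to pin this down is a tiny repro. The sketch below (an editor's addition, not from the thread) builds a toy MPIAIJ X and MPIDENSE Y and attempts the product in the order the message reports as unsupported; the sizes and values are placeholders.

#include <petscmat.h>

int main(int argc,char **argv)
{
  Mat            X,Y,Z;
  PetscInt       n = 64,i,rstart,rend;
  PetscScalar    v = 1.0;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc,&argv,NULL,NULL);if (ierr) return ierr;

  /* X: parallel sparse (MPIAIJ) with a toy diagonal */
  ierr = MatCreateAIJ(PETSC_COMM_WORLD,PETSC_DECIDE,PETSC_DECIDE,n,n,1,NULL,0,NULL,&X);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(X,&rstart,&rend);CHKERRQ(ierr);
  for (i=rstart; i<rend; i++) {
    ierr = MatSetValues(X,1,&i,1,&i,&v,INSERT_VALUES);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(X,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(X,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  /* Y: parallel dense (MPIDENSE), left as zeros; the type pairing is what matters */
  ierr = MatCreateDense(PETSC_COMM_WORLD,PETSC_DECIDE,PETSC_DECIDE,n,n,NULL,&Y);CHKERRQ(ierr);
  ierr = MatAssemblyBegin(Y,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(Y,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  /* MPIAIJ * MPIDENSE: the combination the message reports as failing */
  ierr = MatMatMult(X,Y,MAT_INITIAL_MATRIX,PETSC_DEFAULT,&Z);CHKERRQ(ierr);

  ierr = MatDestroy(&Z);CHKERRQ(ierr);
  ierr = MatDestroy(&Y);CHKERRQ(ierr);
  ierr = MatDestroy(&X);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}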