I updated the branch and made a PR. I tried MPI_SUM on MPI_CHAR. We do not
have UnpackAdd for this type (correctly so), but unfortunately MPICH's
MPI_Reduce_local did not report an error (it should have), so we did not
generate an error either.
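
A minimal sketch of the kind of check involved (buffer contents are arbitrary;
MPI_ERRORS_RETURN is set so a compliant MPI returns an error code instead of
aborting):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  char in[4]  = {1, 2, 3, 4};
  char out[4] = {0, 0, 0, 0};
  int  err;

  MPI_Init(&argc, &argv);
  /* Ask for return codes rather than aborting; errors from calls not tied
     to a communicator are raised through MPI_COMM_SELF in MPI-3. */
  MPI_Comm_set_errhandler(MPI_COMM_SELF, MPI_ERRORS_RETURN);
  MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
  /* MPI_SUM is not a valid predefined op for MPI_CHAR, so this should fail. */
  err = MPI_Reduce_local(in, out, 4, MPI_CHAR, MPI_SUM);
  if (err != MPI_SUCCESS) printf("MPI_Reduce_local reported an error (expected)\n");
  else printf("MPI_Reduce_local returned MPI_SUCCESS (no error reported)\n");
  MPI_Finalize();
  return 0;
}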

--Junchao Zhang


On Thu, Apr 4, 2019 at 10:37 AM Jed Brown <j...@jedbrown.org> wrote:
Fande Kong via petsc-users <petsc-users@mcs.anl.gov> writes:

> Hi Jed,
>
> One more question. Is it fine to use the same SF to exchange two groups of
> data at the same time? What is the better way to do this?

This should work due to the non-overtaking property defined by MPI.

> Fande Kong,
>
>  ierr = PetscSFReduceBegin(ptap->sf,MPIU_INT,rmtspace,space,MPIU_REPLACE);CHKERRQ(ierr);
>  ierr = PetscSFReduceBegin(ptap->sf,MPI_CHAR,rmtspace2,space2,MPIU_REPLACE);CHKERRQ(ierr);
>  Doing some calculations
>  ierr = PetscSFReduceEnd(ptap->sf,MPIU_INT,rmtspace,space,MPIU_REPLACE);CHKERRQ(ierr);
>  ierr = PetscSFReduceEnd(ptap->sf,MPI_CHAR,rmtspace2,space2,MPIU_REPLACE);CHKERRQ(ierr);
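
For reference, the quoted pattern as a self-contained sketch (the function
wrapper and the buffer types are assumptions; the Begin/End calls themselves
are taken from the quote):

#include <petscsf.h>

/* Sketch: two reductions in flight on the same SF, using distinct datatypes
   and buffers; per the note above, MPI's non-overtaking property keeps the
   two exchanges correctly matched. */
static PetscErrorCode ReduceTwoGroups(PetscSF sf,PetscInt *rmtspace,PetscInt *space,char *rmtspace2,char *space2)
{
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = PetscSFReduceBegin(sf,MPIU_INT,rmtspace,space,MPIU_REPLACE);CHKERRQ(ierr);
  ierr = PetscSFReduceBegin(sf,MPI_CHAR,rmtspace2,space2,MPIU_REPLACE);CHKERRQ(ierr);
  /* ... overlap local computation here ... */
  ierr = PetscSFReduceEnd(sf,MPIU_INT,rmtspace,space,MPIU_REPLACE);CHKERRQ(ierr);
  ierr = PetscSFReduceEnd(sf,MPI_CHAR,rmtspace2,space2,MPIU_REPLACE);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}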
