> On Aug 17, 2015, at 9:53 PM, Fande Kong <[email protected]> wrote:
> 
> Thanks, Barry, Satish,
> 
> But is it possible to unify the use of MPI_SUM and MPIU_SUM? For example,
> a PETSc function could simply fall back to the regular MPI_Reduce (or other
> function) when using PetscInt. In other words, we need a wrapper. I always use
> MPIU_INT in an MPI function when using PetscInt, so it would be very natural
> to use MPIU_SUM, MPIU_MAX, and so on, given that we are already using MPIU_INT.

  We could add code to the routine that gets called when one uses MPIU_SUM,
which is PetscSum_Local() and is defined in pinit.c, to handle all possible data
types; then you could always use MPIU_SUM. The reason we don't is that a
user-provided reduction such as PetscSum_Local() will ALWAYS be less efficient
than the MPI built-in reduction operations. For integers, which MPI can always
handle, we prefer the fastest possible approach, which is the built-in summing
operation. Granted, the time difference between the user-provided operation and
the built-in one is likely too small to measure, but for me it is easy enough
to remember that MPIU_SUM is only needed for floating point numbers, not
integers.
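
  For concreteness, a minimal sketch of that convention (this is an assumed
example, not taken from the PETSc sources, and the variable names are made up):
the built-in MPI_SUM with MPIU_INT for PetscInt, and PETSc's MPIU_SUM with
MPIU_REAL for PetscReal, which may be __float128.

#include <petscsys.h>

/* Sketch: the same global sum done twice, once for PetscInt with the
   built-in MPI_SUM and once for PetscReal with PETSc's MPIU_SUM.
   Error checking is omitted for brevity. */
int main(int argc, char **argv)
{
  PetscInt    ilocal, iglobal;
  PetscReal   rlocal, rglobal;
  PetscMPIInt rank;

  PetscInitialize(&argc, &argv, NULL, NULL);
  MPI_Comm_rank(PETSC_COMM_WORLD, &rank);

  ilocal = (PetscInt)rank + 1;
  rlocal = 0.5*(PetscReal)(rank + 1);

  /* PetscInt: MPI always provides sums for 32- and 64-bit integers, so the
     built-in MPI_SUM is used with the matching MPIU_INT datatype */
  MPI_Allreduce(&ilocal, &iglobal, 1, MPIU_INT, MPI_SUM, PETSC_COMM_WORLD);

  /* PetscReal: the datatype may be __float128 (and PetscScalar may be
     complex), which the built-in MPI ops may not handle, so PETSc's
     MPIU_SUM operation is used instead */
  MPI_Allreduce(&rlocal, &rglobal, 1, MPIU_REAL, MPIU_SUM, PETSC_COMM_WORLD);

  PetscPrintf(PETSC_COMM_WORLD, "integer sum %D  real sum %g\n", iglobal, (double)rglobal);
  PetscFinalize();
  return 0;
}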


  Barry

> 
> Thanks,
> 
> Fande Kong,
> 
> On Mon, Aug 17, 2015 at 6:18 PM, Barry Smith <[email protected]> wrote:
> 
>   It is crucial. MPI also doesn't provide sums for __float128 precision. But
> MPI does always provide sums for 32- and 64-bit integers, so there is no need
> for MPIU_SUM for PETSC_INT.
> 
> 
> > On Aug 17, 2015, at 5:49 PM, Satish Balay <[email protected]> wrote:
> >
> > I think some MPI implementations didn't provide some of the ops on the
> > MPI_COMPLEX datatype.
> >
> > So PETSc provides these ops for PetscReal, i.e., MPIU_SUM, MPIU_MAX, MPIU_MIN.
> >
> > Satish
> >
> > On Mon, 17 Aug 2015, Fande Kong wrote:
> >
> >> Hi all,
> >>
> >> I was wondering why, in PETSc, MPI_Reduce with PetscInt uses MPI_SUM
> >> while MPI_Reduce with PetscReal needs MPIU_SUM. Do we have any special
> >> reason to distinguish them?
> >>
> >> Thanks,
> >>
> >> Fande Kong,
> >>
> >
> 
> 
