Barry Smith <[email protected]> writes:
>> Meh,
>> 
>>  VecNormBegin(X,&request1x);
>>  VecNormBegin(Y,&request1y);
>>  VecNormEnd(X,request1x,&norm);
>>  VecAXPY(Y,-1,X);
>>  VecNormBegin(Y,&request2y);
>>  VecNormEnd(Y,request2y,&norm2y);
>>  VecNormEnd(Y,request1y,&norm1y);
>
>    I don't understand what you are getting at here. You don't seem to 
> understand my use case, where multiple inner products/norms share the same 
> MPI communication (which was the original motivation for VecNormBegin/End); 
> see for example KSPSolve_CR
>
>     Are you somehow (incompetently) saying that the first two VecNorms
>     somehow share the same parallel communication (even though they
>     have different request values) while the third Norm has its own
>     MPI communication?

Yeah, same as now.  Every time you call *Begin() using a communicator,
you get a new request for something in that "batch".  When the batch is
closed, either by an *End() or by PetscCommSplitReductionBegin(), any
future *Begin() calls go into a new batch.  The old batch wouldn't be
collected until all of its requests have been *End()ed.
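To make the lifetime rules concrete, here is a minimal sketch of that batching model.  All names (Batch, Request, begin_norm, end_norm) are illustrative, not the PETSc API; the point is only the semantics: Begins pile into the open batch, the first End closes it, and the batch is collected once every request in it has been Ended.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical sketch of the batching model described above.  Not PETSc
 * code: begin_norm()/end_norm() stand in for VecNormBegin/End on a single
 * communicator. */

typedef struct Batch {
  int id;          /* which communication round this batch belongs to */
  int open;        /* still accepting new requests? */
  int outstanding; /* requests not yet End()ed; batch freed when this hits 0 */
} Batch;

typedef struct {
  Batch *batch;    /* the batch this request was enqueued in */
} Request;

static Batch *current = NULL;
static int    next_id = 0;

static Request begin_norm(void) {
  if (!current || !current->open) {  /* previous batch was closed: open a new one */
    current = malloc(sizeof(Batch));
    current->id          = next_id++;
    current->open        = 1;
    current->outstanding = 0;
  }
  current->outstanding++;
  return (Request){current};
}

static int end_norm(Request r) {
  Batch *b  = r.batch;
  int    id = b->id;
  b->open = 0;                       /* any future Begin() starts a new batch */
  if (--b->outstanding == 0) {       /* old batch collected once all Ends arrive */
    if (current == b) current = NULL;
    free(b);
  }
  return id;
}
```

Run against the snippet at the top of the thread, request1x and request1y land in one batch (one reduction), while request2y, issued after the first End, lands in a fresh batch with its own communication.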

>     Please explain how this works? Because an End was done somehow the
>     next Begin knows to create an entirely new reduction object that
>     it tracks (while the old reduction is kept around (where?) to
>     complete all the first phase requests?)

Yeah, I don't think it's hard to implement, but it requires some
refactoring of PetscSplitReduction.
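As for "where" the old reduction is kept: one plausible refactoring, sketched below under assumed names (SplitReduction, CommAttr, attr_begin/close/end are illustrative, not PETSc API), is for the communicator attribute to hold the currently open reduction plus a list of closed-but-incomplete ones, each freed only when its last request is Ended.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical sketch: the communicator attribute stores the open reduction
 * and a list of closed reductions still awaiting End()s, instead of the
 * single reduction object it holds today. */

typedef struct SplitReduction {
  struct SplitReduction *next; /* linkage on the closed-but-incomplete list */
  int pending;                 /* requests still awaiting End() */
} SplitReduction;

typedef struct {
  SplitReduction *active;      /* open batch accepting new Begin()s */
  SplitReduction *closed;      /* closed batches kept until fully End()ed */
} CommAttr;

static SplitReduction *attr_begin(CommAttr *a) {
  if (!a->active) a->active = calloc(1, sizeof(SplitReduction));
  a->active->pending++;
  return a->active;
}

/* The "batch close" performed by the first End() or by
 * PetscCommSplitReductionBegin(): move active onto the closed list. */
static void attr_close(CommAttr *a) {
  if (!a->active) return;
  a->active->next = a->closed;
  a->closed       = a->active;
  a->active       = NULL;
}

static void attr_end(CommAttr *a, SplitReduction *r) {
  if (a->active == r) attr_close(a); /* first End on the open batch closes it */
  if (--r->pending == 0) {           /* unlink and collect once all Ends arrive */
    SplitReduction **p = &a->closed;
    while (*p != r) p = &(*p)->next;
    *p = r->next;
    free(r);
  }
}
```

The closed list is what lets a second round of Begin/End complete and be collected while an earlier reduction still has outstanding requests, which is exactly the interleaving in the snippet above.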

>    I am ok with this model if it can be implemented.
