Thank you everyone for your valuable materials and comments.
Currently, I can use a maximum of 8 nodes on a computer system with a 10 Gb/s
InfiniBand network.
I am applying to use all the nodes in this computer system (about 300
nodes), which will take some time.
I also hope that 300 nodes are enough to show a clear benefit from
overlapping communication with computation.
Patrick Sanan wrote:
>
>
> On 26.01.2021 at 12:01, Matthew Knepley wrote:
>
> On Mon, Jan 25, 2021 at 11:31 PM Viet H.Q.H. wrote:
>
>> Dear Patrick Sanan,
>>
>> Thank you very much for your answer, especially for your code.
>> I was able to compile and run your code.
> [...] a concrete idea of what I mean. Note that this was used early on in
> our own exploration of these topics, so I'm only offering it to give an
> idea, not as a meaningful benchmark in its own right.
>
> On 25.01.2021 at 09:17, Viet H.Q.H. wrote:
>
>
> Dear Barry,
>
> Thank you very much for your answer.
> [...]
> I do not know what partial support means, but you can try setting the
> variables and see if that helps.
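> (For reference, my assumption is that "the variables" here are MPICH's
> asynchronous-progress settings, MPICH_ASYNC_PROGRESS=1 together with
> MPICH_MAX_THREAD_SAFETY=multiple; the earlier message that names them is
> cut off, so they may equally be another MPI implementation's equivalents.)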
>
>
>
> On Jan 22, 2021, at 11:20 AM, Viet H.Q.H. wrote:
>
>
> Dear Victor and Barry,
>
> Thank you so much for your answers.
>
> I fixed the code with the PETSC_HAVE_MPI_IALLREDUCE guard, so it falls back
> to a blocking reduction when the MPI implementation has no MPI_Iallreduce:
>
> /* The name and signature below are reconstructed from context; the
>    original message is truncated here and only the #else branch survived. */
> PetscErrorCode MPIPetsc_Iallreduce(void *sendbuf,void *recvbuf,PetscMPIInt count,MPI_Datatype datatype,MPI_Op op,MPI_Comm comm,MPI_Request *request)
> {
>   PetscErrorCode ierr;
>   PetscFunctionBegin;
> #if defined(PETSC_HAVE_MPI_IALLREDUCE)
>   ierr = MPI_Iallreduce(sendbuf,recvbuf,count,datatype,op,comm,request);CHKERRQ(ierr);
> #else
>   ierr = MPIU_Allreduce(sendbuf,recvbuf,count,datatype,op,comm);CHKERRQ(ierr);
>   *request = MPI_REQUEST_NULL;
> #endif
>   PetscFunctionReturn(0);
> }
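>
> A minimal sketch of how I then call it (the buffer names and the
> communicator are placeholders of mine, not from the original code):
>
> MPI_Request req;
> PetscScalar local = 1.0,global = 0.0;
> ierr = MPIPetsc_Iallreduce(&local,&global,1,MPIU_SCALAR,MPIU_SUM,PETSC_COMM_WORLD,&req);CHKERRQ(ierr);
> /* ... communication-free work placed here overlaps with the reduction ... */
> ierr = MPI_Wait(&req,MPI_STATUS_IGNORE);CHKERRQ(ierr);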
>
>
> So first check if $PETSC_DIR/include/petscconf.h has
>
> PETSC_HAVE_MPI_IALLREDUCE
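>
> A quick compile-time check is also possible (my own sketch, not from the
> original mail):
>
> #include <petscconf.h>
> #if !defined(PETSC_HAVE_MPI_IALLREDUCE)
> #error "this PETSc build's MPI does not provide MPI_Iallreduce"
> #endif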
Hello PETSc developers and supporters,
I would like to confirm the performance of asynchronous computation, in
which an inner-product computation overlaps a matrix-vector multiplication,
using the code below.
PetscLogDouble tt1,tt2;
KSP ksp;
//ierr = VecSet(c,one);
ierr = VecSet(c,one);CHKERRQ(ierr);
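
Since the snippet above is cut off, here is a self-contained sketch of the
overlap pattern being timed, using PETSc's split-phase reductions (the
objects u, b, c, Ac, and A are assumed to have been created and filled
elsewhere; the variable names are mine):

PetscLogDouble tt1,tt2;
PetscScalar    dot;
ierr = PetscTime(&tt1);CHKERRQ(ierr);
/* post the local part of the inner product */
ierr = VecDotBegin(u,b,&dot);CHKERRQ(ierr);
/* start the non-blocking reduction now instead of waiting for VecDotEnd */
ierr = PetscCommSplitReductionBegin(PetscObjectComm((PetscObject)u));CHKERRQ(ierr);
/* independent work intended to hide the reduction */
ierr = MatMult(A,c,Ac);CHKERRQ(ierr);
/* finish the inner product */
ierr = VecDotEnd(u,b,&dot);CHKERRQ(ierr);
ierr = PetscTime(&tt2);CHKERRQ(ierr);
ierr = PetscPrintf(PETSC_COMM_WORLD,"overlapped time: %g s\n",(double)(tt2-tt1));CHKERRQ(ierr);

Comparing tt2-tt1 with a blocking VecDot plus the same MatMult timed
separately should show whether the reduction is actually being overlapped.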