> On May 30, 2019, at 11:08 PM, Manav Bhatia wrote:
>
> I managed to get this to work.
>
> I defined a larger matrix with the dense blocks appended to the end of the
> matrix on the last processor. Currently, I am only running with one extra
> unknown, so this should not be a significant penalty for load balancing.
> Since the larger matrix has the same
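Setting up that layout might look roughly like the following sketch, assuming PETSc's MatCreate/MatSetSizes path; the sizes and names below are hypothetical, not taken from the actual code:

    #include <petsc/finclude/petscmat.h>
    program append_rows_sketch
      use petscmat
      implicit none
      Mat            A
      PetscInt       nloc, nextra, nrows
      PetscMPIInt    rank, nproc
      PetscErrorCode ierr

      call PetscInitialize(PETSC_NULL_CHARACTER, ierr)
      call MPI_Comm_rank(PETSC_COMM_WORLD, rank, ierr)
      call MPI_Comm_size(PETSC_COMM_WORLD, nproc, ierr)

      nloc   = 100     ! rows this rank owned in the original system (made up)
      nextra = 1       ! the single extra unknown mentioned above
      nrows  = nloc
      if (rank == nproc-1) nrows = nloc + nextra   ! append to the last rank

      call MatCreate(PETSC_COMM_WORLD, A, ierr)
      call MatSetSizes(A, nrows, nrows, PETSC_DETERMINE, PETSC_DETERMINE, ierr)
      call MatSetFromOptions(A, ierr)
      call MatSetUp(A, ierr)
      ! ... assemble the original sparse blocks and the appended dense rows ...
      call MatDestroy(A, ierr)
      call PetscFinalize(ierr)
    end program append_rows_sketch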
Hi Juanchao,
Thanks for the hints below; they will take some time to absorb, since the
vectors being moved around are actually partly PETSc vectors and partly
local process vectors.
Attached is the modified routine that now works (no leaking memory) with
OpenMPI.
-sanjay
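As background, that mixing of PETSc and local vectors typically looks like the following minimal Fortran sketch, which exposes the locally owned part of a PETSc Vec as a plain array before copying it into a local send buffer (the names are hypothetical and this is not code from psetb.F):

    #include <petsc/finclude/petscvec.h>
    subroutine pack_send_buffer(x, sbuf, n, ierr)
      use petscvec
      implicit none
      Vec            x
      PetscInt       n, i
      PetscScalar    sbuf(n)
      PetscErrorCode ierr
      PetscScalar, pointer :: xloc(:)

      ! expose the locally owned part of the PETSc vector as a plain array
      call VecGetArrayReadF90(x, xloc, ierr)
      do i = 1, n
         sbuf(i) = xloc(i)      ! copy into the local (non-PETSc) send buffer
      end do
      call VecRestoreArrayReadF90(x, xloc, ierr)
    end subroutine pack_send_buffer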
On 5/30/19
Hi, Sanjay,
Could you send your modified data exchange code (psetb.F) with MPI_Waitall?
See other inlined comments below. Thanks.
On Thu, May 30, 2019 at 1:49 PM Sanjay Govindjee via petsc-users
<petsc-users@mcs.anl.gov> wrote:
Lawrence,
Thanks for taking a look! This is what I had
On 30/05/19 5:26 PM, Stefano Zampini wrote:
Matt,
redistribution with overlapped mesh is fixed in master (probably also
in maint)
Even better. Thanks very much...
- Adrian
--
Dr Adrian Croucher
Senior Research Fellow
Department of Engineering Science
University of Auckland, New Zealand
1) Correct: Placing a WaitAll before the MPI_Barrier solves the problem
in our send-get routine for OpenMPI
2) Correct: The problem persists with KSPSolve
3) Correct: WaitAll did not fix the problem in our send-get routine or in
KSPSolve when using MPICH
Also correct. Commenting out the call to
Thanks for the update. So the current conclusions are that using the Waitall
in your code
1) solves the memory issue with OpenMPI in your code
2) does not solve the memory issue with PETSc KSPSolve
3) MPICH has memory issues both for your code and PETSc KSPSolve (despite the
Waitall)
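To make item 1) concrete, here is a minimal, self-contained sketch (not the actual psetb.F) of completing all nonblocking requests with MPI_Waitall before the barrier, so the MPI_Request objects are freed and cannot accumulate across solves:

    program waitall_sketch
      implicit none
      include 'mpif.h'
      integer :: ierr, rank, nproc, left, right, tag
      integer :: req(2), stats(MPI_STATUS_SIZE,2)
      double precision :: sbuf(10), rbuf(10)

      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
      call MPI_Comm_size(MPI_COMM_WORLD, nproc, ierr)
      tag   = 7
      right = mod(rank+1, nproc)
      left  = mod(rank-1+nproc, nproc)
      sbuf  = dble(rank)

      ! post the nonblocking exchange
      call MPI_Isend(sbuf, 10, MPI_DOUBLE_PRECISION, right, tag, &
                     MPI_COMM_WORLD, req(1), ierr)
      call MPI_Irecv(rbuf, 10, MPI_DOUBLE_PRECISION, left, tag, &
                     MPI_COMM_WORLD, req(2), ierr)

      ! complete BOTH requests before synchronizing, so the MPI
      ! implementation can release the request objects
      call MPI_Waitall(2, req, stats, ierr)
      call MPI_Barrier(MPI_COMM_WORLD, ierr)

      call MPI_Finalize(ierr)
    end program waitall_sketch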
Great observation, Lawrence.
https://www.slideshare.net/jsquyres/friends-dont-let-friends-leak-mpirequests
You can add the following option in addition to --download-mpich:
--download-mpich-configure-arguments="--enable-error-messages=all --enable-g"
then MPICH will report all MPI resources that
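For example, a typical PETSc configure invocation with that argument might look like the following sketch (adjust the rest of the options for your own installation):

    ./configure --download-mpich \
        --download-mpich-configure-arguments="--enable-error-messages=all --enable-g"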
You most definitely want to call KSPGetConvergedReason() after every solve:

      KSPConvergedReason reason
      ...
      call KSPSolve(ksp,b,x,ierr)
      call KSPGetConvergedReason(ksp,reason,ierr)
!     a negative reason means the solve diverged or otherwise failed
      if (reason .lt. 0) then
         print*,'KSPSolve() has not converged'
         return
      endif
Hi Sanjay,
> On 30 May 2019, at 08:58, Sanjay Govindjee via petsc-users
> wrote:
>
> The problem seems to persist but with a different signature. Graphs attached
> as before.
>
> Totals with MPICH (NB: single run)
>
> For the CG/Jacobi data_exchange_total = 41,385,984;
Yes
On Thu, 30 May 2019 at 14:36, Matthew Knepley wrote:
> On Thu, May 30, 2019 at 1:26 AM Stefano Zampini
> wrote:
>
>> Matt,
>>
>> redistribution with overlapped mesh is fixed in master (probably also in
>> maint)
>>
>
> Thanks! Do you just strip out the overlap cells from the partition
>
The problem seems to persist but with a different signature. Graphs
attached as before.
Totals with MPICH (NB: single run)
For the CG/Jacobi data_exchange_total = 41,385,984; kspsolve_total = 38,289,408
For the GMRES/BJACOBI data_exchange_total = 41,324,544; kspsolve_total =
Let us know how it goes with MPICH
> On May 30, 2019, at 2:01 AM, Sanjay Govindjee wrote:
>
> I put in calls to PetscMemoryGetCurrentUsage() around KSPSolve and my data
> exchange routine. The problem is clearly mostly in my data exchange routine.
> Attached are graphs of the change in
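That bracketing might look like this minimal sketch, reusing the ksp, b, x names from the snippet earlier in the thread (everything else is hypothetical):

      PetscLogDouble mem_before, mem_after
      PetscErrorCode ierr

      call PetscMemoryGetCurrentUsage(mem_before, ierr)
      call KSPSolve(ksp, b, x, ierr)
      call PetscMemoryGetCurrentUsage(mem_after, ierr)
!     report the growth in process memory (bytes) attributable to this solve
      print *, 'KSPSolve memory change:', mem_after - mem_before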