Pierre,
This is an external package to PETSc. Should it be listed at
https://www.mcs.anl.gov/petsc/miscellaneous/external.html?
Hong
________________________________
From: Pierre Jolivet <pie...@joliv.et>
Sent: Thursday, March 18, 2021 1:16 AM
To: Zhang, Hong <hzh...@mcs.anl.gov>
Cc: For users of the development version of PETSc <petsc-dev@mcs.anl.gov>
Subject: Re: [petsc-dev] MatTransposeMatMult() bug

https://www.sciencedirect.com/science/article/abs/pii/S0898122121000055
https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/KSP/KSPHPDDM.html
https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCHPDDM.html
I need to update the PETSc user manual though, specifically with respect to 
systems with multiple right-hand sides.
But don’t worry, Stefano has sorted the bug out, which was due to a faulty 
MatSetFromOptions() in MatMAIJ, used by MatTransposeMatMult().
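For context, MatMAIJ is the wrapper type that applies one AIJ matrix to several interleaved vector components at once, which is how MatTransposeMatMult() handles multiple columns; a minimal sketch of its creation (sizes and dof made up for illustration):

  Mat            A, maij;
  PetscInt       dof = 2; /* number of interleaved components, illustrative */
  PetscErrorCode ierr;
  /* ... create and assemble an AIJ matrix A ... */
  ierr = MatCreateMAIJ(A, dof, &maij);CHKERRQ(ierr); /* maij applies A to each of the dof components */
  ierr = MatDestroy(&maij);CHKERRQ(ierr);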

Thanks,
Pierre

On 17 Mar 2021, at 11:21 PM, Zhang, Hong <hzh...@mcs.anl.gov> wrote:

What is hpddm? I do not see its documentation.
Hong

________________________________
From: Matthew Knepley <knep...@gmail.com>
Sent: Wednesday, March 17, 2021 2:49 PM
To: Zhang, Hong <hzh...@mcs.anl.gov>
Cc: Pierre Jolivet <pie...@joliv.et>; For users of the development version of
PETSc <petsc-dev@mcs.anl.gov>
Subject: Re: [petsc-dev] MatTransposeMatMult() bug

On Wed, Mar 17, 2021 at 3:27 PM Zhang, Hong via petsc-dev
<petsc-dev@mcs.anl.gov> wrote:
Pierre,
Do you mean a possible bug in C = AtB, i.e., MatTransposeMatMult()?
Can you provide a stand-alone test without hpddm that reproduces this error?
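Schematically, such a stand-alone test would exercise nothing more than the following (a minimal sketch with made-up sizes and an empty A, not the actual failing configuration):

  #include <petscmat.h>

  int main(int argc, char **argv)
  {
    Mat            A, B, C;
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
    /* A: 10x10 AIJ matrix, left empty here; a real test would fill it */
    ierr = MatCreateAIJ(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, 10, 10, 1, NULL, 1, NULL, &A);CHKERRQ(ierr);
    ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    /* B: dense 10x2, i.e., two right-hand sides */
    ierr = MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, 10, 2, NULL, &B);CHKERRQ(ierr);
    ierr = MatAssemblyBegin(B, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(B, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    /* C = A^T * B, the product in question */
    ierr = MatTransposeMatMult(A, B, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &C);CHKERRQ(ierr);
    ierr = MatView(C, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
    ierr = MatDestroy(&C);CHKERRQ(ierr);
    ierr = MatDestroy(&B);CHKERRQ(ierr);
    ierr = MatDestroy(&A);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return ierr;
  }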

Hong, you should be able to just configure with --download-hpddm and then run 
that ex76 test.

  Thanks,

     Matt

Hong
________________________________
From: petsc-dev <petsc-dev-boun...@mcs.anl.gov> on behalf of Pierre Jolivet
<pie...@joliv.et>
Sent: Wednesday, March 17, 2021 4:31 AM
To: For users of the development version of PETSc <petsc-dev@mcs.anl.gov>
Subject: [petsc-dev] MatTransposeMatMult() bug

Hello,
While trying out Stefano’s PCApplyMat_MG() code (*), we stumbled upon weird
numerical errors when reusing a Mat for both MatProduct_AB and MatProduct_AtB
(a sketch of the pattern follows below).
This reminded me of a long-standing issue with MatTransposeMatMult(), see
https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/impls/hpddm/hpddm.cxx.html#line608,
which I had never looked into.
I’ve now been trying to figure it out, because it has side effects in multiple
places (PCMG and PCHPDDM at least) and could thus impact user code as well.
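Concretely, the usage pattern is something like this (a sketch with placeholder matrices, not the actual PCHPDDM code; A is AIJ, B is dense):

  Mat            A, B, C, D;
  PetscErrorCode ierr;
  /* ... create and assemble A (AIJ) and B (dense) ... */
  /* C = A * B */
  ierr = MatProductCreate(A, B, NULL, &C);CHKERRQ(ierr);
  ierr = MatProductSetType(C, MATPRODUCT_AB);CHKERRQ(ierr);
  ierr = MatProductSetFromOptions(C);CHKERRQ(ierr);
  ierr = MatProductSymbolic(C);CHKERRQ(ierr);
  ierr = MatProductNumeric(C);CHKERRQ(ierr);
  /* D = A^T * B, with the same A and B */
  ierr = MatProductCreate(A, B, NULL, &D);CHKERRQ(ierr);
  ierr = MatProductSetType(D, MATPRODUCT_AtB);CHKERRQ(ierr);
  ierr = MatProductSetFromOptions(D);CHKERRQ(ierr);
  ierr = MatProductSymbolic(D);CHKERRQ(ierr);
  ierr = MatProductNumeric(D);CHKERRQ(ierr);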
With this commit:
https://gitlab.com/petsc/petsc/-/commit/03d8bd538039defc2fcc3e37d523735c4aaceba0
and the following command:
$ mpirun -n 4 src/ksp/ksp/tutorials/ex76 -ksp_converged_reason -pc_type hpddm 
-pc_hpddm_levels_1_eps_nev 20 -ksp_type preonly -mat_type aij -load_dir 
${DATAFILESPATH}/matrices/hpddm/GENEO -rhs 2 -pc_hpddm_coarse_correction 
balanced -C_input_mattransposematmult -D_output_mattransposematmult
I’m seeing that C is nonzero, but D is full of zeros.
Mat Object: 4 MPI processes
  type: mpidense
5.7098316584361917e-08 1.0159399260517841e-07
1.5812349976211856e-07 2.0688121715350138e-07
2.4887556933361981e-08 4.8111092300772958e-08
1.4606298643602107e-07 1.7213611729839211e-07
[…]
Mat Object: 4 MPI processes
  type: mpidense
0.0000000000000000e+00 0.0000000000000000e+00
0.0000000000000000e+00 0.0000000000000000e+00
0.0000000000000000e+00 0.0000000000000000e+00
0.0000000000000000e+00 0.0000000000000000e+00
[…]

If one switches to a MatType that has no MatProduct_AtB implementation for B
of type MPIDense (reminder: in that case, the product is computed column by
column; see the sketch after the output below), e.g., -mat_type sbaij, one
gets the expected result.
Mat Object: 4 MPI processes
  type: mpidense
7.2003197398135299e-01 9.5191869895699011e-01
6.1793966541680234e-02 9.3884397585488877e-01
1.0022337823233585e-02 2.4653068080134588e-01
1.4463931936094099e-01 8.6111517670701687e-01
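The column-by-column fallback mentioned above amounts to, schematically (illustrative code, not PETSc's actual fallback; B and C dense):

  PetscInt       j, N;
  Vec            bj, cj;
  PetscErrorCode ierr;
  ierr = MatGetSize(B, NULL, &N);CHKERRQ(ierr);
  for (j = 0; j < N; ++j) { /* C = A^T * B, one column at a time */
    ierr = MatDenseGetColumnVecRead(B, j, &bj);CHKERRQ(ierr);
    ierr = MatDenseGetColumnVecWrite(C, j, &cj);CHKERRQ(ierr);
    ierr = MatMultTranspose(A, bj, cj);CHKERRQ(ierr); /* c_j = A^T b_j */
    ierr = MatDenseRestoreColumnVecWrite(C, j, &cj);CHKERRQ(ierr);
    ierr = MatDenseRestoreColumnVecRead(B, j, &bj);CHKERRQ(ierr);
  }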

Is there a bug somewhere in the MatAIJ implementation, or am I doing
something that is not allowed by the MatProduct() machinery?

Thanks,
Pierre

(*) https://gitlab.com/petsc/petsc/-/merge_requests/3717


--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
