Hi Mark,

Yes, it doesn't seem to be usable. Unfortunately, we're aiming for 3D, so direct solvers are not a viable option, PETSc's ILU is not parallel, and we can't use HYPRE (complex arithmetic).
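
One thing we have not tried yet (just a sketch here, untested on our problem) is ILU applied per subdomain under block Jacobi or ASM, which does run in parallel; the overlap and fill level would need tuning:

-ksp_type fgmres \
-pc_type asm -pc_asm_overlap 1 \
-sub_pc_type ilu -sub_pc_factor_levels 2

(-pc_type bjacobi with the same -sub_pc_type ilu would be the cheaper, no-overlap variant.)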

Thibaut

On 01/11/2018 20:42, Mark Adams wrote:


On Wed, Oct 31, 2018 at 8:11 PM Smith, Barry F. <bsm...@mcs.anl.gov> wrote:



    > On Oct 31, 2018, at 5:39 PM, Appel, Thibaut via petsc-users <petsc-users@mcs.anl.gov> wrote:
    >
    > Well yes, naturally, for the residual, but adding
    -ksp_monitor_true_residual just gives
    >
    >   0 KSP unpreconditioned resid norm 3.583290589961e+00 true
    resid norm 3.583290589961e+00 ||r(i)||/||b|| 1.000000000000e+00
    >   1 KSP unpreconditioned resid norm 0.000000000000e+00 true
    resid norm 3.583290589961e+00 ||r(i)||/||b|| 1.000000000000e+00
    > Linear solve converged due to CONVERGED_ATOL iterations 1

       Very bad stuff is happening in the preconditioner. The
    preconditioner must have a null space (which it shouldn't have if
    it is to be a useful preconditioner).
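
       One way to see what each split's inner solver is actually doing
    (a sketch; the fieldsplit_0_/1_/2_ prefixes assume the default
    split naming shown in your -ksp_view_pre output) is to monitor the
    inner solves and check that no options are left unused:

    -ksp_view -options_left \
    -fieldsplit_0_ksp_monitor -fieldsplit_0_ksp_converged_reason \
    -fieldsplit_1_ksp_monitor -fieldsplit_1_ksp_converged_reason \
    -fieldsplit_2_ksp_monitor -fieldsplit_2_ksp_converged_reason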


Yeah, you are far away from an optimal preconditioner for this system. Low-frequency (indefinite) Helmholtz is very, very hard. Now, something very bad is going on here, but even if you fix it, standard AMG is not good for these problems. I would use direct solvers or grind away at it with ILU.
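
For the sizes where a direct solve is still affordable, that is just a couple of options (a sketch, assuming a PETSc build configured with --download-mumps --download-scalapack; MUMPS handles complex arithmetic, and older releases spell the last option -pc_factor_mat_solver_package):

-ksp_type preonly \
-pc_type lu \
-pc_factor_mat_solver_type mumps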


    >
    > Mark, if that helps: a Poisson equation is used for the
    pressure, so the Helmholtz operator is the same as for the
    velocity in the interior.
    >
    > Thibaut
    >
    >> On Oct 31, 2018, at 21:05, Mark Adams <mfad...@lbl.gov> wrote:
    >>
    >> These are indefinite (bad) Helmholtz problems. Right?
    >>
    >> On Wed, Oct 31, 2018 at 2:38 PM Matthew Knepley <knep...@gmail.com> wrote:
    >> On Wed, Oct 31, 2018 at 2:13 PM Thibaut Appel <t.appe...@imperial.ac.uk> wrote:
    >> Hi Mark, Matthew,
    >>
    >> Thanks for taking the time.
    >>
    >> 1) You're not suggesting having -fieldsplit_X_ksp_type fgmres
    for each field, are you?
    >>
    >> 2) No, the matrix has pressure in one of the fields. Here it's
    a 2D problem (but we're also doing 3D); the unknowns are (p,u,v)
    and those are my 3 fields. We are dealing with subsonic/transonic
    flows, so it is indeed convection dominated.
    >>
    >> 3) We are in frequency domain with respect to time, i.e.
    \partial{phi}/\partial{t} = -i*omega*phi.
    >>
    >> 4) Hypre is unfortunately not an option since we are in complex
    arithmetic.
    >>
    >>
    >>
    >>> I'm not sure about "-fieldsplit_pc_type gamg". GAMG should work
    on one block, and hence be a sub-PC. I'm not up on fieldsplit syntax.
    >> According to the online manual page, this syntax applies the
    option to all the defined fields?
    >>
    >>
    >>
    >>> Mark is correct. I wanted you to change the smoother. He shows
    how to change it to Richardson (make sure you add the self-scale
    option), which is probably the best choice.
    >>>
    >>>   Thanks,
    >>>
    >>>      Matt
    >>
    >> You did tell me to set it to GMRES, if I'm not mistaken; that's
    why I tried "-fieldsplit_mg_levels_ksp_type gmres" (mentioned in
    the email). Also, it wasn't clear whether these options should be
    applied to each block or to the whole system, as the online manual
    pages and the PDF manual barely mention smoothers or how to
    manipulate MG objects through KSP/PC, especially with PCFIELDSPLIT,
    where examples are scarce.
    >>
    >> From what I can gather from your suggestions, I tried (lines
    with X are repeated for X={0,1,2})
    >>
    >> This looks good. How can an identically zero vector produce a 0
    residual? You should always monitor with
    >>
    >>   -ksp_monitor_true_residual.
    >>
    >>    Thanks,
    >>
    >>     Matt
    >> -ksp_view_pre -ksp_monitor -ksp_converged_reason \
    >> -ksp_type fgmres -ksp_rtol 1.0e-8 \
    >> -pc_type fieldsplit \
    >> -pc_fieldsplit_type multiplicative \
    >> -pc_fieldsplit_block_size 3 \
    >> -pc_fieldsplit_0_fields 0 \
    >> -pc_fieldsplit_1_fields 1 \
    >> -pc_fieldsplit_2_fields 2 \
    >> -fieldsplit_X_pc_type gamg \
    >> -fieldsplit_X_ksp_type gmres \
    >> -fieldsplit_X_ksp_rtol 1e-10 \
    >> -fieldsplit_X_mg_levels_ksp_type richardson \
    >> -fieldsplit_X_mg_levels_pc_type sor \
    >> -fieldsplit_X_pc_gamg_agg_nsmooths 0 \
    >> -fieldsplit_X_mg_levels_ksp_richardson_self_scale \
    >> -log_view
    >>
    >> which yields
    >>
    >> KSP Object: 1 MPI processes
    >>   type: fgmres
    >>     restart=30, using Classical (unmodified) Gram-Schmidt
    Orthogonalization with no iterative refinement
    >>     happy breakdown tolerance 1e-30
    >>   maximum iterations=10000, initial guess is zero
    >>   tolerances:  relative=1e-08, absolute=1e-50, divergence=10000.
    >>   left preconditioning
    >>   using DEFAULT norm type for convergence test
    >> PC Object: 1 MPI processes
    >>   type: fieldsplit
    >>   PC has not been set up so information may be incomplete
    >>     FieldSplit with MULTIPLICATIVE composition: total splits =
    3, blocksize = 3
    >>     Solver info for each split is in the following KSP objects:
    >>   Split number 0 Fields  0
    >>   KSP Object: (fieldsplit_0_) 1 MPI processes
    >>     type: preonly
    >>     maximum iterations=10000, initial guess is zero
    >>     tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
    >>     left preconditioning
    >>     using DEFAULT norm type for convergence test
    >>   PC Object: (fieldsplit_0_) 1 MPI processes
    >>     type not yet set
    >>     PC has not been set up so information may be incomplete
    >>   Split number 1 Fields  1
    >>   KSP Object: (fieldsplit_1_) 1 MPI processes
    >>     type: preonly
    >>     maximum iterations=10000, initial guess is zero
    >>     tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
    >>     left preconditioning
    >>     using DEFAULT norm type for convergence test
    >>   PC Object: (fieldsplit_1_) 1 MPI processes
    >>     type not yet set
    >>     PC has not been set up so information may be incomplete
    >>   Split number 2 Fields  2
    >>   KSP Object: (fieldsplit_2_) 1 MPI processes
    >>     type: preonly
    >>     maximum iterations=10000, initial guess is zero
    >>     tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
    >>     left preconditioning
    >>     using DEFAULT norm type for convergence test
    >>   PC Object: (fieldsplit_2_) 1 MPI processes
    >>     type not yet set
    >>     PC has not been set up so information may be incomplete
    >>   linear system matrix = precond matrix:
    >>   Mat Object: 1 MPI processes
    >>     type: seqaij
    >>     rows=52500, cols=52500
    >>     total: nonzeros=1127079, allocated nonzeros=1128624
    >>     total number of mallocs used during MatSetValues calls =0
    >>       not using I-node routines
    >>   0 KSP Residual norm 3.583290589961e+00
    >>   1 KSP Residual norm 0.000000000000e+00
    >> Linear solve converged due to CONVERGED_ATOL iterations 1
    >>
    >> so something must not be set correctly. The solution is
    identically zero everywhere.
    >>
    >> Is that option list what you meant? Could you let me know
    what should be corrected?
    >>
    >>
    >>
    >> Thanks for your support,
    >>
    >>
    >>
    >> Thibaut
    >>
    >>
    >>
    >> On 31/10/2018 16:43, Mark Adams wrote:
    >>>
    >>>
    >>> On Tue, Oct 30, 2018 at 5:23 PM Appel, Thibaut via petsc-users <petsc-users@mcs.anl.gov> wrote:
    >>> Dear users,
    >>>
    >>> Following a suggestion from Matthew Knepley, I’ve been trying
    to apply fieldsplit/GAMG to my set of PDEs, but I’m still
    encountering issues despite various tests: pc_gamg simply won’t start.
    >>> Note that direct solvers always yield the correct, physical
    result.
    >>> Removing the fieldsplit to focus on the GAMG part and trying to
    solve the linear system on a modest-size problem still gives, with
    >>>
    >>> '-ksp_monitor -ksp_rtol 1.0e-10 -ksp_gmres_restart 300
    -ksp_type gmres -pc_type gamg'
    >>>
    >>> [3]PETSC ERROR: --------------------- Error Message
    --------------------------------------------------------------
    >>> [3]PETSC ERROR: Petsc has generated inconsistent data
    >>> [3]PETSC ERROR: Have un-symmetric graph (apparently). Use
    '-(null)pc_gamg_sym_graph true' to symetrize the graph or
    '-(null)pc_gamg_threshold -1' if the matrix is structurally symmetric.
    >>>
    >>> And since then, after adding '-pc_gamg_sym_graph true' I have
    been getting
    >>> [0]PETSC ERROR: --------------------- Error Message
    --------------------------------------------------------------
    >>> [0]PETSC ERROR: Petsc has generated inconsistent data
    >>> [0]PETSC ERROR: Eigen estimator failed: DIVERGED_NANORINF at
    iteration
    >>>
    >>> -ksp_chebyshev_esteig_noisy 0/1 does not change anything
    >>>
    >>> Knowing that the Chebyshev eigenvalue estimator needs a positive
    spectrum, I tried ‘-mg_levels_ksp_type gmres’, but the iterations
    would just go on endlessly.
    >>>
    >>> This is OK, but you need to use '-ksp_type fgmres' (this could
    be why it is failing ...).
    >>>
    >>> It looks like your matrix is 1) just the velocity field and 2)
    very unsymmetric (e.g., convection dominated). I would start with
    ‘-mg_levels_ksp_type richardson -mg_levels_pc_type sor’.
    >>>
    >>> I would also start with unsmoothed aggregation:
    '-pc_gamg_agg_nsmooths 0'
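    >>>
    >>> Putting those suggestions together for the single-matrix (no
    fieldsplit) run would look roughly like this (a sketch; the
    -pc_gamg_sym_graph flag is the one from your earlier error message):
    >>>
    >>> -ksp_type fgmres -pc_type gamg \
    >>> -pc_gamg_agg_nsmooths 0 -pc_gamg_sym_graph true \
    >>> -mg_levels_ksp_type richardson \
    >>> -mg_levels_ksp_richardson_self_scale \
    >>> -mg_levels_pc_type sor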
    >>>
    >>>
    >>> It seems that I do indeed have eigenvalues of rather high
    magnitude in the spectrum of my operator, without being able to
    determine the reason.
    >>> The corresponding eigenvectors look like small artifacts at the
    wall-inflow or wall-outflow corners and are zero everywhere else,
    but I do not know how to interpret this.
    >>> The equations are the time-harmonic linearized Navier-Stokes
    equations with a forcing applied; there is no time-marching.
    >>>
    >>> You mean you are in frequency domain?
    >>>
    >>>
    >>> The matrix is formed with an MPIAIJ type. The formulation is
    incompressible, in complex arithmetic, and the 2D physical domain
    is mapped to a logically rectangular,
    >>>
    >>> This kind of messes up the null space that AMG depends on, but
    AMG theory is gone for NS anyway.
    >>>
    >>> regular collocated grid with a high-order finite difference
    method.
    >>> I determine the ownership of the rows/degrees of freedom of
    the matrix with PetscSplitOwnership and I’m not using DMDA.
    >>>
    >>> Our iterative solvers are probably not going to work well on
    this, but you should also test hypre (-pc_type hypre -pc_hypre_type
    boomeramg). You need to configure PETSc to download hypre.
    >>>
    >>> Mark
    >>>
    >>>
    >>> The Fortran application code is memory-leak free and has
    undergone a strict verification/validation procedure for different
    variations of the PDEs.
    >>>
    >>> If there’s any problem with the matrix, what could help with the
    diagnosis? At this point I’m running out of ideas, so I would
    really appreciate additional suggestions and discussion.
    >>>
    >>> Thanks for your continued support,
    >>>
    >>>
    >>> Thibaut
    >>
    >>
    >> --
    >> What most experimenters take for granted before they begin
    their experiments is infinitely more interesting than any results
    to which their experiments lead.
    >> -- Norbert Wiener
    >>
    >> https://www.cse.buffalo.edu/~knepley/
    >
