Jed,

     This seems wrong. The routine below is called in KSPCreate() and seems to imply 
that all KSP methods support the no-norm option with right preconditioning (or left, 
for that matter). But CG doesn't support right preconditioning, period, and FGMRES 
does not support left at all. Shouldn't those two lines be removed (and perhaps added 
in the create routines of the particular KSP methods that do support them)?

PetscErrorCode KSPNormSupportTableReset_Private(KSP ksp)
{
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = PetscMemzero(ksp->normsupporttable,sizeof(ksp->normsupporttable));CHKERRQ(ierr);
  ierr = KSPSetSupportedNorm(ksp,KSP_NORM_NONE,PC_LEFT,1);CHKERRQ(ierr);
  ierr = KSPSetSupportedNorm(ksp,KSP_NORM_NONE,PC_RIGHT,1);CHKERRQ(ierr);
  ksp->pc_side  = ksp->pc_side_set;
  ksp->normtype = ksp->normtype_set;
  PetscFunctionReturn(0);
}
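
     For concreteness, something like the following in each implementation's create 
routine is what I have in mind, using CG as the example. This is just a sketch; the 
include, the routine name, and the priority values are my guesses, not the actual 
source.

#include <petsc/private/kspimpl.h>   /* declares KSPSetSupportedNorm() */

/* Hypothetical sketch (not the real KSPCreate_CG()): register only the
   norm/side combinations CG actually supports, so a request for right
   preconditioning has no table entry and can be rejected up front. */
PetscErrorCode KSPRegisterNorms_CG_Sketch(KSP ksp)
{
  PetscErrorCode ierr;

  PetscFunctionBegin;
  /* no-norm convergence test only with left preconditioning */
  ierr = KSPSetSupportedNorm(ksp,KSP_NORM_NONE,PC_LEFT,1);CHKERRQ(ierr);
  /* the norms CG can compute, all with left preconditioning;
     the priority values here are illustrative */
  ierr = KSPSetSupportedNorm(ksp,KSP_NORM_PRECONDITIONED,PC_LEFT,2);CHKERRQ(ierr);
  ierr = KSPSetSupportedNorm(ksp,KSP_NORM_UNPRECONDITIONED,PC_LEFT,2);CHKERRQ(ierr);
  ierr = KSPSetSupportedNorm(ksp,KSP_NORM_NATURAL,PC_LEFT,3);CHKERRQ(ierr);
  /* note: no PC_RIGHT entries at all */
  PetscFunctionReturn(0);
}

     Then asking for CG with right preconditioning would presumably fail when the 
norm table is consulted, instead of silently running with the NONE norm as in 
Fande's output below.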



> On Mar 8, 2017, at 11:19 AM, Matthew Knepley <knep...@gmail.com> wrote:
> 
> On Wed, Mar 8, 2017 at 10:47 AM, Kong, Fande <fande.k...@inl.gov> wrote:
> Thanks Barry,
> 
> We are using "KSPSetPCSide(ksp, pcside)" in the code.  I just tried 
> "-ksp_pc_side right", and petsc did not error out. 
> 
> I would like to understand why CG does not work with right preconditioning. 
> Does right preconditioning not make sense mathematically?
> 
> cd src/snes/examples/tutorials
> knepley/feature-plasma-example 
> $:/PETSc3/petsc/petsc-dev/src/snes/examples/tutorials$ ./ex5 -ksp_view 
> -ksp_type cg -ksp_pc_side right -ksp_error_if_not_converged
> KSP Object: 1 MPI processes
>   type: cg
>   maximum iterations=10000, initial guess is zero
>   tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>   right preconditioning
>   using NONE norm type for convergence test
> PC Object: 1 MPI processes
>   type: ilu
>     out-of-place factorization
>     0 levels of fill
>     tolerance for zero pivot 2.22045e-14
>     matrix ordering: natural
>     factor fill ratio given 1., needed 1.
>       Factored matrix follows:
>         Mat Object: 1 MPI processes
>           type: seqaij
>           rows=16, cols=16
>           package used to perform factorization: petsc
>           total: nonzeros=64, allocated nonzeros=64
>           total number of mallocs used during MatSetValues calls =0
>             not using I-node routines
>   linear system matrix = precond matrix:
>   Mat Object: 1 MPI processes
>     type: seqaij
>     rows=16, cols=16
>     total: nonzeros=64, allocated nonzeros=64
>     total number of mallocs used during MatSetValues calls =0
>       not using I-node routines
> [0]PETSC ERROR: --------------------- Error Message 
> --------------------------------------------------------------
> [0]PETSC ERROR:   
> [0]PETSC ERROR: KSPSolve has not converged
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for 
> trouble shooting.
> [0]PETSC ERROR: Petsc Development GIT revision: v3.7.5-3127-ge9f6087  GIT 
> Date: 2017-02-11 13:06:34 -0600
> [0]PETSC ERROR: ./ex5 on a arch-c-exodus-master named 
> MATTHEW-KNEPLEYs-MacBook-Air-2.local by knepley Wed Mar  8 11:17:43 2017
> [0]PETSC ERROR: Configure options --PETSC_ARCH=arch-c-exodus-master 
> --download-chaco --download-cmake --download-ctetgen --download-eigen 
> --download-exodusii --download-gmp --download-hdf5 --download-metis 
> --download-mpfr --download-mpich --download-netcdf --download-p4est 
> --download-parmetis --download-pragmatic --download-triangle --useThreads=1 
> --with-cc="/Users/knepley/MacSoftware/bin/ccache gcc -Qunused-arguments" 
> --with-cxx="/Users/knepley/MacSoftware/bin/ccache g++ -Qunused-arguments" 
> --with-fc="/Users/knepley/MacSoftware/bin/ccache gfortran" 
> --with-shared-libraries
> [0]PETSC ERROR: #1 KSPSolve() line 847 in 
> /PETSc3/petsc/petsc-dev/src/ksp/ksp/interface/itfunc.c
> [0]PETSC ERROR: #2 SNESSolve_NEWTONLS() line 224 in 
> /PETSc3/petsc/petsc-dev/src/snes/impls/ls/ls.c
> [0]PETSC ERROR: #3 SNESSolve() line 3967 in 
> /PETSc3/petsc/petsc-dev/src/snes/interface/snes.c
> [0]PETSC ERROR: #4 main() line 187 in 
> /PETSc3/petsc/petsc-dev/src/snes/examples/tutorials/ex5.c
> [0]PETSC ERROR: PETSc Option Table entries:
> [0]PETSC ERROR: -ksp_error_if_not_converged
> [0]PETSC ERROR: -ksp_pc_side right
> [0]PETSC ERROR: -ksp_type cg
> [0]PETSC ERROR: -ksp_view
> [0]PETSC ERROR: -malloc_test
> [0]PETSC ERROR: ----------------End of Error Message -------send entire error 
> message to petsc-ma...@mcs.anl.gov----------
> 
> So we are not getting an error.
> 
>   Matt
>  
> 
> Fande,
> 
> On Wed, Mar 8, 2017 at 9:33 AM, Barry Smith <bsm...@mcs.anl.gov> wrote:
> 
>   Please tell us how you got this output.
> 
>   PETSc CG doesn't even implement right preconditioning. If you ask for it, it 
> should error out. CG supports the no-norm option only with left preconditioning.
> 
>    Barry
> 
> > On Mar 8, 2017, at 10:26 AM, Kong, Fande <fande.k...@inl.gov> wrote:
> >
> > Hi All,
> >
> > The NONE norm type is supported only when CG is used with a right 
> > preconditioner. Any reason for this?
> >
> >
> >
> > 0 Nonlinear |R| = 1.732051e+00
> >       0 Linear |R| = 0.000000e+00
> >       1 Linear |R| = 0.000000e+00
> >       2 Linear |R| = 0.000000e+00
> >       3 Linear |R| = 0.000000e+00
> >       4 Linear |R| = 0.000000e+00
> >       5 Linear |R| = 0.000000e+00
> >       6 Linear |R| = 0.000000e+00
> >  1 Nonlinear |R| = 1.769225e-08
> >       0 Linear |R| = 0.000000e+00
> >       1 Linear |R| = 0.000000e+00
> >       2 Linear |R| = 0.000000e+00
> >       3 Linear |R| = 0.000000e+00
> >       4 Linear |R| = 0.000000e+00
> >       5 Linear |R| = 0.000000e+00
> >       6 Linear |R| = 0.000000e+00
> >       7 Linear |R| = 0.000000e+00
> >       8 Linear |R| = 0.000000e+00
> >       9 Linear |R| = 0.000000e+00
> >      10 Linear |R| = 0.000000e+00
> >  2 Nonlinear |R| = 0.000000e+00
> > SNES Object: 1 MPI processes
> >   type: newtonls
> >   maximum iterations=50, maximum function evaluations=10000
> >   tolerances: relative=1e-08, absolute=1e-50, solution=1e-50
> >   total number of linear solver iterations=18
> >   total number of function evaluations=23
> >   norm schedule ALWAYS
> >   SNESLineSearch Object:   1 MPI processes
> >     type: bt
> >       interpolation: cubic
> >       alpha=1.000000e-04
> >     maxstep=1.000000e+08, minlambda=1.000000e-12
> >     tolerances: relative=1.000000e-08, absolute=1.000000e-15, 
> > lambda=1.000000e-08
> >     maximum iterations=40
> >   KSP Object:   1 MPI processes
> >     type: cg
> >     maximum iterations=10000, initial guess is zero
> >     tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
> >     right preconditioning
> >     using NONE norm type for convergence test
> >   PC Object:   1 MPI processes
> >     type: hypre
> >       HYPRE BoomerAMG preconditioning
> >       HYPRE BoomerAMG: Cycle type V
> >       HYPRE BoomerAMG: Maximum number of levels 25
> >       HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1
> >       HYPRE BoomerAMG: Convergence tolerance PER hypre call 0.
> >       HYPRE BoomerAMG: Threshold for strong coupling 0.25
> >       HYPRE BoomerAMG: Interpolation truncation factor 0.
> >       HYPRE BoomerAMG: Interpolation: max elements per row 0
> >       HYPRE BoomerAMG: Number of levels of aggressive coarsening 0
> >       HYPRE BoomerAMG: Number of paths for aggressive coarsening 1
> >       HYPRE BoomerAMG: Maximum row sums 0.9
> >       HYPRE BoomerAMG: Sweeps down         1
> >       HYPRE BoomerAMG: Sweeps up           1
> >       HYPRE BoomerAMG: Sweeps on coarse    1
> >       HYPRE BoomerAMG: Relax down          symmetric-SOR/Jacobi
> >       HYPRE BoomerAMG: Relax up            symmetric-SOR/Jacobi
> >       HYPRE BoomerAMG: Relax on coarse     Gaussian-elimination
> >       HYPRE BoomerAMG: Relax weight  (all)      1.
> >       HYPRE BoomerAMG: Outer relax weight (all) 1.
> >       HYPRE BoomerAMG: Using CF-relaxation
> >       HYPRE BoomerAMG: Not using more complex smoothers.
> >       HYPRE BoomerAMG: Measure type        local
> >       HYPRE BoomerAMG: Coarsen type        Falgout
> >       HYPRE BoomerAMG: Interpolation type  classical
> >     linear system matrix followed by preconditioner matrix:
> >     Mat Object:     1 MPI processes
> >       type: mffd
> >       rows=9, cols=9
> >         Matrix-free approximation:
> >           err=1.49012e-08 (relative error in function evaluation)
> >           Using wp compute h routine
> >               Does not compute normU
> >     Mat Object:    ()     1 MPI processes
> >       type: seqaij
> >       rows=9, cols=9
> >       total: nonzeros=49, allocated nonzeros=49
> >       total number of mallocs used during MatSetValues calls =0
> >         not using I-node routines
> >
> > Fande,
> >
> 
> 
> 
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
