It is not running an extra KSP iteration. This "extra" matmult is normal and
occurs in many of the SNESLineSearchApply_* functions, for example,
https://petsc.org/release/src/snes/linesearch/impls/bt/linesearchbt.c.html#SNESLineSearchApply_BT
It is used to decide if the Newton step results
Hello Barry,
Thanks for your reply. The monitor options are fine. I actually meant that my
modification of SNES tutorial ex1f.F90 does not work and shows some
unexpected behavior. I basically wanted to test whether I can use a shell
matrix as my Jacobian (code is here
How do I see a difference? What does "hence ruin my previous converged KSP
result" mean? A different answer at the end of the KSP solve?
$ ./joe > joe.basic
~/Src/petsc/src/ksp/ksp/tutorials (barry/2023-09-15/fix-log-pcmpi=)
arch-fix-log-pcmpi
$ ./joe -ksp_monitor -ksp_converged_reason
For a bit of assistance, you can use DMComposite and DMRedundantCreate; see
src/snes/tutorials/ex21.c and ex22.c.
Note that when computing redundantly, it's critical that the computation be
deterministic (i.e., not using atomics or randomness without matching seeds) so
the logic stays identical on every rank.
This is a problem with MPI programming and optimization; I am unaware of a
perfect solution.
Put the design variables into the solution vector on MPI rank 0, and when
computing your objective/gradient, send the values to all the MPI processes
where you use them. You can use a VecScatter to do this communication.
Dear PETSc team,
I am still trying to sort out my previous thread
https://lists.mcs.anl.gov/pipermail/petsc-users/2024-January/050079.html using
a minimal working example. However, I encountered another problem. Basically, I
combined the basic usage of the SNES solver with a shell matrix and tried to
Hi PETSc team,
I have a question regarding the parallel layout of a PETSc vector to be used in
TAO optimizers when the optimization variables split into ‘design’ and ‘state’
variables (e.g. PDE-constrained optimization, as in tao_lcl).
In our case, the state variable naturally