Sorry, this code has not been changed.
Barry
> On Sep 30, 2019, at 4:24 PM, Sajid Ali wrote:
>
> Hi PETSc-developers,
>
> Has this bug been fixed in the new 3.12 release?
>
> Thank You,
> Sajid Ali
> Applied Physics
> Northwestern University
> s-sajid-ali.github.io
There is no harm in having the GMRES there even if you use a direct solver
(for testing), so just leave the GMRES. Changing to preonly every time you try
LU is error-prone if you forget to change back.
Barry
> On May 22, 2019, at 2:45 PM, Sajid Ali via petsc-users wrote:
>
> Hi
> On May 22, 2019, at 2:26 PM, Sajid Ali via petsc-users wrote:
Hi Matt,
Thanks for the explanation. That makes sense, since I'd get reasonably close
convergence with preonly sometimes and not at other times, which was
confusing.
Anyway, since there's no pc_tol (analogous to ksp_rtol/ksp_atol, etc.), I'd
have to set the gamg preconditioner more carefully
Hi Hong,
Looks like this is my fault since I'm using -ksp_type preonly -pc_type
gamg. If I use the default ksp (GMRES) then everything works fine on a
smaller problem.
Just to confirm, -ksp_type preonly is to be used only with direct-solve
preconditioners like LU and Cholesky, right?
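For reference, those options correspond to roughly the following in the C API (a minimal sketch; assumes a KSP `ksp` has already been created and given its operators, error checking omitted):

```c
/* Sketch of -ksp_type preonly -pc_type lu in API form. */
PC pc;
KSPSetType(ksp, KSPPREONLY);   /* apply the preconditioner exactly once */
KSPGetPC(ksp, &pc);
PCSetType(pc, PCLU);           /* ...and that one application is a full LU solve */
```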
Thank You,
Sajid,
I have also tested the simpler problem you provided. The branch
hongzh/fix-computejacobian gives exactly the same numerical results as the
master branch does, but runs much faster. So the solver seems to work correctly.
To rule out possible compiler issues, you might want to try a
Hi Hong,
The solution has the right characteristics but it's off by many orders of
magnitude. It is ~3.5x faster than before.
Am I supposed to keep the TSRHSJacobianSetReuse function or not?
Thank You,
Sajid Ali
Applied Physics
Northwestern University
> On May 16, 2019, at 8:04 PM, Sajid Ali wrote:
>
> While there is a ~3.5X speedup, deleting the aforementioned 20 lines also
> leads the new version of PETSc to give the wrong solution (off by orders of
> magnitude for the same program).
Ok, sorry about this. Unfortunately this
Hi Sajid,
Can you please try this branch hongzh/fix-computejacobian quickly and see if it
makes a difference?
Thanks,
Hong (Mr.)
On May 16, 2019, at 8:04 PM, Sajid Ali via petsc-users <petsc-users@mcs.anl.gov> wrote:
While there is a ~3.5X speedup, deleting the aforementioned 20 lines also
leads the new version of PETSc to give the wrong solution (off by orders of
magnitude for the same program).
I tried switching over to the IFunction/IJacobian interface as per the
manual (page 146) which the following
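For context, the IFunction/IJacobian callbacks described in the manual have roughly the shape below, sketched for the linear case F(t, u, u_t) = u_t - A u = 0 (the function names and the assumption that A lives in the user context are illustrative, not from the original mail; error checking omitted):

```c
/* Residual of the implicit form: F = udot - A u. */
PetscErrorCode MyIFunction(TS ts, PetscReal t, Vec u, Vec udot, Vec F, void *ctx)
{
  Mat A = *(Mat *)ctx;
  MatMult(A, u, F);        /* F = A u        */
  VecAYPX(F, -1.0, udot);  /* F = udot - A u */
  return 0;
}

/* Jacobian of F w.r.t. u plus shift times the Jacobian w.r.t. udot:
   P = -A + shift * I. */
PetscErrorCode MyIJacobian(TS ts, PetscReal t, Vec u, Vec udot,
                           PetscReal shift, Mat J, Mat P, void *ctx)
{
  Mat A = *(Mat *)ctx;
  MatCopy(A, P, SAME_NONZERO_PATTERN); /* assumes P was preallocated like A */
  MatScale(P, -1.0);
  MatShift(P, shift);
  return 0;
}
```

These would be registered with TSSetIFunction() and TSSetIJacobian().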
Hi Barry,
Thanks a lot for pointing this out. I'm seeing a ~3X speedup in time!
Attached are the new log files. Does everything look right ?
Thank You,
Sajid Ali
Applied Physics
Northwestern University
Attachments: out_50, out_100 (binary data)
Sajid,
This is a huge, embarrassing performance bug in PETSc:
https://bitbucket.org/petsc/petsc/issues/293/refactoring-of-ts-handling-of-reuse-of
It is using 74 percent of the time to perform MatAXPY() on two large sparse
matrices, not knowing they have identical nonzero patterns and
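The point of the fix, as I understand it, is to pass the matrix-structure flag so that the merge of sparsity patterns is skipped; in sketch form (matrix names illustrative):

```c
/* Y = Y + a*X. With SAME_NONZERO_PATTERN, PETSc can update values in
   place instead of recomputing the merged sparsity pattern each call. */
MatAXPY(Y, a, X, SAME_NONZERO_PATTERN);

/* The conservative (and much slower) general form: */
MatAXPY(Y, a, X, DIFFERENT_NONZERO_PATTERN);
```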
Hi PETSc developers,
I have a question about TSComputeRHSJacobianConstant. If I create a TS (of
type linear) for a problem where the Jacobian does not change with time
(set with the aforementioned option) and run it for different numbers of
time steps, why does the time it takes to evaluate the
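For reference, the setup being asked about can be sketched as follows (a minimal sketch; assumes a TS `ts` and an assembled constant Jacobian `A` already exist, error checking omitted):

```c
/* Linear, time-invariant ODE u_t = A u: the RHS Jacobian is constant,
   so it should be evaluated once and reused across all time steps. */
TSSetProblemType(ts, TS_LINEAR);
TSSetRHSFunction(ts, NULL, TSComputeRHSFunctionLinear, NULL);
TSSetRHSJacobian(ts, A, A, TSComputeRHSJacobianConstant, NULL);
```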