On Sat, Sep 2, 2023 at 6:58 PM David Knezevic <david.kneze...@akselos.com>
wrote:

>> Hmm, it sounds like the convergence measure is bad. Maybe using a weighted
>> norm would be better?
>
>
> That's a good thought, I'd like to look into that idea too. Could you
> please give me some guidance on how to use a weighted norm in the
> convergence test? (Or are there any examples of doing that in the example
> suite?)
>

I guess the idea would be the following:

You have exponential terms in your equation. You get a relatively small
residual, but unacceptable error. This leads me to believe that some part of
the residual is swamping other parts you care about. This should be confirmed
by plotting the residual for one of these 0-iterate cases. To prevent this
imbalance, you would scale the residual to give more weight to the
underrepresented parts, probably with an inverse of the exponential. This is
just a more complicated form of the diagonal scaling commonly used to give
all fields the same scale.
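
As a rough sketch, a weighted test along these lines might look something like
the following (the weight vector w, say the pointwise inverse of the
exponential scale, and the WeightedCtx struct are just placeholder names; how
you actually build the weights depends on your formulation):

#include <petscsnes.h>

/* Placeholder context: a per-dof weight vector (e.g. the inverse of the
   exponential creep scale) and a scratch vector for the weighted residual. */
typedef struct {
  Vec w;
  Vec work;
} WeightedCtx;

static PetscErrorCode WeightedConvergedTest(SNES snes, PetscInt it, PetscReal xnorm,
                                            PetscReal snorm, PetscReal fnorm,
                                            SNESConvergedReason *reason, void *ctx)
{
  WeightedCtx *user = (WeightedCtx *)ctx;
  Vec          F;
  PetscReal    wfnorm;

  PetscFunctionBeginUser;
  /* Current nonlinear residual */
  PetscCall(SNESGetFunction(snes, &F, NULL, NULL));
  /* Weighted residual norm: || w .* F ||_2 */
  PetscCall(VecPointwiseMult(user->work, user->w, F));
  PetscCall(VecNorm(user->work, NORM_2, &wfnorm));
  /* Hand the weighted norm to the default test in place of fnorm */
  PetscCall(SNESConvergedDefault(snes, it, xnorm, snorm, wfnorm, reason, NULL));
  PetscFunctionReturn(PETSC_SUCCESS);
}

You would install this with SNESSetConvergenceTest(snes, WeightedConvergedTest,
&user, NULL), so that the tolerances apply to the weighted norm rather than the
raw one.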

  Thanks,

      Matt


> Thanks,
> David
>
> On Sat, Sep 2, 2023 at 5:54 PM Matthew Knepley <knep...@gmail.com> wrote:
>
>> On Sat, Sep 2, 2023 at 5:45 PM David Knezevic <david.kneze...@akselos.com>
>> wrote:
>>
>>> OK, thanks, I'll look into the custom convergence test.
>>>
>>>> I do not understand this comment. What do you mean by "inaccurate"?
>>>> Since we do not have the true solution, we usually say "inaccurate" for
>>>> large residual, but you already said that the residual is small. Why would
>>>> you want to do another iterate?
>>>
>>>
>>> I agree with your comments, but the specific case I'm considering is
>>> numerically very sensitive since it includes creep (which unfortunately
>>> involves large exponential terms), and that is the root cause of the issues
>>> I'm facing. Based on test cases with a known reference solution, we're
>>> finding that we get inaccurate results due to steps with "zero iterations".
>>> We can fix this by tightening the tolerance, but then we do an excessive
>>> number of iterations in other steps. So it seems to me that ensuring we do
>>> at least one iteration will help here, and that's what I wanted to try.
>>>
>>
>> Hmm, it sounds like the convergence measure is bad. Maybe using a
>> weighted norm would be better?
>>
>>   Thanks,
>>
>>      Matt
>>
>>
>>> Thanks again for your help.
>>>
>>> Best,
>>> David
>>>
>>> On Sat, Sep 2, 2023 at 3:23 PM Matthew Knepley <knep...@gmail.com>
>>> wrote:
>>>
>>>> On Sat, Sep 2, 2023 at 3:05 PM David Knezevic via petsc-users <
>>>> petsc-users@mcs.anl.gov> wrote:
>>>>
>>>>> Hi all,
>>>>>
>>>>> I'm using the SNES solver for a plasticity model, and the issue I've
>>>>> run into is that in some time steps the solver terminates after "NL step 0"
>>>>> since the initial residual (based on the solution from the previous time
>>>>> step) is below the specified tolerance.
>>>>>
>>>>> I gather that "NL step 0" only checks the residual and doesn't
>>>>> actually do a Newton update, and hence it seems that this is leading to
>>>>> inaccurate results in some cases.
>>>>>
>>>>
>>>> I do not understand this comment. What do you mean by "inaccurate"?
>>>> Since we do not have the true solution, we usually say "inaccurate" for
>>>> large residual, but you already said that the residual is small.
>>>> Why would you want to do another iterate?
>>>>
>>>>
>>>>> I can of course specify a smaller convergence tolerance to avoid this
>>>>> issue, but I've found it difficult to find a smaller tolerance that works
>>>>> well in all cases (e.g. it leads to too many iterations or
>>>>> non-convergence). So instead what I would like to do is ensure that the
>>>>> solver does at least 1 Newton iteration instead of terminating at "NL step
>>>>> 0". Is there a way to enforce this behavior, e.g. by skipping "NL step 0",
>>>>> or specifying a "minimum number of iterations"? I didn't see anything like
>>>>> this in the documentation, so I was wondering if there are any suggestions
>>>>> on how to proceed for this.
>>>>>
>>>>
>>>> The easiest way to do this is to write a custom convergence test that
>>>> looks like this
>>>>
>>>> /* Custom test: never declare convergence on iterate 0, then defer to the
>>>>    default test. */
>>>> PetscErrorCode MyConvergedTest(SNES snes, PetscInt it, PetscReal xnorm,
>>>>                                PetscReal snorm, PetscReal fnorm,
>>>>                                SNESConvergedReason *reason, void *dummy)
>>>> {
>>>>   PetscFunctionBeginUser;
>>>>   if (!it) {
>>>>     *reason = SNES_CONVERGED_ITERATING;
>>>>     PetscFunctionReturn(PETSC_SUCCESS);
>>>>   }
>>>>   PetscCall(SNESConvergedDefault(snes, it, xnorm, snorm, fnorm, reason, dummy));
>>>>   PetscFunctionReturn(PETSC_SUCCESS);
>>>> }
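>>>>
>>>> You would then install it with SNESSetConvergenceTest (a sketch;
>>>> MyConvergedTest is just the example name used above):
>>>>
>>>>   PetscCall(SNESSetConvergenceTest(snes, MyConvergedTest, NULL, NULL));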
>>>>
>>>>   Thanks,
>>>>
>>>>      Matt
>>>>
>>>>
>>>>> Thanks,
>>>>> David
>>>>>
>>>>
>>>>
>>>
>>
>>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
