Sorry to be pushy, but could anyone help me with this?

Thanks
Miguel

On Thu, Oct 23, 2014 at 10:46 AM, Miguel Angel Salazar de Troya <
salazardetr...@gmail.com> wrote:

> I decided to implement the continuous adjoint because it is clearer to me
> how to do it, and the gradient will still converge. I was trying to
> interpolate the solution, but for the adjoint analysis I need to do it
> after the forward simulation has finished, so I have saved the solution at
> every time step. My idea was to reset the TS to the time step immediately
> before the time I want to interpolate to:
>
> interpolate_timestep : the time I want to interpolate to
> previous_timestep : the last time step in the stored history that is
> smaller than interpolate_timestep
> Current_Sol : the saved solution at time step previous_timestep
>
> TSSetTime(ts, previous_timestep);    /* rewind the TS to the stored step */
> TSSetSolution(ts, Current_Sol);      /* restore the saved solution at that step */
> TSSetRetainStages(ts, PETSC_TRUE);   /* keep stage values so TSInterpolate can use them */
>
> TSInterpolate(ts, interpolate_timestep, X);
>
> Vector X should be close in value to Current_Sol, but it is nowhere close.
> I also compared it to the solution at the time step right after
> previous_timestep (so interpolate_timestep lies between the two), and it
> is not close to that either. I've read that TSInterpolate() has to be
> extended to support continuous adjoints.
>
> Thanks
> Miguel
>
> On Tue, Oct 21, 2014 at 6:04 PM, Miguel Angel Salazar de Troya <
> salazardetr...@gmail.com> wrote:
>
>> That might be a reasonable argument, but I'm not sure. This is one of the
>> papers that explains it; I'm re-reading it to see if I skipped any details:
>>
>> http://www.sciencedirect.com/science/article/pii/S0377042709006062
>>
>> Miguel
>>
>> On Mon, Oct 20, 2014 at 11:42 PM, Jed Brown <j...@jedbrown.org> wrote:
>>
>>> Miguel Angel Salazar de Troya <salazardetr...@gmail.com> writes:
>>>
>>> > Thanks for your response.
>>> >
>>> > I'm struggling with this problem because the literature is not clear to
>>> > me on how to calculate the discrete adjoint with adaptive time-stepping
>>> > algorithms. The papers cover the details when automatic differentiation
>>> > tools are used. They mention that because the time step depends on the
>>> > solution, it also depends on the parameters, so there are terms that
>>> > represent the derivative of the time step with respect to the
>>> > parameters. What is confusing is that they say these terms must be
>>> > removed, and I don't understand why. I'm planning to hard-code the
>>> > discrete adjoint problem (and use the TS if possible).
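>>> >
>>> > For instance, for a simple forward Euler step x_{n+1} = x_n +
>>> > dt_n*f(x_n, p) with an adaptive step dt_n = dt_n(x_n, p), the chain
>>> > rule gives an extra contribution f(x_n, p)*d(dt_n)/dp in dx_{n+1}/dp,
>>> > and (if I read them correctly) that is the kind of term they say to
>>> > drop.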
>>>
>>> Are they suggesting that the time step sizes for a given run should be
>>> frozen (at least locally) so that you have consistent gradients for a
>>> while?
>>>
>>


-- 
*Miguel Angel Salazar de Troya*
Graduate Research Assistant
Department of Mechanical Science and Engineering
University of Illinois at Urbana-Champaign
(217) 550-2360
salaz...@illinois.edu
