On Tue, Oct 17, 2017 at 8:51 AM, Jed Brown <j...@jedbrown.org> wrote:
> Matthew Knepley <knep...@gmail.com> writes:
>
> >> It's a recipe for confusion. Either the parameters are never passed
> >> explicitly or they are always passed explicitly and should not be stored
> >> redundantly in the context, thus perhaps enabling some sort of higher
> >> level analysis that dynamically changes parameter values. I would go
> >> with the former for now.
> >
> > I want to say again how much I dislike ad hoc memory layouts through
> > contexts and the like. We have a perfectly good layout descriptor (DM)
> > that should be used to describe data layout.
>
> This is an independent change from the adjoint work and I think it's out
> of scope right now. If we change it, it should go in its own PR. I
> don't like having one PR with a smattering of non-essential changes to
> old interfaces bundled together with new conventions and new
> functionality.

My understanding was that this discussion is not about a particular PR,
but about the interface we should have for sensitivity and optimal control.

> Putting the parameters in a vector would enable finite differencing of
> the RHSFunction to obtain its dependence on parameters. That might have
> high (non-scalable in number of parameters) cost, but would be less
> expensive than finite differencing an entire model. Consider the
> scenario of 100 parameters and 1e6 state variables (at each time step).
> If we have the ability to apply the transpose of the Jacobian with
> respect to model state, we could run an adjoint simulation and only need
> 100 RHSFunction evaluations per stage, rather than 100 solves.

I am not sure of the point of the above paragraph. Saying the point
rather than implying the point helps me.

  Matt

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ <http://www.caam.rice.edu/~mk51/>
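[Editor's note: Jed's cost argument above can be sketched numerically. The point is that once the adjoint variable lambda is available from the backward solve, the parameter gradient lambda^T (df/dp) needs only one extra RHSFunction evaluation per parameter, not one extra forward solve per parameter. The toy Python below is purely illustrative, not PETSc's API; the RHS `f(u, p) = -p0*u + p1*u**2` and all names are made up for the sketch.]

```python
import numpy as np

def rhs(u, p):
    # Toy RHS with the parameters gathered into a vector p,
    # as Jed proposes; any smooth f(u, p) would do here.
    return -p[0] * u + p[1] * u**2

def fd_param_gradient(u, p, lam, h=1e-7):
    """Approximate lambda^T (df/dp) with only len(p) extra RHS
    evaluations (forward differences in each parameter)."""
    f0 = rhs(u, p)
    g = np.empty_like(p)
    for j in range(p.size):
        pj = p.copy()
        pj[j] += h
        g[j] = lam @ ((rhs(u, pj) - f0) / h)  # difference in p_j only
    return g

u = np.linspace(0.1, 1.0, 6)   # "state" (stand-in for the 1e6 unknowns)
p = np.array([2.0, 0.5])       # parameter vector (stand-in for the 100)
lam = np.ones_like(u)          # adjoint variable from the backward solve

g = fd_param_gradient(u, p, lam)
# Analytic check: df/dp0 = -u, df/dp1 = u**2
exact = np.array([lam @ (-u), lam @ u**2])
print(np.allclose(g, exact, atol=1e-4))  # True
```

The contrast is between len(p) cheap RHS evaluations per stage (the loop above) and len(p) full forward model solves, which is what finite differencing the entire model would require.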