> Hey,
>
> thanks for elaborating (maybe you should copy your message into some
> vision.md file in your repo).  Your plan sounds cool, good luck with it!
> I would definitely use a plug and play ODE/PDE solver infrastructure.


I will write a cleaner version as a blog post, with some examples to walk
through the whole idea, once I have some more "polish" done.
 

> I agree that at the fast pace you're moving at the moment it is best to
> just have your own package(s).  However, if you code up more ODE/DAE
> solvers, it would be great if those are usable without all the machinery
> of DifferentialEquations.jl and if they eventually make it into ODE.jl (or
> some other "standard" ODE package).
>
>
They definitely could. In fact, you can port some of the RK methods just by 
implementing some of the tableaus from 
here: 
https://github.com/ChrisRackauckas/DifferentialEquations.jl/blob/master/src/ode/ODECoefficientTypes.jl.
 
I just checked and am surprised you guys don't have a DP78 method, but it
shouldn't take more than a few minutes to plop that tableau over there.
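To make concrete how little code a new method needs in this setup, here is a hypothetical sketch (the names and layout are illustrative, not the actual contents of ODECoefficientTypes.jl): an explicit RK method is fully specified by its Butcher tableau, so porting one really is just transcribing coefficients.

```julia
# Illustrative sketch: an explicit RK method as a plain tableau type.
immutable ExplicitRKTableau
    A::Matrix{Float64}  # stage coupling coefficients
    c::Vector{Float64}  # stage times
    α::Vector{Float64}  # weights combining the stages into the step
    order::Int          # order of accuracy
end

# The classic RK4 tableau, transcribed directly from the textbook table.
rk4 = ExplicitRKTableau(
    [0 0 0 0; 1/2 0 0 0; 0 1/2 0 0; 0 0 1 0],
    [0, 1/2, 1/2, 1],
    [1/6, 1/3, 1/3, 1/6],
    4)
```

A DP78 port would be the same exercise with a bigger `A` matrix (plus a second weight vector for the embedded error estimate).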
 

> A few more comments in-line below.  Cheers! Mauro 
>
> On Wed, 2016-06-08 at 20:40, Chris Rackauckas <rack...@gmail.com> wrote:
> > Thanks for chiming in! There's a lot to say about this topic, and I
> > think it's great! I felt like Julia's differential equation solvers
> > were behind where they should be, which was crazy to me since there
> > are so many state-of-the-art mathematical packages in other areas
> > (optimization, stats, etc.)
>
> I think we are missing someone who actually does research in this area. 
> As far as I can tell, all of us ODE.jl-ers are just "users", but not 
> researching new methods. 
>
> I think one of the key features here is that whilst JuMP is all fancy, 
> its components are pretty down to earth.  If you just want to optimize a 
> function without using JuMP's DSL then just use, e.g., NLopt.jl. 
> Maybe something to keep in mind when designing DifferentialEquations. 
>

I have thought about keeping the solvers as functions which could just be
called "naturally" without making a type, but the main issue is that
plugging into some function by passing all of the parts is nice only for
the simplest types of problems (ODEs, the Poisson equation, etc., and even
this falls apart once you start talking about Taylor methods and the
like). I actually started out with the FEM solvers having a solver with a
huge definition, and then having the type be a dispatch which plugs a
bunch of things in. However, that became hard to maintain. For one, I
didn't know you could splat kwargs (ahh! It was horrifying to pass every
kwarg!), but then I didn't have a good way to make the kwargs of the typed
dispatch match the kwargs of the solver (since the kwargs were defined for
two different functions), and any time there was a change I'd have to
propagate it through all the solvers. I finally had enough of that and
found your Parameters.jl package. Now the solvers just use a type and
unpack at the top. Very simple and easy to maintain. But if there's a
better way of handling that issue with the dispatches, then I can put them
back to how they were and pull them out to be separate solvers which could
be called directly.

Then if you want to pull out some of the new solving algorithms (I need
better words for differentiating here: the solver is the method which acts
on a type, while the solving algorithm is RK or Euler-Maruyama), that's
pretty much what I'm doing with @def, except not making them directly
callable due to kwarg/default-settings problems. It's just easier to have
all of the pre/post processing in one place, but that could change with
some smarter engineering.
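For readers following along, the "type plus unpack at the top" pattern looks roughly like this (a minimal sketch with hypothetical names, using Parameters.jl's `@with_kw` and `@unpack` macros; this is not the actual DifferentialEquations.jl code):

```julia
using Parameters

# All solver options live in one type with defaults, so the kwargs never
# have to be kept in sync across separate function signatures.
@with_kw type SolverOptions
    Δt::Float64 = 0.01
    T::Float64 = 1.0
    save_timeseries::Bool = true
end

# Each solver just unpacks what it needs at the top.
function euler_solve(f, u0, opts::SolverOptions)
    @unpack Δt, T, save_timeseries = opts
    u = u0
    us = save_timeseries ? [u0] : typeof(u0)[]
    for t in 0:Δt:T-Δt
        u = u + Δt*f(t, u)               # explicit Euler step
        save_timeseries && push!(us, u)
    end
    return u, us
end

# Changing a default or adding an option touches only SolverOptions.
u, us = euler_solve((t, u) -> -u, 1.0, SolverOptions(Δt = 0.001))
```

The design trade-off discussed above is exactly this: the options type makes maintenance easy, at the cost of the solver not being directly callable as a bare function with positional arguments.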
 

> > Once I had that structure, I wanted it all to be similar so that way
> > they could interact well. Even though they do currently use it, there
> > will be (S)PDE solvers which allow for using method of lines
> > discretizations which then just plug into (S)ODE solvers. And what
> > about all of those C/Fortran solvers?
>
> (Do you know https://github.com/luchr/ODEInterface.jl and TS of 
> https://github.com/JuliaParallel/PETSc.jl ?) 
>


Yes, those and Sundials are what I have in mind as methods the user can 
choose. I don't really know how to handle the dependency issue though: do 
you require these packages? Right now I require Plots and PyPlot. Is that 
necessary? Is requiring NLsolve necessary when it's only used for the
implicit methods? ForwardDiff is only used in the Rosenbrock methods 
(currently), should it be a dependency? Or should I detail a list of what 
methods need what dependencies? I haven't settled on this, and may open an 
issue.
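One option I've seen for this (a hypothetical sketch, not settled API): keep the heavy wrapper packages out of REQUIRE entirely and only touch them if the user explicitly asks for that backend, erroring with a clear message otherwise.

```julia
# Hypothetical sketch of keeping heavy backends optional: map algorithm
# symbols to the package they need, and check at solve time rather than
# making every wrapper a hard dependency.
const OPTIONAL_BACKENDS = Dict(:sundials     => :Sundials,
                               :odeinterface => :ODEInterface)

function check_backend(alg)
    pkg = get(OPTIONAL_BACKENDS, alg, nothing)
    if pkg != nothing && !isdefined(Main, pkg)
        error("alg=:$alg requires $pkg.jl: Pkg.add(\"$pkg\") and `using $pkg` first")
    end
end

check_backend(:euler)  # a native method: no extra dependency, no error
```

The same idea would apply to Plots/PyPlot (only needed for plotting), NLsolve (only the implicit methods), and ForwardDiff (only the Rosenbrock methods).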
 

> If you have special API requirements, then it would still be good to 
> hear them. 
>

I don't think I'll have requirements, but I'll have to do a matrix like 
Plots.jl does where it's like "these solvers do ...". I think using 
in-place updating (I haven't made a version for this yet though), being 
able to choose how often results are saved, and being able to switch
adaptivity on and off will be on the list.
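That feature matrix could even live in code, Plots.jl-style, so the docs table and the runtime checks can't drift apart. A hypothetical sketch (algorithm names and feature keys are illustrative):

```julia
# Hypothetical capability matrix, in the spirit of Plots.jl's backend
# table: record which solver honors which option so users and docs can
# query it programmatically.
const CAPABILITIES = Dict(
    :DP5      => Dict(:adaptive => true,  :inplace => true,  :dense => true),
    :Euler    => Dict(:adaptive => false, :inplace => true,  :dense => false),
    :sundials => Dict(:adaptive => true,  :inplace => true,  :dense => true),
)

supports(alg, feature) = get(get(CAPABILITIES, alg, Dict()), feature, false)

supports(:Euler, :adaptive)  # → false: fixed-step only
supports(:DP5, :dense)       # → true
```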

> Will the whole infrastructure for SDE solvers be usable for ODE/DAE
> solvers too?  With infrastructure I mean: adaptive step size and order
> selection, dense output, root finding/events, etc.

Yes and no. Yes in that you could make an SDE with no noise and then use
the SDE solvers (for science reasons). No in that they are completely
different methods and you would never want to do that in any practical 
case. I am still on the fence about whether there should just be an 
"OdeProblem" type that does this based on whether there is noise, but the 
issue here is that the default algorithms, tolerances, etc. are all 
different between them. Adaptivity (without requiring a priori estimates) 
is what I worked out in the last paper, and even though the final algorithm 
has pretty low overhead over ODEs, it's still non-zero and measurably there 
(and 100x more complex to get there, mucking up the algorithm). Order 
selection is very different too: for SDEs there are two relevant concepts 
of order and generally the strong order goes in increments of 0.5, with the 
extra number of calculations for an order change and the stability issues 
being completely different (with different orders for differing amounts of 
noise dominance as well). Dense output is a mystery waiting to be solved: I 
have some ideas (with issues) but whatever the answer is, it's likely to be 
complicated. I think that makes it clear that at some fundamental level 
they end up being very different, making it hard to apply any of the same 
methods.
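To make the relationship concrete: Euler-Maruyama is structurally just Euler plus a noise term, which is why "SDE with no noise" works in principle, but everything listed above (adaptivity, order selection, dense output) hinges on how the Brownian increments are handled. A minimal sketch for a scalar SDE du = f dt + g dW:

```julia
# Minimal scalar Euler-Maruyama sketch (strong order 0.5): drift f and
# diffusion g are functions of (t, u).
function euler_maruyama(f, g, u0, Δt, T)
    u = u0
    for t in 0:Δt:T-Δt
        ΔW = sqrt(Δt)*randn()             # Brownian increment ~ N(0, Δt)
        u = u + Δt*f(t, u) + g(t, u)*ΔW   # drift step + diffusion step
    end
    return u
end

# With g ≡ 0 this reduces to plain Euler — the "no noise" case above —
# but carrying the SDE machinery for a deterministic problem is pure
# overhead, which is why you'd never do it in practice.
u = euler_maruyama((t, u) -> -u, (t, u) -> 0.1u, 1.0, 0.001, 1.0)
```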

The weird thing is that this is not true for SPDEs. This is because the 
Davie-Gaines bound basically means you should use a Strong Order 0.25 
method (Euler-Maruyama), and then your best approximation tends to be the
simplest/easiest/most natural one, and so putting PDEs/SPDEs together is easy.
I am deep into an idea for changing that, but trying the idea out will
require a bit more infrastructure (it's quite an involved algorithm). That
will likely be back to being my main focus next fall for a month or so, and
if the idea finally works I may put more separation between the PDE and
SPDE methods, for the same reason as with ODEs/SDEs.
