Hi all,
This is an exciting topic; I hope it's still alive!

A common use case for me is automatically generating high-dimensional 
symbolic ODEs (photonic circuit non-linear coupled mode equations).

One thing I have found is that lambdify does not easily allow for 
efficiently re-using common subexpressions. I have cooked up a fairly dirty 
(though surprisingly useful) little function that identifies common 
subexpressions and then uses nested sympy.lambdify calls (along with 
function closures) to precompute and re-use them. 
You can find it here: https://gist.github.com/ntezak/e1922acdd790e265963e
For my use cases it easily gives me a 2x or 3x speedup despite the 
additional function-calling overhead.
I think Mathematica's Compile does similar things under the hood.
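For anyone curious, the basic idea can be sketched with sympy.cse plus 
nested lambdify calls and a closure. This is a minimal illustration of the 
pattern, not the actual code from the gist (make_func and its argument 
threading are my own names here):

```python
import sympy as sp

x, y = sp.symbols("x y")
# Two expressions sharing the costly subexpression sin(x + y)
exprs = [sp.sin(x + y) ** 2, sp.cos(x + y) * sp.sin(x + y)]

# cse() factors out shared subexpressions as (symbol, expr) pairs
replacements, reduced = sp.cse(exprs)

def make_func(replacements, reduced, args):
    # Lambdify each intermediate and each reduced expression, threading
    # the intermediates through as extra arguments so they are computed
    # exactly once per call.
    steps = []
    syms = list(args)
    for sym, sub in replacements:
        steps.append(sp.lambdify(syms, sub))
        syms.append(sym)
    finals = [sp.lambdify(syms, r) for r in reduced]

    def f(*vals):
        vals = list(vals)
        for g in steps:
            vals.append(g(*vals))  # evaluate each common subexpression once
        return [h(*vals) for h in finals]

    return f

f = make_func(replacements, reduced, (x, y))
```

Each intermediate is evaluated once and passed along, instead of being 
re-evaluated inside every expression that uses it.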

I think it would be most desirable if we could have methods to generate 
fast functions that operate in-place on numpy arrays (and perhaps even take 
an additional numpy "work" array to save on allocation inside the function).
Ideally, this would be based on c-function pointers that do not require the 
GIL and can also be passed to other compiled libraries.
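To make the in-place idea concrete, here is a hand-written sketch of what 
such generated code might look like for a single expression, using numpy 
ufuncs' `out=` parameter to avoid allocation. The names f_inplace, out, and 
work are purely illustrative; nothing like this exists in lambdify today:

```python
import numpy as np
import sympy as sp

x, y = sp.symbols("x y")
expr = sp.sin(x) * sp.exp(-y)

# Ordinary lambdify allocates fresh output arrays on every call.
f = sp.lambdify((x, y), expr, modules="numpy")

def f_inplace(xv, yv, out, work):
    # Hypothetical in-place variant: write into caller-provided `out`
    # and scratch `work` buffers instead of allocating internally.
    np.sin(xv, out=work)            # work = sin(x)
    np.multiply(yv, -1.0, out=out)  # out  = -y
    np.exp(out, out=out)            # out  = exp(-y)
    np.multiply(out, work, out=out) # out  = sin(x) * exp(-y)
    return out

xv = np.linspace(0.0, 1.0, 5)
yv = np.linspace(0.0, 1.0, 5)
out = np.empty_like(xv)
work = np.empty_like(xv)
f_inplace(xv, yv, out, work)
```

A code generator producing C with this calling convention (pointers in, 
pointers out, no allocation) would also be straightforward to call without 
the GIL from other compiled libraries.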

Best,

Nik

On Friday, October 30, 2015 at 2:24:06 PM UTC-7, Anthony Scopatz wrote:
>
> Hello All, 
>
> As many of you probably know, earlier this month Aaron joined my research 
> group at the University of South Carolina. He'll be working on adding / 
> improving SymPy's capabilities with respect to being an optimizing 
> compiler. 
>
> There are more details about this vision below, but right now we are in 
> the process of doing a literature review of sorts, and trying to figure out 
> what (SymPy-specific) is out there and what has been done already. Aaron et 
> al. have started putting together a page on the wiki 
> <https://github.com/sympy/sympy/wiki/Code-Generation-Notes> that compiles 
> some of this information. *If you know of anything that is not on this 
> page, we'd really appreciate it if you could let us know.*
>
> *We also would be grateful if you could let us know (publicly or 
> privately) about any use cases* that you might have for a symbolic 
> optimizing compiler. There are many examples where different folks have 
> done various pieces of this (chemreac, dengo, pydy, some stuff in pyne), 
> but these examples tend to be domain specific. This effort is supposed to 
> target a general scientific computing audience, and to do that we want to 
> have as many possible scenarios in mind at the outset.  
>
>  And of course, we'd love it if other folks dived in and helped us put 
> this thing together :).
>
> Thanks a million!
> Be Well
> Anthony
>
> Vision
> ------------
> Essentially, what we want to build is an optimizing compiler for symbolic 
> mathematical expressions in order to solve simple equations, ODEs, PDEs, 
> and perhaps more. This compiler should be able to produce very fast code, 
> though the compiler itself may be expensive.
>
> Ultimately, it is easy to imagine a number of backend targets, such as C, 
> Fortran, LLVM IR, Cython, pure Python, etc. It is also easy to imagine a 
> couple of meaningful frontends - SymPy objects (for starters) and LaTeX 
> (which could then be parsed into SymPy). 
>
> We are aiming to have an optimization pipeline that is highly customizable 
> (but with sensible defaults). This would allow folks to tailor the result 
> to their problem or add their own problem-specific optimizations. There are 
> likely different levels to this (such as on an expression vs at full 
> function scope). Some initial elements of this pipeline might include CSE, 
> simple rule-based rewriting (like a/b/c -> a/(b*c) or a*exp(b*x) -> 
> A*2^(B*x)), and replacing non-analytic sub-expressions with approximate 
> expansions (Taylor, Padé, Chebyshev, etc.) out to an order computed based on 
> floating point precision. 
>
> That said, we aren't the only ones thinking in this area. The Chemora (
> http://arxiv.org/pdf/1410.1764.pdf, h/t Matt Turk) code does something 
> like the vision above but using Mathematica, for HPC applications only, and 
> with an astrophysical bent. 
>
> I think a tool like this is important because it allows the exploration of 
> more scientific models more quickly and with a higher degree of 
> verification. The current workflow for most scientific modeling is to come 
> up with a mathematical representation of the problem, have a human translate 
> that into a programming language of choice, perhaps test that translation, 
> and then execute the model. This compiler aims 
> to get rid of the time-constrained human in those middle steps. It won't 
> tell you if the model is right or not, but you'll sure be able to pump out 
> a whole lot more models :).
>
>
> -- 
>
> Asst. Prof. Anthony Scopatz
> Nuclear Engineering Program
> Mechanical Engineering Dept.
> University of South Carolina
> [email protected] <javascript:>
> Office: (803) 777-7629
> Cell: (512) 827-8239
> Check my calendar 
> <https://www.google.com/calendar/embed?src=scopatz%40gmail.com>
>

-- 
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sympy/b8663373-af4a-499e-bca9-e96a6433a6de%40googlegroups.com.