Yes, CSE is an important optimization. Another useful optimization for
your case would be a function that automatically splits up larger
expressions (like splitting a large sum into many += assignments).
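Nothing like that exists yet, but as a rough sketch (the `split_sum` helper and
the `chunk` size below are made up for illustration), it could just batch the
terms of an `Add`:

```python
from sympy import Add, Symbol, symbols

def split_sum(lhs, expr, chunk=2):
    """Hypothetical helper: break ``lhs = expr`` into a first ``=``
    assignment followed by ``+=`` assignments, ``chunk`` terms each."""
    terms = Add.make_args(expr)
    stmts = []
    for i in range(0, len(terms), chunk):
        op = "=" if i == 0 else "+="
        stmts.append((lhs, op, Add(*terms[i:i + chunk])))
    return stmts

x, y, z, w = symbols("x y z w")
r = Symbol("r")
stmts = split_sum(r, x + y + z + w, chunk=2)
# two statements (one "=", one "+=") that together rebuild the full sum
```

A real version would of course pick the chunk size from the target language's
line or statement limits rather than a fixed count.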

Aaron Meurer

On Fri, Feb 26, 2016 at 7:57 PM, Liang <liang...@gmail.com> wrote:
> This is a great project.
>
> Is there a plan to incorporate CSE in this tool? For my application, the
> equation is extremely long, typically containing several thousand terms.
> The Fortran code generated by codegen exceeds the continuation-line limit of
> the Intel Fortran compiler and has to be edited by hand. Many of the terms on
> the right-hand side can be combined through CSE, and the post-CSE
> formula fits easily within the continuation-line limit.
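For reference, that CSE-then-emit workflow can be sketched with `sympy.cse`
plus `fcode` (a minimal illustration with a toy expression, not the actual
application code):

```python
from sympy import symbols, cse, fcode

x, y, r = symbols("x y r")
expr = (x + y)**2 + (x + y)**3          # stand-in for a huge expression
reps, (reduced,) = cse(expr)            # reps e.g. [(x0, x + y)]

# one short, self-contained Fortran assignment per subexpression
lines = [fcode(sub, assign_to=sym, source_format="free") for sym, sub in reps]
lines.append(fcode(reduced, assign_to=r, source_format="free"))
```

Because each subexpression becomes its own assignment, no single statement
grows past the continuation-line limit.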
>
> On Tuesday, December 1, 2015 at 12:17:41 PM UTC-8, Aaron Meurer wrote:
>>
>> On Tue, Dec 1, 2015 at 12:30 PM, nikolas <nikola...@gmail.com> wrote:
>> > Hi all,
>> > this is an exciting topic, I hope it's still alive!
>>
>> Definitely is. I'm actively working on this project.
>>
>> >
>> > A common use case for me is to automatically generate high dimensional
>> > symbolic ODEs (photonic circuit non-linear coupled mode equations).
>> >
>> > One thing I have found is that lambdify does not easily allow for
>> > efficiently re-using common subexpressions. I have cooked up a fairly
>> > dirty
>> > (though surprisingly useful) little function that identifies common
>> > subexpressions and then uses nested sympy.lambdify calls (along with
>> > function closures) to precompute and then re-use common subexpressions.
>> > You can find it here:
>> > https://gist.github.com/ntezak/e1922acdd790e265963e
>> > For my use cases it easily gives me a 2x or 3x speedup despite the
>> > additional function calling overhead.
>> > I think Mathematica's Compile does similar things under the hood.
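The closure trick can be sketched as follows, with `cse()` doing the
subexpression detection (a simplified stand-in for the gist, not the gist
itself):

```python
from sympy import symbols, sin, cos, cse, lambdify

def lambdify_cse(args, expr):
    """Sketch: lambdify ``expr`` with each common subexpression
    computed once and passed along as an extra argument."""
    args = list(args)
    reps, (reduced,) = cse(expr)
    sub_syms, sub_funcs = [], []
    for sym, sub in reps:
        # later subexpressions may refer to earlier x0, x1, ...
        sub_funcs.append(lambdify(args + sub_syms, sub))
        sub_syms.append(sym)
    outer = lambdify(args + sub_syms, reduced)
    def f(*vals):
        computed = []
        for g in sub_funcs:
            computed.append(g(*vals, *computed))
        return outer(*vals, *computed)
    return f

x = symbols("x")
expr = sin(x) + cos(x) + (sin(x) + cos(x))**2
f = lambdify_cse([x], expr)   # sin/cos evaluated once, reused twice
```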
>>
>> That's a good point. Lambdify does one thing quite well, but I think
>> most people are confused as to what exactly it does and what it should
>> and shouldn't do. That and it sort of sits apart from the rest of the
>> code generation in terms of architecture (and indeed it takes a bit of
>> insight to realize that lambdify is in fact just another type of code
>> generation + a "compilation" stage, and nothing more).
>>
>> Instead of going for a functional approach to solve this, with lots of
>> lambda calculus and currying and whatnot (which in my opinion gets
>> confusing really fast), I think it would be better to extend lambdify
>> to print not lambda functions but just regular Python functions. Then
>> something like lambdify(x, sin(x) + cos(x) + (sin(x) + cos(x))**2,
>> cse=True) could produce
>>
>> def _func(x):
>>     _y = sin(x) + cos(x)
>>     return _y + _y**2
>>
>> (by the way, it's generally a good idea to use cse() to take care of
>> common subexpressions, as it's going to be a little bit smarter than
>> your walk_exprs).
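Concretely, `cse()` already produces exactly the pieces such a printer would
need:

```python
from sympy import symbols, sin, cos, cse

x = symbols("x")
expr = sin(x) + cos(x) + (sin(x) + cos(x))**2
reps, (reduced,) = cse(expr)
# reps    -> [(x0, sin(x) + cos(x))]
# reduced -> x0 + x0**2
```

Printing one assignment per replacement pair, then the reduced expression,
gives the `_func` above.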
>>
>> Something that I've been playing with a little bit is a CodeBlock
>> object, which would make it easier for the code printers to work with
>> and reason about blocks of code, rather than just single expressions
>> and assignments. That way you won't have to move up an abstraction
>> layer just to start thinking about common subexpression elimination.
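A sketch of that idea, using the `CodeBlock` and `Assignment` names from
`sympy.codegen.ast` and assuming a `cse` method on the block as described:

```python
from sympy import symbols, sin, cos
from sympy.codegen.ast import Assignment, CodeBlock

x, r = symbols("x r")
block = CodeBlock(Assignment(r, sin(x) + cos(x) + (sin(x) + cos(x))**2))
optimized = block.cse()   # pulls out something like x0 = sin(x) + cos(x)
# `optimized` now holds two Assignments instead of one
```

The printers can then reason about the whole block (ordering, temporaries)
instead of a single expression at a time.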
>>
>> >
>> > I think it would be most desirable if we could have methods to generate
>> > fast
>> > functions that operate in-place on numpy arrays (and perhaps even take
>> > an
>> > additional numpy "work" array to save on allocation inside the
>> > function).
>> > Ideally, this would be based on c-function pointers that do not require
>> > the
>> > GIL and can also be passed to other compiled libraries.
>>
>> It's all a question of how to represent this in SymPy syntax. We have
>> Indexed and MatrixSlice, which are generally used to represent
>> in-place operations for code generation.  I suppose to support what
>> you want we would need a more general NumPySlice object.  Or would
>> Indexed satisfy your needs?
>>
>> Once you have that, it's just a question of generating code for it.
>> SymPy already has a few ways of generating code to work on NumPy
>> arrays, such as autowrap, ufuncify, and lambdify (and you can use
>> numba.jit or numpy.vectorize on a lambdified function to make it
>> faster or even numba.jit(nogil=True) to make it run without the GIL).
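For instance, a lambdified function with the "numpy" backend already
broadcasts over whole arrays; jitting it with numba would be an orthogonal
step layered on top:

```python
import numpy as np
from sympy import symbols, sin, cos, lambdify

x = symbols("x")
f = lambdify(x, sin(x) + cos(x), "numpy")  # vectorized over ndarrays

arr = np.linspace(0.0, 1.0, 5)
out = f(arr)   # one call evaluates all five points
# numba.jit(f) or numba.jit(nogil=True)(f) could wrap this afterwards
```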
>>
>> Aaron Meurer
>>
>> >
>> > Best,
>> >
>> > Nik
>> >
>> > On Friday, October 30, 2015 at 2:24:06 PM UTC-7, Anthony Scopatz wrote:
>> >>
>> >> Hello All,
>> >>
>> >> As many of you probably know, earlier this month Aaron joined my
>> >> research
>> >> group at the University of South Carolina. He'll be working on adding /
>> >> improving SymPy's capabilities with respect to being an optimizing
>> >> compiler.
>> >>
>> >> There are more details about this vision below, but right now we are in
>> >> the process of doing a literature review of sorts, trying to figure out
>> >> what (SymPy-specific) is out there and what has been done already. Aaron
>> >> et al. have started putting together a page on the wiki that compiles
>> >> some of this information. If you know of anything that is not on that
>> >> page, we'd really appreciate it if you could let us know.
>> >>
>> >> We also would be grateful if you could let us know (publicly or
>> >> privately)
>> >> about any use cases that you might have for a symbolic optimizing
>> >> compiler.
>> >> There are many examples where different folks have done various pieces
>> >> of
>> >> this (chemreac, dengo, pydy, some stuff in pyne), but these examples
>> >> tend to
>> >> be domain specific. This effort is supposed to target a general
>> >> scientific
>> >> computing audience, and to do that we want to have as many possible
>> >> scenarios in mind at the outset.
>> >>
>> >>  And of course, we'd love it if other folks dived in and helped us put
>> >> this thing together :).
>> >>
>> >> Thanks a million!
>> >> Be Well
>> >> Anthony
>> >>
>> >> Vision
>> >> ------------
>> >> Essentially, what we want to build is an optimizing compiler for
>> >> symbolic
>> >> mathematical expressions in order to solve simple equations, ODEs,
>> >> PDEs, and
>> >> perhaps more. This compiler should be able to produce very fast code,
>> >> though
>> >> the compiler itself may be expensive.
>> >>
>> >> Ultimately, it is easy to imagine a number of backend targets, such as
>> >> C,
>> >> Fortran, LLVM IR, Cython, pure Python, etc. It is also easy to imagine
>> >> a
>> >> couple of meaningful frontends - SymPy objects (for starters) and LaTeX
>> >> (which could then be parsed into SymPy).
>> >>
>> >> We are aiming to have an optimization pipeline that is highly
>> >> customizable
>> >> (but with sensible defaults). This would allow folks to tailor the
>> >> result to
>> >> their problem or add their own problem-specific optimizations. There
>> >> are
>> >> likely different levels to this (such as on an expression vs at full
>> >> function scope). Some initial elements of this pipeline might include
>> >> CSE,
>> >> simple rule-based rewriting (like a/b/c -> a/(b*c) or a*exp(b*x) ->
>> >> A*2^(B*x)), and replacing non-analytic sub-expressions with approximate
>> >> expansions (Taylor, Padé, Chebyshev, etc.) out to an order computed
>> >> based on
>> >> floating point precision.
>> >>
>> >> That said, we aren't the only ones thinking in this area. The Chemora
>> >> (http://arxiv.org/pdf/1410.1764.pdf, h/t Matt Turk) code does something
>> >> like
>> >> the vision above but using Mathematica, for HPC applications only, and
>> >> with
>> >> an astrophysical bent.
>> >>
>> >> I think a tool like this is important because it allows the exploration
>> >> of
>> >> more scientific models more quickly and with a higher degree of
>> >> verification. The current workflow for most scientific modeling is to
>> >> come up with a mathematical representation of the problem, have a human
>> >> translate it into a programming language of choice (a translation that
>> >> may or may not get tested), and then execute the model. This compiler
>> >> aims to get rid of the time-constrained human in those middle steps. It
>> >> won't tell you whether the model is right, but you'll sure be able to
>> >> pump out a whole lot more models :).
>> >>
>> >>
>> >> --
>> >>
>> >> Asst. Prof. Anthony Scopatz
>> >> Nuclear Engineering Program
>> >> Mechanical Engineering Dept.
>> >> University of South Carolina
>> >> sco...@cec.sc.edu
>> >> Office: (803) 777-7629
>> >> Cell: (512) 827-8239
>> >> Check my calendar
>> >
>> > --
>> > You received this message because you are subscribed to the Google
>> > Groups
>> > "sympy" group.
>> > To unsubscribe from this group and stop receiving emails from it, send
>> > an
>> > email to sympy+un...@googlegroups.com.
>> > To post to this group, send email to sy...@googlegroups.com.
>> > Visit this group at http://groups.google.com/group/sympy.
>> > To view this discussion on the web visit
>> >
>> > https://groups.google.com/d/msgid/sympy/b8663373-af4a-499e-bca9-e96a6433a6de%40googlegroups.com.
>> >
>> > For more options, visit https://groups.google.com/d/optout.
>
