On Wednesday, 9 March 2016 14:57:14 UTC+1, Tanu Hari Dixit wrote:
>
>
> 2)The optimizer can implement the following with individual switches:
>
>
> i)unrolled pow(x, n)
>
> ii)fused add multiply for floating point calculations
>
> iii)intelligent guess about whether to use exp2
>

i, ii) Any modern compiler makes these moot; it's simply a matter of 
supplying the correct flags (whether SymPy should know about these flags 
for each compiler, I don't know).
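For concreteness, this is the kind of flag set I mean for GCC/Clang (the flag names below are the GCC spellings; this is a sketch for one toolchain, not a recommendation for every target):

```shell
# Sketch: flags addressing points i) and ii) on GCC/Clang.
#   -O3                 aggressive optimization (inlining, vectorization)
#   -funroll-loops      unroll loops such as an expanded pow(x, n)
#   -march=native       permit FMA instructions if the host CPU has them
#   -ffp-contract=fast  contract a*b + c into a single fused multiply-add
gcc -O3 -funroll-loops -march=native -ffp-contract=fast -c generated_code.c
```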
iii) I know Anthony Scopatz does this, but on the hardware and compilers 
I've tested I've seen everything from ~15% faster to 50% slower 
execution. Again, we would have to teach SymPy about combinations of math 
library versions, compiler versions, and hardware.
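To make the exp2 question concrete: a minimal sketch of the kind of micro-benchmark one would run per platform, here using NumPy as a stand-in for generated compiled code (array size and repeat count are arbitrary):

```python
import timeit
import numpy as np

x = np.random.rand(1_000_000)

# Two mathematically equivalent ways of computing 2**x:
t_exp = timeit.timeit(lambda: np.exp(x * np.log(2.0)), number=20)
t_exp2 = timeit.timeit(lambda: np.exp2(x), number=20)
print(f"exp(x*log(2)): {t_exp:.3f}s  exp2(x): {t_exp2:.3f}s")

# The variants must agree numerically before any timing comparison counts.
assert np.allclose(np.exp(x * np.log(2.0)), np.exp2(x))
```

Whichever variant wins here says nothing about another CPU or math library, which is exactly the problem.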
 

> iv)horner
>
> v)horner with fused add multiply, maybe?
>
> vi)fast matrix exponentiation that takes in an AST 
>
> vii)trigonometric simplification
>
> viii)pre-computing constant expressions
>
> ix)using cse with 'basic' optimization.
>
> x)splitting very large expressions into number of Augmented Assignments
>
> xi)fractional power optimization (a=x**(1/2), b=x**(3/2) => a=x**(1/2), 
> b=x*a)
>
> xii)integer power optimization (a=x**8, b=x**10 => t1=x**2, t2=t1**2, 
> a=t2**2, b=a*t1)
>

I wonder if compilers already do this; I have not tried it myself.
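For reference, the decomposition in xii) written out by hand — four multiplications instead of the fourteen that naive repeated multiplication of x**8 and x**10 would take (the function name and test value are mine):

```python
def shared_powers(x):
    """Compute x**8 and x**10 with four multiplications, as in point xii)."""
    t1 = x * x    # x**2
    t2 = t1 * t1  # x**4
    a = t2 * t2   # x**8
    b = a * t1    # x**10
    return a, b

a, b = shared_powers(3.0)
assert (a, b) == (3.0**8, 3.0**10)
```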

> xiii)Sub-product optimization (a=xyz, b=wyz => t1=yz, a=xt1, b=wt1)
>
> xiv)Sum multiple optimization (a =x-y, b=2y-2x => a=x-y, b=-2a)
>
>
> The last four are taken from 
> http://www.jnaiam.org/new/uploads/files/16985fffb53018456cf3506db1c5e42b.pdf 
> <http://www.google.com/url?q=http%3A%2F%2Fwww.jnaiam.org%2Fnew%2Fuploads%2Ffiles%2F16985fffb53018456cf3506db1c5e42b.pdf&sa=D&sntz=1&usg=AFQjCNGVWjH-_xSZm7FrCwPL9qBOAubkUQ>
>  
> which is a paper by Allan Wittkopf on code generation in Maple.
>

I would be inclined to think that a lot has happened to the optimizers in 
compilers since 2007; one would need to replicate some of those experiments 
on modern hardware and compilers.
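Points xiii) and xiv) are close to what sympy.cse already does, at least on the versions I've tried; a quick sketch of the sub-product case (the symbol names are mine):

```python
import sympy as sp

w, x, y, z = sp.symbols('w x y z')

# Sub-product optimization from point xiii): a = x*y*z and b = w*y*z share y*z.
repl, reduced = sp.cse([x*y*z, w*y*z])
print(repl, reduced)

# Substituting the replacements back must reproduce the original expressions.
subs_map = dict(repl)
rebuilt = [e.subs(subs_map) for e in reduced]
assert rebuilt == [x*y*z, w*y*z]
```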
 

>
> I understand that these optimizations need to be tested and might not 
> necessarily provide a speed up in all contexts.
>
> I would like to ask if you have any comments on how you think the 
> optimization pipeline should be (improvements on the naive model I drew on 
> Pinta) and according to you, which of the above optimizations are worth 
> being included ( after being tested ). 
>

As you say, testing will be crucial. SymPy would need access to some 
dedicated hardware, and I suppose a benchmark suite for SymPy-generated code 
would be almost a must?
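As a starting point, even a small timeit harness over lambdified variants of the same expression would catch regressions (the expression, test value, and repeat count below are placeholders):

```python
import timeit
import sympy as sp

x = sp.symbols('x')
expr = x**4 + 3*x**3 + 2*x + 1

# Benchmark the naive form against the Horner form of the same polynomial.
variants = {
    "naive": sp.lambdify(x, expr, "math"),
    "horner": sp.lambdify(x, sp.horner(expr), "math"),
}
for name, f in variants.items():
    t = timeit.timeit(lambda: f(1.2345), number=100_000)
    print(f"{name:6s} {t:.3f}s")

# Both variants must compute the same value.
assert abs(variants["naive"](2.0) - variants["horner"](2.0)) < 1e-9
```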
 

>
> Thank you,
>
> Tanu Hari Dixit.
>
