On Monday, 1 September 2014 22:37:31 UTC+2, James Crist wrote:
>
> /.../ Handling compiler options should definitely be added to the
> codewrappers though, but should be a separate issue.
>
Yes, I even went so far as to write my own package to handle compilation
(github.com/bjodah/pycompilation). Lo
On Wed, Sep 3, 2014 at 1:21 PM, James Crist wrote:
> Can you point me to some examples from your experience when
>> -ffast-math may result in loss of precision? I use it all the time in
>> all my production codes. So I would like to learn more about the
>> pitfalls.
>
>
> Personally, I've never ha
>
> I agree that it's a good idea. I guess a single header which is included
> is easiest to start with?
> Should the functions be implemented in C? Or C and Fortran? Or C++ even?
> Are there any other functions besides pow that need a special implementation?
>
As we're generating C compatible cod
Hi James,
On Mon, Sep 1, 2014 at 2:37 PM, James Crist wrote:
>>
>> I prefer either to pass -ffast-math flag (setting compiler flags is already
>> an issue since we need
>> to indicate optimization level, right?) or write a specialized callback to
>> be inlined (tuned to an Intel i7):
>
>
> I'm
> That being said, we do that in Theano. Putting a limit on the exponent
> up to which you do that would avoid losing too much precision for large
> exponents.
>
Yes, that sounds like a good idea. Did you investigate what a suitable
limit is?
> I do like the specialized func
Using explicit multiplication (even with the parentheses) lowers the
precision of the result compared to pow.
That being said, we do that in Theano. Putting a limit on the exponent up
to which you do that would avoid losing too much precision for large exponents.
Fred
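Fred's precision point can be demonstrated in pure Python (a minimal sketch with an illustrative value chosen here, not one from the thread; exact rational arithmetic serves as the reference):

```python
from fractions import Fraction
import math

x = 1.0000001  # illustrative value
n = 11

# Reference: exact rational arithmetic, rounded to float only once.
exact = float(Fraction(x) ** n)

# Repeated multiplication: one rounding error accumulates per multiply.
prod = 1.0
for _ in range(n):
    prod *= x

err_mult = abs(prod - exact)
err_pow = abs(math.pow(x, n) - exact)
print(err_mult, err_pow)
```

Both errors are tiny for small exponents; the gap between them is what grows as the exponent does, which is why a cutoff on n makes sense.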
On Fri, Aug 29, 2014 at 4:48
>
> I prefer either to pass -ffast-math flag (setting compiler flags is
> already an issue since we need
> to indicate optimization level, right?) or write a specialized callback to
> be inlined (tuned to an Intel i7):
>
I'm not a fan of using the -ffast-math flag. It does other things that may
re
Cool! Great work!
On Friday, 29 August 2014 22:48:23 UTC+2, James Crist wrote:
>
> If I understand correctly, there is no cost in representing pow(x, n) as
> x*x*x*x... for any positive integer n, as long as it's done correctly.
>
Compilation will be slower for large n.
> C compilers don't l
Try applying it to the expression tree. Horner's rule is supposed to be
optimal for evaluating polynomials.
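As a quick illustration of what horner from sympy.polys does (a minimal sketch; the polynomial is made up here):

```python
from sympy import symbols, horner

x = symbols('x')

# horner rewrites a polynomial into nested (Horner) form,
# which minimizes the number of multiplications.
poly = x**3 + 2*x**2 + 3*x + 4
print(horner(poly))   # nested form: x*(x*(x + 2) + 3) + 4

# A bare power has no lower-degree terms, so horner leaves it alone.
print(horner(x**11))
```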
On 29 Aug 2014, at 17:04, James Crist wrote:
For handling Pow? horner(x**11) results in x**11. Or were you recommending
applying horner to an entire expression tree?
On Fri, Aug 29, 2
For handling Pow? horner(x**11) results in x**11. Or were you recommending
applying horner to an entire expression tree?
On Fri, Aug 29, 2014 at 3:55 PM, Tim Lahey wrote:
> I recommend that you use the horner function in polys.
>
>
> On 29 Aug 2014, at 16:48, James Crist wrote:
>
> If I unders
I recommend that you use the horner function in polys.
On 29 Aug 2014, at 16:48, James Crist wrote:
If I understand correctly, there is no cost in representing pow(x, n) as
x*x*x*x... for any positive integer n, as long as it's done correctly. C
compilers don't like to change how you write ou
If I understand correctly, there is no cost in representing pow(x, n) as
x*x*x*x... for any positive integer n, as long as it's done correctly. C
compilers don't like to change how you write out calculations unless
they're asked to. So x*x*x*x will not generate the same machine code as
(x*x)*(x*x)
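The grouping point can be sketched in Python: left-to-right multiplication takes n-1 multiplies, while the (x*x)*(x*x) style of grouping generalizes to binary exponentiation with O(log n) multiplies (helper names are made up here for illustration):

```python
def pow_linear(x, n):
    """n - 1 multiplications: x*x*x*...*x, left to right."""
    result = x
    for _ in range(n - 1):
        result *= x
    return result

def pow_squaring(x, n):
    """O(log n) multiplications via repeated squaring."""
    result = 1
    base = x
    while n > 0:
        if n & 1:          # include this power-of-two factor
            result *= base
        base *= base       # square: x, x^2, x^4, x^8, ...
        n >>= 1
    return result

print(pow_linear(3, 4), pow_squaring(3, 4))  # 81 81
```

The two groupings are algebraically equal but not bit-for-bit equal in floating point, which is exactly why a C compiler won't do this rewrite without -ffast-math.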
I think we should print pow using repeated multiplication. People
might not know about -ffast-math, not realize that we are using pow
and that it is needed, or not want other optimizations that it
provides.
Is there a reason to put a limit on the power (5 was suggested here,
10 on the pull reques
Sorry, it wasn't merged. He found that the -ffast-math flag in the compiler
takes care of this.
Jason
moorepants.info
+01 530-601-9791
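For reference, this is what the default C printer does with integer powers (behavior checked against a recent SymPy; it may differ across versions):

```python
from sympy import symbols, ccode

x = symbols('x')

# The default C code printer emits a pow() call for integer exponents,
# leaving any repeated-multiplication rewrite to the compiler.
print(ccode(x**3))   # pow(x, 3)
```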
On Fri, Aug 29, 2014 at 10:37 AM, Jason Moore wrote:
> Here is some work on the pow issue:
> https://github.com/sympy/sympy/pull/7519
>
> Looks like it was me
Here is some work on the pow issue: https://github.com/sympy/sympy/pull/7519
Looks like it was merged, so the ccode printer should print x*x*x... for
fewer than 10 x's.
Jason
On Fri, Aug 29, 2014 at 7:33 AM, Jason Moore wrote:
Jason
On Fri, Aug 29, 2014 at 2:38 AM, James Crist wrote:
> I was planning on going to bed, but ended up working on this instead. I
> have no self control...
>
> Anyway, I've uncovered some things:
>
> 1. Addition of the restrict keyword to tell the compiler we'
I was planning on going to bed, but ended up working on this instead. I
have no self control...
Anyway, I've uncovered some things:
1. Adding the restrict keyword to tell the compiler we're not aliasing
offers marginal gains: a couple of microseconds here and there. This
requires a C99
On why Fortran is faster: Fortran semantics ensure that function arguments
never alias, which allows the optimizer to make assumptions about the function
and the arguments. This is the main advantage of Fortran over C. But, because
of this, it can lead to more memory usage. I know that the newer C++
Jim and others,
Here are the benchmarks I made yesterday:
http://www.moorepants.info/blog/fast-matrix-eval.html
The working code is here:
https://gist.github.com/moorepants/6ef8ab450252789a1411
Any feedback is welcome.
Jason
On Wed, Aug 27, 2014 at 11:44 PM,
I was wondering about that. I wasn't sure if the overhead from looping
through the inputs multiple times would outweigh improvements from fast C
loops. Glad that in your case it does.
I've thrown a WIP PR up: https://github.com/sympy/sympy/pull/7929
For some reason, creating the functions in pyth
Yeah, but if you simply create a ufunc for each expression in a matrix you
still get substantial speedups. I wrote a bunch of test cases that I'll
post to my blog tomorrow.
Jason
On Wed, Aug 27, 2014 at 11:26 PM, James Crist wrote:
> Not yet. I wrote it this mo
Not yet. I wrote it this morning during an extremely boring meeting, and
haven't had a chance to clean it up. This doesn't solve your problem about
broadcasting a matrix calculation though...
On Wed, Aug 27, 2014 at 10:23 PM, Jason Moore wrote:
> Awesome. I was working on this today but it look
Awesome. I was working on this today but it looks like you've bypassed
what I had working. Do you have a PR with this?
Jason
On Wed, Aug 27, 2014 at 11:11 PM, Matthew Rocklin
wrote:
> Cool
>
>
> On Wed, Aug 27, 2014 at 8:07 PM, James Crist wrote:
>
>> I stil
Cool
On Wed, Aug 27, 2014 at 8:07 PM, James Crist wrote:
> I still need to do some cleanups and add tests, but I finally have this
> working and thought I'd share. I'm really happy with this:
>
> In [1]: from sympy import *
>
> In [2]: a, b, c = symbols('a, b, c')
>
> In [3]: expr = (sin(a) + s
I still need to do some cleanups and add tests, but I finally have this
working and thought I'd share. I'm really happy with this:
In [1]: from sympy import *
In [2]: a, b, c = symbols('a, b, c')
In [3]: expr = (sin(a) + sqrt(b)*c**2)/2
In [4]: from sympy.utilities.autowrap import ufuncify
I
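The session above is cut off right before the ufuncify call. A self-contained sketch of the same workflow is below; since ufuncify (from sympy.utilities.autowrap) needs a working C toolchain at runtime, lambdify is used here as a compiler-free stand-in that likewise broadcasts over NumPy arrays. The expression is the one from the session:

```python
from sympy import symbols, sin, sqrt, lambdify
import numpy as np

a, b, c = symbols('a, b, c')
expr = (sin(a) + sqrt(b)*c**2)/2

# ufuncify(...) would compile expr into a C ufunc; lambdify builds an
# equivalent NumPy-backed callable without needing a compiler.
f = lambdify((a, b, c), expr, 'numpy')

# Broadcasts elementwise over array inputs, like a ufunc would.
A = np.array([0.0, np.pi/2])
print(f(A, 4.0, 3.0))
```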