Hi, thanks for sharing your results.

I did more careful benchmarks using the timeit module's Timer class. The statement used was

    for x in (i/1000.0 for i in xrange(1000)): f(x)

where f was the corresponding function. Doing timeit(1000) resulted in:

    compiled:      4.3378310203552246 s
    Python lambda: 17.872981786727905 s
    with Psyco:    7.151806116104126 s
    fast_float:    5.1278481483459473 s

Hence Psyco is roughly 2.5 times, fast_float 3.5 times and the compiled version 4 times faster than the pure Python lambda. Plain Python was used for all of these except fast_float, which was run from Sage.

I tried to use the lambda from Sage too, but it was insanely slow: even timeit(100) took about 60 s! This is more than 100 times slower than with normal Python. It might be that fast_float is faster when not used from Sage.

Vinzent

On Mar 30, 20:32, Mike Hansen <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I definitely don't get the same results as you. Here's my Sage
> session:
>
> sage: import math
> sage: l = lambda x: -3*x^7 - 2*x^3 + 2*math.exp(x^2) +
> x^12*(2*math.exp(2*x) - math.pi*math.sin(x)^((-1) +
> math.pi)*math.cos(x)) + 4*(math.pi^5 + x^5 + 5*math.pi*x^4 +
> 5*x*math.pi^4 + 10*math.pi^2*x^3 + 10*math.pi^3*x^2)*math.exp(123 + x
> - x^5 + 2*x^4)
> sage: f = -3*x^7 - 2*x^3 + 2*exp(x^2) + x^12*(2*exp(2*x) -
> pi*sin(x)^((-1) + pi)*cos(x)) + 4*(pi^5 + x^5 + 5*pi*x^4 + 5*x*pi^4 +
> 10*pi^2*x^3 + 10*pi^3*x^2)*exp(123 + x - x^5 + 2*x^4)
> sage: ff = f._fast_float_(x)
> sage: l(0.01)
> 3.29059638451369e56
> sage: ff(0.01)
> 3.2905963845136858e+56
> sage: vals = [i/float(1000.0) for i in range(1001)]
> sage: timeit("for i in vals: l(i)")
> 5 loops, best of 3: 280 ms per loop
> sage: timeit("for i in vals: ff(i)")
> 125 loops, best of 3: 3.47 ms per loop
> sage: timeit('ff(1.0r)')
> 625 loops, best of 3: 6.03 µs per loop
>
> The Python lambda I get is quite a bit slower than yours, and it
> looks like fast_float is about 50x faster than lambda.
>
> --Mike
>
> On Mar 30, 9:41 am, Vinzent Steinberg
> <[EMAIL PROTECTED]> wrote:
> > Thank you very much for the link!
> >
> > I did a benchmark with this pretty function:
> >
> > -3*x**7 - 2*x**3 + 2*exp(x**2) + x**12*(2*exp(2*x) -
> > pi*sin(x)**((-1) + pi)*cos(x)) + 4*(pi**5 + x**5 + 5*pi*x**4 +
> > 5*x*pi**4 + 10*pi**2*x**3 + 10*pi**3*x**2)*exp(123 + x - x**5 + 2*x**4)
> >
> > for x = (0, 1, 2, ..., 1000)/1000.
> >
> > The results:
> >
> > sage fast_float: 0.793402 s
> > python lambda:   0.032177 s
> > compiled:        0.016872 s
> >
> > I measured only function evaluation time (within a Python loop). I
> > used Sage interactively. Maybe I did something wrong, because I can't
> > explain why Sage is even slower than pure Python. Anyway, I used the
> > time module, whose precision on Unix is afaik only about 1/100 s, so
> > this is not a good benchmark.
> >
> > Can anyone confirm these numbers?
> >
> > Regards,
> > Vinzent
> >
> > On Mar 29, 10:15 am, Mike Hansen <[EMAIL PROTECTED]> wrote:
> > > Hi Vinzent,
> > >
> > > Robert Bradshaw wrote some really nice fast function evaluation code
> > > for Sage using Cython. You can check it out here:
> > > http://www.sagemath.org/hg/sage-main/file/211b127eab5d/sage/ext/fast_...
> > >
> > > --Mike
> > >
> > > On Mar 28, 8:13 am, Vinzent Steinberg
> > > <[EMAIL PROTECTED]> wrote:
> > > > I wrote some code using Cython (http://cython.org/) and gcc to compile
> > > > SymPy functions to machine code which is directly accessible via a
> > > > Python function.
> > > > See this ticket: http://code.google.com/p/sympy/issues/detail?id=765
> > > >
> > > > It works fine so far. I got it nearly running even on Windows using
> > > > MinGW; there were only some naming issues during linking, which are
> > > > probably easy to solve.
> > > >
> > > > Speedups compared to f.subs(x, ...).evalf() are epic (it's around 200
> > > > times faster).
> > > >
> > > > Currently it uses the standard C math library. It could be extended to
> > > > use other libraries (for example http://gmplib.org/) with support for
> > > > arbitrary precision, complex numbers, even more speed, etc.
> > > > So far the only dependencies are Python (which includes the
> > > > necessary headers), Cython (written in pure Python) and a C compiler
> > > > (which accepts gcc command line syntax). At the moment the paths must
> > > > be specified, but this could be changed to work platform-independently
> > > > and out of the box.
> > > >
> > > > Native NumPy support might be wise to avoid Python's function call
> > > > overhead.
> > > >
> > > > It could be used by evalf in some way. Compiling takes about 1 s the
> > > > first time, and less afterwards (of course a function is only compiled
> > > > once; the temporary files are reused). Probably this could be
> > > > optimized. So it should eventually be added to the Function object,
> > > > and maybe there should be a global compile function.
> > > >
> > > > Fast function evaluation in general is interesting for plotting and
> > > > numerical algorithms. Dynamic plotting of a function when zooming out
> > > > (variable interval depending on the current view) would be
> > > > interesting in my opinion. Are there other use cases? Algebraic
> > > > algorithms where speed matters?
> > > >
> > > > Regards,
> > > > Vinzent

--
You received this message because you are subscribed to the Google Groups "sympy" group.
To post to this group, send email to sympy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sympy?hl=en
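For reference, the Timer-based benchmark from the top of the thread can be sketched in modern Python as follows. This is only a reproduction sketch: the expression is a small stand-in for the thread's much larger function, `range` replaces the original Python 2 `xrange`, and `sympy.lambdify` stands in for the fast callables under test (fast_float needs Sage, and the compiled version needs the Cython code from the ticket).

```python
import math
from timeit import Timer

import sympy

x = sympy.symbols("x")
# Small stand-in for the thread's much larger expression.
expr = -3 * x**7 - 2 * x**3 + 2 * sympy.exp(x**2)

# Pure Python lambda, the baseline benchmarked in the post.
f_lambda = lambda v: -3 * v**7 - 2 * v**3 + 2 * math.exp(v**2)

# lambdify is SymPy's built-in route to a fast numeric callable;
# the Cython approach in the ticket compiles further, to machine code.
f_num = sympy.lambdify(x, expr, "math")

# The benchmark statement from the top post (xrange there, range here).
stmt = "for v in (i / 1000.0 for i in range(1000)): f(v)"

# The post used timeit(1000); number=10 keeps this sketch quick.
t_lambda = Timer(stmt, globals={"f": f_lambda}).timeit(number=10)
t_num = Timer(stmt, globals={"f": f_num}).timeit(number=10)

# Sanity check: both callables must agree numerically.
assert abs(f_lambda(0.5) - f_num(0.5)) < 1e-12
print(t_lambda, t_num)
```

The absolute timings are machine-dependent, so only the ratios between the candidates are meaningful, which is how the numbers in the thread should be read as well.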