On Sun, Oct 6, 2013 at 9:31 PM, Aaron Meurer <asmeu...@gmail.com> wrote:
> You can get similar speeds in regular SymPy. For me
>
> In [59]: R, x, y, z, w = ring("x,y,z,w", ZZ, lex)
>
> In [60]: %timeit (x + y + z + w)**60
> 1 loops, best of 3: 414 ms per loop
>
> in pycsympy the fastest time was 405 ms.
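For anyone who wants to reproduce this comparison inside SymPy itself, here is a minimal sketch of the benchmark above, timing the same expansion both through the sparse polynomial ring and through the general symbolic expand(). The function names and the use of timeit's timer are my own; absolute times will of course vary by machine:

```python
from timeit import default_timer as timer

from sympy import ZZ, expand, ring, symbols


def bench_ring(n=60):
    """Expand (x + y + z + w)**n in SymPy's sparse polynomial ring."""
    R, x, y, z, w = ring("x,y,z,w", ZZ)
    t0 = timer()
    p = (x + y + z + w)**n
    # A PolyElement is dict-like, so len(p) is the number of terms.
    return timer() - t0, len(p)


def bench_expand(n=60):
    """The same expansion through the general symbolic engine."""
    x, y, z, w = symbols("x y z w")
    t0 = timer()
    e = expand((x + y + z + w)**n)
    return timer() - t0, len(e.args)


if __name__ == "__main__":
    for name, fn in [("ring", bench_ring), ("expand", bench_expand)]:
        t, nterms = fn()
        print(f"{name}: {t:.3f}s, {nterms} terms")
```

Both paths should agree on the term count (C(n+3, 3) terms for four variables), which is a cheap sanity check that the two benchmarks are doing the same work.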
But be careful: this benchmark exercises just one particular polynomial
algorithm, not the general symbolic engine. In particular, you can't use
non-integer exponents:

In [7]: f = (x + y + z**x + w)**60
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
/auto/nest/nest/u/ondrej/repos/sympy/<ipython-input-7-760bfb02ea88> in <module>()
----> 1 f = (x + y + z**x + w)**60

/auto/nest/nest/u/ondrej/repos/sympy/sympy/polys/rings.py in __pow__(self, n)
   1099         """
   1100         ring = self.ring
-> 1101         n = int(n)
   1102
   1103         if n < 0:

TypeError: int() argument must be a string or a number, not 'PolyElement'

So we need to benchmark both. For example, for the benchmark that you
don't like below, here are the two times with csympy:

$ ./expand2
Expanding: ((y + x + z + w)^15 + w)*(y + x + z + w)^15
1351ms
number of terms: 6272

and

$ ./expand2b
poly_mul start
poly_mul stop
153ms
number of terms: 6272

So in this particular case, using the sparse polynomial representation is
8.8x faster. But so far I haven't seen a computer algebra system that uses
this faster representation by default.

> I wasn't able to compile CSymPy because of
>
> /Users/aaronmeurer/Documents/python/sympy/csympy/src/basic.h:12:10:
> fatal error: 'unordered_map' file not found

See here:

https://github.com/certik/csympy/issues/79

You need a C++11 compiler to compile CSymPy.

> Anyway, you've shown that the speed difference from SymPy is primarily
> the algorithm. I think to really see if C++ or Python or PyPy make a
> further difference, you need an example that takes longer than a
> second to compute.

More importantly, per my email, we need to benchmark the same algorithms.

> I would also try to make the benchmarks a little less trivial. (x + y
> + z)**60 is hardly the typical multinomial. How about some polynomials
> with coefficients that aren't 1? How about some rational coefficients?
> How about some expansions that have heavy cancellation (like expand
> the factorization of x**n - 1 for large n)? Finally, for multinomial
> expansion, there are two important numbers: the exponent and the
> number of terms in the base. We have made the exponent big, but I
> haven't seen much of making the base big (and do it both with a bunch
> of different variables and with powers of the same variable; in the
> former case, terms don't combine and in the latter they do).

On Sun, Oct 6, 2013 at 11:52 PM, Aaron Meurer <asmeu...@gmail.com> wrote:
>> On Oct 6, 2013, at 11:18 PM, "Ondřej Čertík" <ondrej.cer...@gmail.com> wrote:

[...]

>> Besides that, one has to keep in mind that these benchmarks are highly
>> synthetic. But it's better than nothing, so I used them too in csympy.
>> They do seem to provide quite a bit of information about the speed. I
>> personally like the expand2 benchmark in csympy:
>>
>> (w + (w + z + y + x)^15) * (w + z + y + x)^15
>
> I don't like this one. See my suggestions in my previous email.

I thought this benchmark fits into "making the base big with a bunch of
different variables and with powers of the same variable", per your email.

> Other things you should vary would be the number of variables and the
> degree of the base (both of these would be penalized by the dense
> representation used by Poly, for instance).

Ondrej

--
You received this message because you are subscribed to the Google Groups "sympy" group.
To unsubscribe from this group and stop receiving emails from it, send an email to sympy+unsubscr...@googlegroups.com.
To post to this group, send email to sympy@googlegroups.com.
Visit this group at http://groups.google.com/group/sympy.
For more options, visit https://groups.google.com/groups/opt_out.
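As an aside for readers of this thread: the heavy-cancellation benchmark suggested above (expanding the factorization of x**n - 1) is easy to sketch in SymPy. The function name and timing scaffolding below are my own, not part of any existing benchmark suite:

```python
from timeit import default_timer as timer

from sympy import expand, factor, symbols


def cancellation_bench(n):
    """Expand the factored form of x**n - 1.

    factor() writes x**n - 1 as a product of cyclotomic polynomials;
    expanding that product involves heavy cancellation, since almost
    every intermediate term cancels, leaving only two terms at the end.
    """
    x = symbols("x")
    f = factor(x**n - 1)
    t0 = timer()
    e = expand(f)
    return timer() - t0, e


if __name__ == "__main__":
    t, e = cancellation_bench(100)
    print(f"expand(factor(x**100 - 1)): {t:.3f}s -> {e}")
```

This stresses a very different code path than (x + y + z + w)**60, where no terms cancel at all, which is exactly why it complements the multinomial benchmarks.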