On Jan 3, 10:19 pm, "Ondrej Certik" <[EMAIL PROTECTED]> wrote:
> On Jan 3, 2008 10:51 PM, Pearu Peterson <[EMAIL PROTECTED]> wrote:
> > Note that the used test case measures mostly the efficiency
> > of the creation of symbolic expressions and the efficiency
> > of basic arithmetic operations.
>
> Very interesting, nice job!
>
> One thing that I don't understand is this:
>
> The original sympy, as designed by me, was faster than any sympy after that?

Note that one test case can measure only one aspect of the system.
From one result alone one cannot conclude that one system is faster
than another; hence my comment above. So, let me repeat: the test case
measures mainly the performance of creating new symbolic objects.

> I remember that when we moved to the new core, it sped up sympy by a
> factor of 10x to 100x, but it's not visible on the graph at all - is
> it because it was only due to caching, which is disabled in that graph?

The speedup is visible as a small jump where the graph of the
sympy-research branch joins the graph of sympy. This jump is small for
the given test case, but it exists thanks to using the __new__ method,
if I remember correctly. If one tried the expand test case, however,
the jump would be considerable, mainly because the new core used
different expand algorithms.

And indeed, when caching is enabled - that is, when equal symbolic
objects are created only once - the speedup will be considerable. But
for the given test case one needs to disable caching, since the test
measures the performance of creating symbolic objects, not the
performance of the caching feature.

> Maybe the test case is not very well chosen?

Well, there are no bad test cases. Various systems may perform well or
not so well on a given test case, and people may misinterpret results
badly, but the test case itself is just a matter of fact. The given
test case illustrates some of the weak points of sympy/sympycore that
are fundamental in the sense that the result of any other test case
depends on them: creating symbolic objects is an expensive operation.
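For illustration, the caching idea mentioned above - equal symbolic objects created only once, using __new__ - can be sketched roughly as follows. This is a minimal sketch, not sympy's actual implementation; the class and attribute names here are hypothetical:

```python
class Symbol:
    """Toy symbolic object whose instances are cached by name."""

    _cache = {}  # maps name -> previously created instance

    def __new__(cls, name):
        # Return the cached instance if an equal object already exists,
        # so repeated Symbol("x") calls never allocate a second object.
        obj = cls._cache.get(name)
        if obj is None:
            obj = object.__new__(cls)
            obj.name = name
            cls._cache[name] = obj
        return obj

    def __repr__(self):
        return self.name


x1 = Symbol("x")
x2 = Symbol("x")
print(x1 is x2)  # True: the second call reuses the cached object
```

With such a cache enabled, a benchmark that repeatedly "creates" symbols mostly measures dictionary lookups, which is exactly why caching must be disabled when one wants to measure raw object-creation cost.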
The success of sympycore comes from the fact that it minimizes the
need to create new symbolic objects at runtime. That is a smart
alternative to caching symbolic objects. So, in order to perform
better on other test cases, one must first find an improvement to this
fundamental one.

But I totally agree that more performance tests are needed - this
aspect of sympy development was totally missing last year.

Pearu

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups "sympy" group.
To post to this group, send email to sympy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sympy?hl=en
-~----------~----~----~----~------~----~------~--~---