On 23.04.2013 16:30, Ronan Lamy wrote:
On 22/04/2013 10:27, Tom Bachmann wrote:
I attach the output. Here is my reading:
- simplify spends 70% in together and powsimp
- together spends 99% in object creation
- powsimp spends 95% in object creation
- AssocOp.__new__ spends 90% in flatten
- flatten has no obvious hotspot
Yes, I saw that.
On 23.04.2013 00:15, Chris Smith wrote:
Note that flatten is already optimized for the case of adding or
multiplying by a rational since we always know where that arg is.
--
You received this message because you are subscribed to the Google
Groups "sympy" group.
To unsubscribe
On 22.04.2013 17:54, Aaron Meurer wrote:
Another way to optimize might be to check for common noop inputs and
figure out how to efficiently short circuit them.
Hm. It seems that (at least in the case I'm testing) by far the most
common case is two-argument flatten, where one of the arguments
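The short circuit suggested here could, hypothetically, look like the toy sketch below. All names (`flatten2`, `flatten_general`, `IDENTITY`) are illustrative only, not SymPy's actual API; nested tuples stand in for nested Mul-like expressions:

```python
def flatten_general(args):
    """Stand-in for the expensive general flattening pass."""
    out = []
    for a in args:
        if isinstance(a, tuple):          # treat tuples as nested ops
            out.extend(flatten_general(list(a)))
        else:
            out.append(a)
    return out

IDENTITY = 1  # multiplicative identity for a Mul-like operation

def flatten2(a, b):
    # Short circuit 1: an identity argument means no work at all.
    if a == IDENTITY:
        return [b]
    if b == IDENTITY:
        return [a]
    # Short circuit 2: two plain (non-nested) arguments need no recursion.
    if not isinstance(a, tuple) and not isinstance(b, tuple):
        return [a, b]
    # Only the rare nested case pays for the general pass.
    return flatten_general([a, b])
```

The point is that the common two-argument case never touches the recursive machinery at all.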
On 22.04.2013 12:10, Stefan Krastanov wrote:
What I can see is that flatten spends considerable amounts of time in
things which recursively call flatten *again* (i.e. things like a *= b).
This seems to indicate that it can probably benefit from memoization
caching.
Or it might be simpler and as effective to have this half-baked-memoization
Another way to optimize might be to check for common noop inputs and
figure out how to efficiently short circuit them.
Aaron Meurer
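The memoization idea floated in this exchange can be sketched with a toy flatten over hashable inputs (illustrative only; SymPy's real flatten works on Basic args and is far more involved):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def flatten(args):
    """Toy flatten over nested tuples. Memoization pays off because the
    recursive calls (the 'a *= b' pattern noted above) hit the cache
    whenever the same subexpression is flattened again."""
    out = []
    for a in args:
        if isinstance(a, tuple):
            out.extend(flatten(a))   # repeated subtrees come from the cache
        else:
            out.append(a)
    return tuple(out)
```

Hashable arguments (here tuples) are what make the caching possible at all, which is part of why this is harder in practice than in this sketch.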
Oh I should say I profiled the following functions:
AssocOp.__new__
Mul.flatten
simplify
powsimp
_together (nested function in `together`)
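The thread used line_prof (the line_profiler tool, normally run via kernprof); as a self-contained illustration, the same kind of per-function measurement can be done with the standard library's cProfile. The `simplify` below is a placeholder target, not sympy.simplify:

```python
import cProfile
import io
import pstats

def simplify(expr):
    # Placeholder for sympy.simplify, just so the profile has a target.
    return expr

pr = cProfile.Profile()
pr.enable()
simplify("q*(...)")
pr.disable()

buf = io.StringIO()
pstats.Stats(pr, stream=buf).sort_stats("cumulative").print_stats()
report = buf.getvalue()   # per-function timings, like the reading above
```

Sorting by cumulative time is what surfaces the kind of "90% in flatten" breakdown quoted earlier in the thread.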
On 22.04.2013 10:27, Tom Bachmann wrote:
Hi,
I used line_prof on the following script:
---
fro
The test case is just
q = symbols('q')
z = (q*(-sqrt(-2*(-(q - S(7)/8)**S(2)/8 - S(2197)/13824)**(S(1)/3) -
S(13)/12)/2 - sqrt((2*q - S(7)/4)/sqrt(-2*(-(q - S(7)/8)**S(2)/8 -
S(2197)/13824)**(S(1)/3) - S(13)/12) + 2*(-(q - S(7)/8)**S(2)/8 -
S(2197)/13824)**(S(1)/3) - S(13)/6)/2 - S(1)/4) + q/4
On Fri, Apr 19, 2013 at 12:41 AM, Tom Bachmann wrote:
On 19.04.2013 00:36, Aaron Meurer wrote:
On Tue, Apr 16, 2013 at 2:09 PM, Tom Bachmann wrote:
Hi guys,
I did some investigations on issues with disabling caching. Basically, I ran
the tests in all our 323 test files, both with and without the cache, and
timed each file separately. In each case, I used the --timeout=60 parameter
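The with/without-cache comparison can be illustrated with a minimal timing harness. This uses a toy recursive workload, not the actual test-suite runs from the thread:

```python
import time
from functools import lru_cache

def fib_nocache(n):
    # Cache disabled: exponential number of recursive calls.
    return n if n < 2 else fib_nocache(n - 1) + fib_nocache(n - 2)

@lru_cache(maxsize=None)
def fib_cached(n):
    # Cache enabled: each value computed once.
    return n if n < 2 else fib_cached(n - 1) + fib_cached(n - 2)

def timed(f, n):
    t0 = time.perf_counter()
    f(n)
    return time.perf_counter() - t0

# Same workload, cache off vs. cache on.
t_without = timed(fib_nocache, 25)
t_with = timed(fib_cached, 25)
```

The per-file timing described above is the same idea at test-suite scale, with the cache toggled via SymPy's environment switch rather than a decorator.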
If so, how does what you say differ from what I described?
In my experience, caching bugs are very non-local. You do something
and then much later, in a completely different calculation, something
goes wrong. That's why the most annoying bugs are only reproducible by
running the whole test suite
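A toy example (not from SymPy) of this non-locality: if a cached function ever returns a mutable value and one caller mutates it, a later, completely unrelated call observes the corruption:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def divisors(n):
    # Deliberate bug for illustration: the cached value is a mutable list.
    return [i for i in range(1, n + 1) if n % i == 0]

first = divisors(6)
first.append(999)   # some caller, somewhere, mutates the shared result...

later = divisors(6) # ...and an unrelated later call gets the damaged list
```

Nothing near the second call is wrong, which is exactly why such bugs only surface far from their cause, e.g. when running the whole test suite.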
On Wed, Apr 10, 2013 at 2:14 PM, Tom Bachmann wrote:
On 10.04.2013 17:55, Aaron Meurer wrote:
What modules rely most heavily on caching? I know that series does
very much, and meijerint does to some extent. What about other modules?
Let's not forget the most important module of all, the core. I know
expand relies on the cache for most of its speed
On Apr 10, 2013, at 3:55 AM, Tom Bachmann wrote:
This is essentially a continuation of my email "General strategy for
removing old assumptions".
One fairly evident problem with replacing the old assumptions system by
the new one in a naive way is caching. Recall that sympy has a "global
cache". Any function decorated with @cacheit will benefit
On 10.04.2013 12:14, Ronan Lamy wrote:
The global cache, as it stands, is bad. To name only two issues, it
can grow in long computations, and it has often led to very subtle
bugs (when some function was not truly side-effect free).
Any other general objections to the global cache?