[sage-support] Re: How to detect memory leaks?

2008-10-29 Thread Simon King

Dear Michael,

On 28 Okt., 15:27, mabshoff [EMAIL PROTECTED] dortmund.de wrote:
> Can you come up with some simple Cython code using libSingular that
> shows the same behavior, i.e. the simpler the better. This would
> help me potentially hunt down the cause.

Your wish is my command...

It is ticket #4380. It seems to me that the leak is located in
the `reduce` method of MPolynomial_libsingular.
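
For illustration, the kind of loop that makes the leak visible would be
something like this (a sketch only; the actual minimal example is on
the ticket):

sage: from sage.rings.ideal import Cyclic
sage: R = PolynomialRing(QQ, 'x', 5)
sage: I = Cyclic(R)
sage: p = sum(R.gens())^3
sage: m0 = get_memory_usage()
sage: for i in range(10000):
....:     q = p.reduce(I.gens())   # suspected leak: memory lost per call
sage: get_memory_usage() - m0      # grows with the number of iterations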

Cheers
Simon




[sage-support] Re: How to detect memory leaks?

2008-10-29 Thread mabshoff



On Oct 29, 2:28 am, Simon King [EMAIL PROTECTED] wrote:
> Dear Michael,
>
> On 28 Okt., 15:27, mabshoff [EMAIL PROTECTED] dortmund.de wrote:
> > Can you come up with some simple Cython code using libSingular that
> > shows the same behavior, i.e. the simpler the better. This would
> > help me potentially hunt down the cause.

Hi Simon,

> Your wish is my command...

:)

> It is ticket #4380. It seems to me that the leak is located in
> the `reduce` method of MPolynomial_libsingular.

*really* nice catch. I am testing the patch right now and it looks
like a positive review. Interestingly, Guppy would not have caught the
memory leak either, since it is inside Singular, i.e. omalloc screws
us here. Hans once showed me some debug tricks to hunt for leaks via
omalloc, so if this turns out to be a harder problem than we thought
we might want to go that way. As I mentioned on the ticket, if you
find anything else please open a new ticket, since I want this patch
to go in and it seems to resolve the vast majority of the problem in
your code.

> Cheers
>         Simon

Cheers,

Michael



[sage-support] Re: How to detect memory leaks?

2008-10-28 Thread Simon King

Dear team,

On Oct 27, 12:15 pm, Simon King [EMAIL PROTECTED] wrote:
[snip]
> So, it seems to me that the leak might come from other compiled
> components.
> Libsingular? This is what I'm using most frequently.

Now I am sure that the leak is in libsingular.

I produced an F5 version that uses Singular exclusively via pexpect
and makes no use of libsingular. That's the only change. See
http://sage.math.washington.edu/home/SimonKing/f5/f5S.pyx

I know that I use the pexpect interface in a very inefficient way. So
it is very slow -- but there is no leak!
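
For those unfamiliar with the difference: the pexpect route talks to an
external Singular process by sending it command strings, so all of
Singular's allocations live and die in that process. Schematically (a
sketch of the interface usage, not the actual code of f5S.pyx):

sage: from sage.interfaces.singular import singular
sage: singular.eval('ring r = 0, (x,y,z), dp;')
sage: singular.eval('ideal i = x2, y3;')
sage: singular.eval('poly p = x2y + z3;')
sage: singular.eval('reduce(p, std(i));')   # computed in the child process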

However, as my knowledge of libsingular, guppy and valgrind tends to
zero, I doubt that I will be able to solve the problem.

Cheers
   Simon




[sage-support] Re: How to detect memory leaks?

2008-10-27 Thread Simon King

Hi!

On Oct 25, 11:07 pm, Simon King [EMAIL PROTECTED] wrote:
> At http://sage.math.washington.edu/home/SimonKing/f5/f5.pyx is the
> latest version, i.e., the one with counters in __init__, __del__ and
> __dealloc__ (which of course should eventually be removed).

Sorry, in the meantime I found that I was mistaken about __del__
versus __dealloc__: there is no __del__ method for extension types.

My conjecture was that the memory leak was caused by (one of) the two
extension types in my code. But now it seems that the problem is
located somewhere else.

At http://sage.math.washington.edu/home/SimonKing/f5/f5.pyx is the
version with extension classes, and at 
http://sage.math.washington.edu/home/SimonKing/f5/f5B.pyx
is essentially the same code with Python classes replacing the
extension types. The memory leak is still present with Python
classes.

So, it seems to me that the leak might come from other compiled
components.
Libsingular? This is what I'm using most frequently.

Valgrinding revealed a definite loss of 612 bytes and a potential loss
of 394,644 bytes in two runs. See 
http://sage.math.washington.edu/home/SimonKing/f5/sage-memcheck.2402

Best regards
   Simon




[sage-support] Re: How to detect memory leaks?

2008-10-25 Thread Simon King

Dear Michael and all others,

I tried three approaches to track the problem down. Summary: it seems
that my extension class is deallocated (__dealloc__ is called) but
never deleted (__del__ is not called); see approach 3 below.

1.
On Oct 24, 6:03 pm, mabshoff [EMAIL PROTECTED] dortmund.de wrote:
> You need to rebuild Python after exporting SAGE_VALGRIND=yes -
> otherwise pymalloc is used, and as it is the valgrind log is useless
> in some regards.

Thanks.
I did rebuild Sage 3.1.4 with SAGE_VALGRIND=yes, installed the
optional valgrind spkg, and tried again.

After computing F=F5(); G=F(I) I had
sage: get_memory_usage()
1125.95703125

and after repeating the same computation 10 more times in a loop, it
was
sage: get_memory_usage()
1197.0703125

This time, valgrind found a tiny bit of unreachable memory, see
http://sage.math.washington.edu/home/SimonKing/f5/sage-memcheck.25789
==25789== LEAK SUMMARY:
==25789==    definitely lost: 697 bytes in 17 blocks.
==25789==      possibly lost: 399,844 bytes in 1,033 blocks.
==25789==    still reachable: 38,600,299 bytes in 332,837 blocks.
==25789==         suppressed: 337,860 bytes in 5,348 blocks.

However, this still does not explain the loss of 71 MB reported by
get_memory_usage.

2.
 Debugging these is hard and
 valgrind will not help much in that case. Much more useful could be
 Guppy.

I installed guppy. hpy told me that after the first round of my
computation I had (only indicating those items that increased):

Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
    1     42  15     4264  16     13480  50 tuple
    4     19   7     2280   8     21416  79 unicode
    5      3   1     1608   6     23024  85 dict (no owner)
    9     11   4      560   2     26312  97 str

But after 10 more runs I got:

    1     55  15     7424  20     16640  45 unicode
    2      4   1     4960  13     21600  59 dict (no owner)
    3     44  12     4408  12     26008  71 tuple
    7     46  13     1104   3     34208  93 int
    8     14   4      704   2     34912  95 str

Unfortunately I did not find a guppy tutorial. In particular, I don't
know how to find out where the increased size of the dicts and
unicodes comes from.
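
For the record, the basic guppy pattern seems to be to set a relative
heap mark and then look at what survives (a sketch based on the hpy()
interface; I have not explored the finer classification options):

sage: from guppy import hpy
sage: h = hpy()
sage: h.setrelheap()         # only count objects allocated from now on
sage: F = F5(); G1 = F(I); del F; del G1
sage: print h.heap()         # what survived since the mark
sage: print h.heap().byrcs   # classify survivors by referrer pattern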

3.
Is the following method a correct way of testing whether instances of
my extension classes are deleted at all? (A sketch of the
instrumentation follows below.)
  - In the __init__ method of my class DecoratedPolynomial, I
increased two globally defined counters by one.
  - I provided a custom __del__ method that did nothing but decrease
the first counter by one.
  - I provided a custom __dealloc__ method that simply decreased the
second counter by one.
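
Schematically (a sketch; the real code is in f5.pyx):

counter_del = 0      # decreased in __del__
counter_dealloc = 0  # decreased in __dealloc__

cdef class DecoratedPolynomial:
    def __init__(self):
        global counter_del, counter_dealloc
        counter_del += 1
        counter_dealloc += 1
    def __del__(self):          # as it turns out: never called
        global counter_del
        counter_del -= 1
    def __dealloc__(self):      # called when the object is reclaimed
        global counter_dealloc
        counter_dealloc -= 1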

Result:
The __init__ method was called 429 times (in 11 runs of the
computation, deleting everything at the end), and so was the
__dealloc__ method.
But the __del__ method was not called *at all*!

So, could this be at the core of the problem? I thought that a
__dealloc__ method is called only after the __del__ method has run,
or am I mistaken?
To avoid misunderstanding: In my original code I did not provide
custom __del__ or __dealloc__, but expected Cython to do the job.

Cheers
Simon








[sage-support] Re: How to detect memory leaks?

2008-10-25 Thread mabshoff



On Oct 25, 12:46 pm, Simon King [EMAIL PROTECTED] wrote:
> Dear Michael and all others,

Hi Simon,

> I tried three approaches to track the problem down. Summary: it seems
> that my extension class is deallocated (__dealloc__ is called) but
> never deleted (__del__ is not called); see approach 3 below.
>
> 1.
> On Oct 24, 6:03 pm, mabshoff [EMAIL PROTECTED] dortmund.de wrote:
> > You need to rebuild Python after exporting SAGE_VALGRIND=yes -
> > otherwise pymalloc is used, and as it is the valgrind log is useless
> > in some regards.

> Thanks.
> I did rebuild Sage 3.1.4 with SAGE_VALGRIND=yes, installed the
> optional valgrind spkg, and tried again.
>
> After computing F=F5(); G=F(I) I had
> sage: get_memory_usage()
> 1125.95703125
>
> and after repeating the same computation 10 more times in a loop, it
> was
> sage: get_memory_usage()
> 1197.0703125
>
> This time, valgrind found a tiny bit of unreachable memory, see
> http://sage.math.washington.edu/home/SimonKing/f5/sage-memcheck.25789
> ==25789== LEAK SUMMARY:
> ==25789==    definitely lost: 697 bytes in 17 blocks.
> ==25789==      possibly lost: 399,844 bytes in 1,033 blocks.
> ==25789==    still reachable: 38,600,299 bytes in 332,837 blocks.
> ==25789==         suppressed: 337,860 bytes in 5,348 blocks.
>
> However, this still does not explain the loss of 71 MB reported by
> get_memory_usage.

This is pretty much as expected, but now that pymalloc is no longer
used the log is likely much more readable, since a lot of false
positives are gone.

> 2.
> > Debugging these is hard and
> > valgrind will not help much in that case. Much more useful could be
> > Guppy.
>
> I installed guppy. hpy told me that after the first round of my
> computation I had (only indicating those items that increased):
>
> Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
>     1     42  15     4264  16     13480  50 tuple
>     4     19   7     2280   8     21416  79 unicode
>     5      3   1     1608   6     23024  85 dict (no owner)
>     9     11   4      560   2     26312  97 str
>
> But after 10 more runs I got:
>
>     1     55  15     7424  20     16640  45 unicode
>     2      4   1     4960  13     21600  59 dict (no owner)
>     3     44  12     4408  12     26008  71 tuple
>     7     46  13     1104   3     34208  93 int
>     8     14   4      704   2     34912  95 str
>
> Unfortunately I did not find a guppy tutorial. In particular, I don't
> know how to find out where the increased size of the dicts and
> unicodes comes from.

Yep, that is somewhat of a black art. I am planning to give a talk at
SD 11 about Guppy, since my project there involves debugging a similar
problem, but I have nothing obvious to contribute here. Is the latest
version of f5.pyx on the wiki so I can play around with it?

> 3.
> Is the following method a correct way of testing whether instances of
> my extension classes are deleted at all?
>   - In the __init__ method of my class DecoratedPolynomial, I
> increased two globally defined counters by one.
>   - I provided a custom __del__ method that did nothing but decrease
> the first counter by one.
>   - I provided a custom __dealloc__ method that simply decreased the
> second counter by one.
>
> Result:
> The __init__ method was called 429 times (in 11 runs of the
> computation, deleting everything at the end), and so was the
> __dealloc__ method.
> But the __del__ method was not called *at all*!

Mhh, some Python god needs to comment here. But once any Python object
goes out of scope and its reference count drops to zero, it gets
deallocated. You could sprinkle your code with manual reference count
checks and see if any of them keeps going up without being decremented
when it should. But that is also a black art IMHO :)
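
A minimal illustration of such a check with sys.getrefcount (which
itself holds a temporary reference, so the numbers are one higher than
you might expect):

sage: import sys
sage: x = ['some', 'object']
sage: sys.getrefcount(x)   # baseline: x plus getrefcount's own reference
2
sage: y = x                # one more reference
sage: sys.getrefcount(x)
3
sage: del y
sage: sys.getrefcount(x)   # back to the baseline, as it should be
2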

> So, could this be at the core of the problem? I thought that a
> __dealloc__ method is called only after the __del__ method has run,
> or am I mistaken?
> To avoid misunderstanding: In my original code I did not provide
> custom __del__ or __dealloc__, but expected Cython to do the job.
>
> Cheers
>         Simon

Cheers,

Michael



[sage-support] Re: How to detect memory leaks?

2008-10-25 Thread Simon King

Hi Michael,

On Oct 25, 11:28 pm, mabshoff [EMAIL PROTECTED] dortmund.de wrote:
[snip]
> > Unfortunately I did not find a guppy tutorial. In particular, I don't
> > know how to find out where the increased size of the dicts and
> > unicodes comes from.
>
> Yep, that is somewhat of a black art. I am planning to give a talk at
> SD 11 about Guppy, since my project there involves debugging a similar
> problem, but I have nothing obvious to contribute here. Is the latest
> version of f5.pyx on the wiki so I can play around with it?

At http://sage.math.washington.edu/home/SimonKing/f5/f5.pyx is the
latest version, i.e., the one with counters in __init__, __del__ and
__dealloc__ (which of course should eventually be removed).

Cheers
Simon




[sage-support] Re: How to detect memory leaks?

2008-10-24 Thread Simon King

Dear Robert,

On 24 Okt., 06:46, Robert Bradshaw [EMAIL PROTECTED] wrote:
> Well, memory leaks are always possible, but this should be safe (i.e.
> if there's a bug, it's in Cython). Can you post what your class
> definition is?

It is the Toy-F5 that I implemented at Sage Days 10. See
http://sage.math.washington.edu/home/SimonKing/f5/f5.pyx

> Are you running this from the command line?

Yes. The example is:
sage: attach f5.pyx
Compiling /home/SimonKing/f5/f5.pyx...
sage: from sage.rings.ideal import Cyclic
sage: R=PolynomialRing(QQ,'x',5)
sage: I=Cyclic(R)
sage: F=F5()
sage: time G1=F(I)
# removing some protocol output
CPU times: user 0.30 s, sys: 0.02 s, total: 0.32 s
Wall time: 0.32 s
sage: G1==I.groebner_basis()
True
sage: del F
sage: get_memory_usage()
423.21484375
sage: for i in range(100):
....:     F=F5()
....:     G1=F(I)
....:     del F
# removing protocol output
sage: get_memory_usage()
681.00390625
sage: (_-423.21484375)/100
2.5778906250

> try doing the whole operation in a function body
> and see if the results are the same.

sage: def f(n):
....:     for i in range(n):
....:         F=F5()
....:         G1=F(I)
sage: f(100)
# removing protocol
sage: get_memory_usage()
938.33984375
sage: (_-681.00390625)/100
2.5733593750

So, that's about the same.

I wonder one thing: could the ~2.58 MB come from the protocol output
that my function prints?
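
One way to test that hypothesis would be to throw the protocol output
away and compare (a sketch; I am assuming F5 has no verbosity switch,
so stdout is redirected instead):

sage: import sys, os
sage: devnull = open(os.devnull, 'w')
sage: real_stdout = sys.stdout
sage: sys.stdout = devnull        # silence the protocol output
sage: f(100)
sage: sys.stdout = real_stdout    # restore
sage: get_memory_usage()          # still ~2.58 MB more per run?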

Cheers
  Simon




[sage-support] Re: How to detect memory leaks?

2008-10-24 Thread Simon King

Dear Carlo and all others,

On Oct 23, 5:11 pm, Carlo Hamalainen [EMAIL PROTECTED] wrote:
> Valgrind is the thing to try: http://wiki.sagemath.org/ValgrindingSage

Sorry, when I read the first lines of that page I thought I had to
rebuild Sage from scratch. But later it says that there is an optional
valgrind package for Sage. I installed it and tried sage -valgrind.

Then I did
sage: attach f5.pyx
Compiling /home/king/Projekte/f5/f5.pyx...
sage: from sage.rings.ideal import Cyclic
sage: R=PolynomialRing(QQ,'x',5)
sage: I=Cyclic(R).homogenize()
sage: get_memory_usage()
1013.09765625
sage: F=F5()
sage: G1=F(I)
sage: del F
sage: del G1
sage: get_memory_usage()
1035.59765625
sage: F=F5()
sage: G1=F(I)
sage: del F
sage: del G1
sage: get_memory_usage()
1053.04296875
sage: quit

However, it didn't help much.

The valgrind output available at 
http://sage.math.washington.edu/home/SimonKing/f5/sage-memcheck.18590
says in the summary that ~386kB are possibly lost and nothing is
definitely lost.

I think this doesn't fit the output of get_memory_usage() above:
running F=F5(); G=F(I) in a loop soon eats all memory.

Moreover, as far as I understand, most of the valgrind output does not
refer to my code. There is only a handful of references to
_home_king_Projekte_f5_f5_pyx_0.c

Can you help me with interpreting the valgrind findings?

Thank you very much
  Simon




[sage-support] Re: How to detect memory leaks?

2008-10-24 Thread mabshoff



On Oct 24, 8:52 am, Simon King [EMAIL PROTECTED] wrote:
> Dear Carlo and all others,
>
> On Oct 23, 5:11 pm, Carlo Hamalainen [EMAIL PROTECTED] wrote:
> > Valgrind is the thing to try: http://wiki.sagemath.org/ValgrindingSage
>
> Sorry, when I read the first lines of that page I thought I had to
> rebuild Sage from scratch. But later it says that there is an optional
> valgrind package for Sage. I installed it and tried sage -valgrind.

You need to rebuild Python after exporting SAGE_VALGRIND=yes -
otherwise pymalloc is used, and as it is the valgrind log is useless
in some regards.

> Then I did
> sage: attach f5.pyx
> Compiling /home/king/Projekte/f5/f5.pyx...
> sage: from sage.rings.ideal import Cyclic
> sage: R=PolynomialRing(QQ,'x',5)
> sage: I=Cyclic(R).homogenize()
> sage: get_memory_usage()
> 1013.09765625
> sage: F=F5()
> sage: G1=F(I)
> sage: del F
> sage: del G1
> sage: get_memory_usage()
> 1035.59765625
> sage: F=F5()
> sage: G1=F(I)
> sage: del F
> sage: del G1
> sage: get_memory_usage()
> 1053.04296875
> sage: quit
>
> However, it didn't help much.
>
> The valgrind output available at
> http://sage.math.washington.edu/home/SimonKing/f5/sage-memcheck.18590
> says in the summary that ~386kB are possibly lost and nothing is
> definitely lost.
>
> I think this doesn't fit the output of get_memory_usage() above:
> running F=F5(); G=F(I) in a loop soon eats all memory.
>
> Moreover, as far as I understand, most of the valgrind output does
> not refer to my code. There is only a handful of references to
> _home_king_Projekte_f5_f5_pyx_0.c
>
> Can you help me with interpreting the valgrind findings?

The key thing here is that the still reachable amount of memory is
growing, which indicates a problem with objects on the Python heap not
getting properly deallocated, i.e. a potential reference count issue
when using Cython, for example. Debugging these is hard and valgrind
will not help much in that case. Much more useful could be Guppy.
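
To illustrate the "still reachable" pattern: such memory is usually
held alive by a surviving Python reference, for example a module-level
cache (purely illustrative, not a claim about f5.pyx):

_cache = {}                    # lives until the interpreter exits

def normalized(p):
    # every distinct argument stays cached forever, so the heap grows
    # with each run, yet valgrind sees nothing as "definitely lost"
    if p not in _cache:
        _cache[p] = p * p
    return _cache[p]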

> Thank you very much
>       Simon

Cheers,

Michael



[sage-support] Re: How to detect memory leaks?

2008-10-23 Thread Carlo Hamalainen

On Thu, Oct 23, 2008 at 4:57 PM, Simon King [EMAIL PROTECTED] wrote:
> Nevertheless, get_memory_usage() shows that
>   creating an object of class B,
>   doing a computation Result = B(...), and
>   deleting B
> results in an increased memory usage of 2.57MB per run.
>
> What tools do you recommend for tracking that memory leak down?

Valgrind is the thing to try: http://wiki.sagemath.org/ValgrindingSage

-- 
Carlo Hamalainen
http://carlo-hamalainen.net




[sage-support] Re: How to detect memory leaks?

2008-10-23 Thread mabshoff



On Oct 23, 7:57 am, Simon King [EMAIL PROTECTED] wrote:
> Dear Sage team,
>
> I have two cdef'd classes (let's call them A and B). They have some
> cdef'd attributes that should be harmless, namely of type int or list
> or dict. The entries of these lists/dicts may be objects of class A,
> though.
>
> Let me emphasize that A and B do not rely on any external (wrapped)
> C types. They are entirely built from int, list, dict, object. In
> particular, I am not doing any nasty memory allocation.
>
> I understood that Cython knows how to allocate and deallocate cdef'd
> attributes of type int, list, dict, object etc. Hence, I expected that
> I do not need to provide __del__ or __dealloc__ methods for A and B.
> And I expected that it is virtually impossible to produce a memory
> leak in such a setting.

Famous last words.

> Nevertheless, get_memory_usage() shows that
>    creating an object of class B,
>    doing a computation Result = B(...), and
>    deleting B
> results in an increased memory usage of 2.57MB per run.

Do you also delete Result?

> What tools do you recommend for tracking that memory leak down?
>
> Yours
>        Simon

Depending on the situation it can be either guppy or valgrind. I can't
say anything without seeing the code, but I don't really have time to
take a closer look for the next couple of days anyway.
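
For reference, the kind of setup described above would look roughly
like this (a hypothetical sketch; all names are invented and the real
definitions are in f5.pyx):

cdef class A:
    cdef int index
    cdef list data        # plain Python containers as attributes

cdef class B:
    cdef dict table       # values may be instances of A
    cdef list pending
    # no __dealloc__ written: Cython generates the code that releases
    # the references held in these attributes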

Cheers,

Michael



[sage-support] Re: How to detect memory leaks?

2008-10-23 Thread Simon King

Dear Carlo,

On Oct 23, 5:11 pm, Carlo Hamalainen [EMAIL PROTECTED] wrote:
> Valgrind is the thing to try: http://wiki.sagemath.org/ValgrindingSage

Thank you!

Do I understand correctly that I can use valgrind only after setting
some environment variable and rebuilding Sage?

Yours
  Simon



[sage-support] Re: How to detect memory leaks?

2008-10-23 Thread Carlo Hamalainen

On Thu, Oct 23, 2008 at 5:20 PM, Simon King [EMAIL PROTECTED] wrote:
> Do I understand correctly that I can use valgrind only after setting
> some environment variable and rebuilding Sage?

I think so, but I'm not 100% sure; this seems to be new in 3.1.2, and
I last used Valgrind on an earlier version of Sage.

-- 
Carlo Hamalainen
http://carlo-hamalainen.net




[sage-support] Re: How to detect memory leaks?

2008-10-23 Thread Robert Bradshaw

On Oct 23, 2008, at 7:57 AM, Simon King wrote:

> Dear Sage team,
>
> I have two cdef'd classes (let's call them A and B). They have some
> cdef'd attributes that should be harmless, namely of type int or list
> or dict. The entries of these lists/dicts may be objects of class A,
> though.
>
> Let me emphasize that A and B do not rely on any external (wrapped)
> C types. They are entirely built from int, list, dict, object. In
> particular, I am not doing any nasty memory allocation.
>
> I understood that Cython knows how to allocate and deallocate cdef'd
> attributes of type int, list, dict, object etc. Hence, I expected that
> I do not need to provide __del__ or __dealloc__ methods for A and B.
> And I expected that it is virtually impossible to produce a memory
> leak in such a setting.

Well, memory leaks are always possible, but this should be safe (i.e.
if there's a bug, it's in Cython). Can you post what your class
definition is?


> Nevertheless, get_memory_usage() shows that
>    creating an object of class B,
>    doing a computation Result = B(...), and
>    deleting B
> results in an increased memory usage of 2.57MB per run.
>
> What tools do you recommend for tracking that memory leak down?

Are you running this from the command line? IPython does lots of
caching of results -- try doing the whole operation in a function body
and see if the results are the same.

- Robert

