Unfortunately I do not have Linux or much time to invest in researching and 
learning an alternative to Valgrind :/

My current workaround, which works very well, is to move the scipy part of my 
script into its own script and then use os.system() to call it with the 
appropriate arguments.
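
In case it helps anyone else, here is a minimal sketch of that workaround 
(process_binary.py and its arguments are hypothetical stand-ins for the real 
scipy script; the key point is that the scipy work happens in a separate 
process, so whatever it leaks is reclaimed by the OS when it exits):

    import os
    import sys

    def run_scipy_step(in_path, out_path):
        # process_binary.py is a hypothetical standalone script that
        # holds all of the scipy calls; it runs as a child process,
        # so any memory it leaks is freed when the process exits
        cmd = '%s process_binary.py %s %s' % (sys.executable,
                                              in_path, out_path)
        rc = os.system(cmd)
        if rc != 0:
            raise RuntimeError('scipy step failed with exit code %d' % rc)

    run_scipy_step('input.tif', 'output.tif')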


Thanks everyone for the replies! Is there a proper way to close the thread?


-Joe

-----Original Message-----
From: numpy-discussion-boun...@scipy.org 
[mailto:numpy-discussion-boun...@scipy.org] On Behalf Of Julian Taylor
Sent: Wednesday, January 29, 2014 11:53 AM
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] Memory leak in numpy?

On 29.01.2014 20:44, Nathaniel Smith wrote:
> On Wed, Jan 29, 2014 at 7:39 PM, Joseph McGlinchy <jmcglin...@esri.com> wrote:
>> Upon further investigation, I do believe the leak is within the scipy 
>> code. I commented out my call to processBinaryImage(), which consists 
>> entirely of scipy calls, and my memory usage remains flat, with 
>> approximately 1 MB of variation. Any ideas?
> 
> I'd suggest continuing along this line, and keep chopping things out 
> until you have a minimal program that still shows the problem -- 
> that'll probably make it much clearer where the problem is actually 
> coming from...
> 
> -n

Depending on how long the program runs, you can try running it under massif, 
the valgrind memory-usage profiling tool; that should give you a good clue 
where the leak is coming from.
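
For reference, a typical massif run looks something like this (my_script.py is 
a placeholder; --pages-as-heap=yes makes massif account for all mapped pages 
rather than just malloc'd heap, which can matter for numpy's large arrays):

    # run the script under massif; this writes massif.out.<pid>
    valgrind --tool=massif --pages-as-heap=yes python my_script.py

    # summarize the snapshots and show which call sites hold the memory
    ms_print massif.out.<pid>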

_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
