On 05/17/2012 02:44 PM, Ceri Storey wrote:
Also, it's worth bearing in mind that heapy will only record objects
allocated and tracked by Python, which will of course be a subset of
the total memory allocated to Python by the OS (which is what the
virtual size (VIRT) measures). I'd guess that i
Okay, followup on this problem.
I've now replaced one large lxml.etree root with chunked writing
into a tempfile. The XML view basically does this (debug heapy
output included):
hp = guppy.hpy()
print " BEFORE TEMPFILE"
On Tue, 2012-05-08 at 00:50 +0100, Graham Higgins wrote:
> specifically:
specifically:
http://docs.python.org/library/xml.sax.utils.html#xml.sax.saxutils.XMLGenerator
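A minimal sketch of what XMLGenerator buys you: elements are written to the output stream as events happen, so no full tree is ever held in memory (the element names here are made up; in a view the StringIO would be the tempfile or response stream):

```python
from io import StringIO
from xml.sax.saxutils import XMLGenerator

out = StringIO()  # stand-in for a tempfile or the WSGI response stream
gen = XMLGenerator(out, encoding="utf-8")
gen.startDocument()
gen.startElement("items", {})
for i in range(3):
    # each element is serialized immediately, then discarded
    gen.startElement("item", {"id": str(i)})
    gen.characters("value %d" % i)
    gen.endElement("item")
gen.endElement("items")
gen.endDocument()
print(out.getvalue())
```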
--
Graham Higgins
http://bel-epa.com/gjh/
On Sun, 2012-05-06 at 19:14 -0600, John W. Shipman wrote:
> On Mon, 7 May 2012, Graham Higgins wrote:
> | AIUI, the standard approach to handling large XML files is to use a
> | stream processor such as SAX.
> Since when does SAX have an OUTPUT option?! I just looked at
> www.saxproject.org, and
On 05/08/2012 12:42 AM, Jonathan Vanasco wrote:
Yes, but if you're at the size where it needs to be an SOA, you
pretty much need to have that daemon running nonstop. So you have a
single process that is 'eternally' allocated 256MB (or whatever) and
does all the grunt work - and you never run i
On May 7, 11:50 am, "Vlad K." wrote:
> On 05/07/2012 05:37 PM, Jonathan Vanasco wrote:
>
> > - eventually i would refactor the code to use a SOA setup and have a
> > dedicated daemon handle the large stuff.
>
> But doesn't that suffer from the same set of problems? If the (daemon)
> process pers
On Mon, May 7, 2012 at 11:50 AM, Vlad K. wrote:
> 1. build XML in smaller chunks to a tempfile, yield it back to client
I have taken this approach in the past. Indeed if you use
tempfile.TemporaryFile (
http://docs.python.org/library/tempfile.html#tempfile.TemporaryFile) this
will work as you e
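A sketch of that tempfile-plus-yield pattern; the chunk generator is a hypothetical stand-in for the real per-format builder:

```python
import tempfile

def xml_pieces(n):
    # hypothetical stand-in for the real per-format XML builder
    yield "<rows>"
    for i in range(n):
        yield "<row>%d</row>" % i
    yield "</rows>"

def stream_response(n, chunk_size=8192):
    # spill the document to a temp file instead of holding it in RAM;
    # the file is deleted automatically when closed
    tmp = tempfile.TemporaryFile(mode="w+")
    for piece in xml_pieces(n):
        tmp.write(piece)
    tmp.seek(0)
    # WSGI-friendly iterator: hand the body back in fixed-size chunks
    while True:
        chunk = tmp.read(chunk_size)
        if not chunk:
            tmp.close()
            return
        yield chunk

body = "".join(stream_response(5))
```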
On 05/07/2012 05:37 PM, Jonathan Vanasco wrote:
- eventually i would refactor the code to use a SOA setup and have a
dedicated daemon handle the large stuff.
But doesn't that suffer from the same set of problems? If the (daemon)
process persists in memory, it doesn't matter if it's the wsgi
fwiw, I used to run into issues like this a lot under mod_perl. The
Apache process would lay claim to all the memory it ever used until a
restart (or max children is reached).
I used a few workarounds when I needed large data processing:
- I called an external process and collected the results
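That external-process workaround can be sketched like this — the child's memory goes back to the OS when it exits (the inline `-c` one-liner is a stand-in for a real generator script):

```python
import subprocess
import sys

# run the heavy generation in a short-lived child and collect its
# stdout; all memory the child used is freed when it exits
result = subprocess.run(
    [sys.executable, "-c", "print('<report>ok</report>')"],
    capture_output=True, text=True, check=True,
)
xml_body = result.stdout.strip()
```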
On 05/06/2012 10:11 PM, Ben Bangert wrote:
I found this presentation quite fascinating regarding how
Python handles memory:
http://revista.python.org.ar/2/en/html/memory-fragmentation.html
Oh yeah, I've seen that presentation before, it is very interesting.
Which version of Python? You mig
On Mon, 7 May 2012, Graham Higgins wrote:
+--
| (deletia)
| The discussion on the issue ticket associated with the commit:
|
| http://bugs.python.org/issue11849
|
| contains further details, including an example of exactly how creating
On Sun, 2012-05-06 at 01:42 +0200, Vlad K. wrote:
> As I understand it, Python won't release internally freed memory back
> to the OS.
Glad to read that you have an acceptable solution.
The issue piqued my curiosity and I found this from June 2010: "we have
recently experienced problems with Pyth
On 5/5/12 4:42 PM, Vlad K. wrote:
> As I understand it, Python won't release internally freed memory back
> to the OS.
This is actually no longer the case; I believe this behavior was fixed
in Python 2.6 or so. If you're curious about Python memory allocation,
there's also another interesting phen
On 05/06/2012 08:55 AM, Malthe Borch wrote:
On 6 May 2012 04:27, Vlad K. wrote:
I've got about 20 XML file formats to construct, each produced by its own
(pluggable) module because each defines its own format etc... Switching to a
file-based partial generation would mean a massive rewrite. I g
On 05/06/2012 07:31 AM, Roberto De Ioris wrote:
But why move away from uWSGI if it already gives you all of the features
you need to bypass your problem without installing other software?
(remember, uWSGI is not about speed, as a lot of people still think; it is
about features)
As options:
--
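The sort of uWSGI options being alluded to are presumably the worker-recycling knobs, which respawn a bloated worker automatically (the option names are real uWSGI settings; the thresholds below are illustrative):

```ini
[uwsgi]
; respawn a worker after it has served this many requests
max-requests = 1000
; respawn a worker once its resident memory (RSS) exceeds this many MB
reload-on-rss = 256
```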
On 6 May 2012 04:27, Vlad K. wrote:
> I've got about 20 XML file formats to construct, each produced by its own
> (pluggable) module because each defines its own format etc... Switching to a
> file-based partial generation would mean a massive rewrite. I guess this is
> one of the situations where
>
>
> I was hoping there's a way to safely kill a wsgi process from within it,
> I could do that only when such largish XML files are requested, or
> something else not obvious to me. Doesn't have to be uwsgi, though.
But why move away from uWSGI if it already gives you all of the features
you
I don't think it's a leak because the consumed memory is constant, ie.
the max requested at certain point in the life of the process, and it
stays there for same data set regardless of how often it is requested.
I'm not constructing the XML with strings directly but using lxml (and
not xml.e
Hi all.
As I understand it, Python won't release internally freed memory back
to the OS. So it happens in my Pyramid app, which has to process/construct
rather largish XML files occasionally, that the memory consumption jumps to
several hundred MB and stays there for good or until I recycle the
proc