On Tue, 2012-05-08 at 00:50 +0100, Graham Higgins wrote:
> specifically:
specifically:
http://docs.python.org/library/xml.sax.utils.html#xml.sax.saxutils.XMLGenerator
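A minimal sketch of using it for output - the generator writes SAX events
straight to any file-like object, so the whole document never has to sit in
memory (element names and the loop here are just placeholders):

    from xml.sax.saxutils import XMLGenerator
    from xml.sax.xmlreader import AttributesImpl

    with open('items.xml', 'w') as out:
        gen = XMLGenerator(out, encoding='utf-8')
        gen.startDocument()
        gen.startElement('items', AttributesImpl({}))
        for i in range(3):  # imagine a DB cursor or other iterable here
            gen.startElement('item', AttributesImpl({'id': str(i)}))
            gen.characters('value %d' % i)
            gen.endElement('item')
        gen.endElement('items')
        gen.endDocument()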
--
Graham Higgins
http://bel-epa.com/gjh/
On Sun, 2012-05-06 at 19:14 -0600, John W. Shipman wrote:
> On Mon, 7 May 2012, Graham Higgins wrote:
> | AIUI, the standard approach to handling large XML files is to use a
> | stream processor such as SAX.
> Since when does SAX have an OUTPUT option?! I just looked at
> www.saxproject.org, and
On 05/08/2012 12:42 AM, Jonathan Vanasco wrote:
Yes, but if you're at the size where it needs to be a SOA, you
pretty much need to have that daemon running nonstop. So you have a
single process that is 'eternally' allocated 256MB (or whatever) and
does all the grunt work - and you never run i
You make a very valid point there, yes. I wasn't fully thinking about
it, and it's one of those things that have to be considered up front.
.oO V Oo.
On 05/08/2012 12:36 AM, Jonathan Vanasco wrote:
My point is that the "machine" version will need to be pegged to
certain API versions - where consumers can expect to see certain
data, and hope to see other data.
On May 7, 11:50 am, "Vlad K." wrote:
> On 05/07/2012 05:37 PM, Jonathan Vanasco wrote:
>
> > - eventually i would refactor the code to use a SOA setup and have a
> > dedicated daemon handle the large stuff.
>
> But doesn't that suffer from the same set of problems? If the (daemon)
> process pers
My point is that the "machine" version will need to be pegged to
certain API versions - where consumers can expect to see certain
data, and hope to see other data.
The human version can constantly evolve, but the machine version needs
to be static and documented.
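roughly what i have in mind, sketched with Pyramid routes (just a sketch,
route and view names are invented): the machine endpoints live under an
explicit, documented version prefix, while the human-facing routes stay
free to change.

    from pyramid.config import Configurator
    from pyramid.view import view_config

    @view_config(route_name='api_v1_items', renderer='json')
    def api_v1_items(request):
        # v1 contract: return only the fields documented for v1
        return {'items': [{'id': 1, 'name': 'example'}]}

    def main(global_config, **settings):
        config = Configurator(settings=settings)
        config.add_route('api_v1_items', '/api/v1/items')  # pinned for machines
        config.add_route('items_html', '/items')            # human UI, can evolve
        config.scan()
        return config.make_wsgi_app()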
On 05/07/2012 06:01 PM, Bill Seitz wrote:
I agree, the human-UI needs to be optimized for an elegant UX, the API
needs to be kept logically simple.
For the API, you should be able to automate a lot of the CRUD
renderings by introspection on your db structure, equivalent to how
Rails/Django auto-generate an admin interface.
The hack that I used to get around this problem was to catch the exception
and retry the connection in the first function used on each page (the one
that loads the user, in that case). This woke up the MySQL server and did
the trick.
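Roughly what that looked like, assuming SQLAlchemy (the User model and the
session are placeholders):

    from sqlalchemy.exc import OperationalError

    def load_user(session, user_id):
        try:
            return session.query(User).get(user_id)
        except OperationalError:
            # stale pooled connection ("MySQL server has gone away"):
            # roll back, let the pool hand out a fresh connection,
            # and try exactly once more
            session.rollback()
            return session.query(User).get(user_id)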
On 7 May 2012 19:46, Mike Orr wrote:
> > On one site I ended up polling the DB via a cron job. Nasty hack, but
> > it got the job done when none of the other options described worked.
I had pool_recycle set, but TBH it was a value much higher than the
server's connection timeout. I just changed it, so we'll see if this
makes a difference!
Thank you for the input :)
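For reference, roughly what the engine setup looks like now (values are
illustrative; the point is keeping pool_recycle below the server's
wait_timeout):

    from sqlalchemy import create_engine

    engine = create_engine(
        'mysql://user:pass@localhost/dbname',
        pool_recycle=3600,   # keep this lower than the server's wait_timeout
        pool_size=5,
        max_overflow=10,
    )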
Randall Leeds writes:
> On Sun, May 6, 2012 at 5:48 PM, Parnell Springmeyer wrote:
>> I'm getting a
On Mon, May 7, 2012 at 8:44 AM, Jonathan Vanasco wrote:
> i generally hate this approach to web programming.
>
> i find it very shortsighted and unmanageable for those you're trying
> to service - if you change your website, the API more often than not
> causes apps to break.
I wouldn't go as far
> On one site I ended up polling the DB via a cron job. Nasty hack, but
> it got the job done when none of the other options described worked.
I used to have a lightly-used site that would sometimes go several
days without a request, so I had cron restart the application every 8
hours.
--
Mike Orr
On Mon, May 7, 2012 at 11:50 AM, Vlad K. wrote:
> 1. build XML in smaller chunks to a tempfile, yield it back to client
I have taken this approach in the past. Indeed if you use
tempfile.TemporaryFile (
http://docs.python.org/library/tempfile.html#tempfile.TemporaryFile) this
will work as you expect.
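Roughly the shape of it, as a sketch (no escaping shown, and "rows" stands
in for whatever you're serializing):

    import tempfile

    def xml_body(rows, chunk_size=64 * 1024):
        tmp = tempfile.TemporaryFile()   # lives on disk, deleted automatically
        tmp.write(b'<items>')
        for row in rows:
            # real code would escape values with xml.sax.saxutils.escape()
            tmp.write(('<item>%s</item>' % row).encode('utf-8'))
        tmp.write(b'</items>')
        tmp.seek(0)
        while True:                      # yield it back in fixed-size blocks
            block = tmp.read(chunk_size)
            if not block:
                break
            yield block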
On 7 May 2012 02:48, Parnell Springmeyer wrote:
> I have pool, pool recycle, and max overflow set... Is there something I'm
> missing? Or maybe it's the DB being hit really hard, or maybe a DB
> configuration option???
I've been through this a few times, and the main cause of this is the
server closing connections that have sat idle longer than its timeout.
I agree, the human-UI needs to be optimized for an elegant UX, the API
needs to be kept logically simple.
For the API, you should be able to automate a lot of the CRUD renderings by
introspection on your db structure, equivalent to how Rails/Django
auto-generate an admin interface.
For more detai
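Just to illustrate the introspection idea (a sketch, assuming SQLAlchemy
declarative models where attribute names match column names):

    def to_dict(obj):
        # build the API representation from whatever columns the mapped
        # class declares, so new columns show up without hand-written code
        return dict((c.name, getattr(obj, c.name))
                    for c in obj.__table__.columns)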
On 05/07/2012 05:37 PM, Jonathan Vanasco wrote:
- eventually i would refactor the code to use a SOA setup and have a
dedicated daemon handle the large stuff.
But doesn't that suffer from the same set of problems? If the (daemon)
process persists in memory, it doesn't matter if it's the wsgi app or a
separate daemon that is holding on to it.
i generally hate this approach to web programming.
i find it very shortsighted and unmanageable for those you're trying
to service - if you change your website, the API more often than not
causes apps to break. i see this popular in the rails community where
not many projects last long.
if you'r
fwiw, I used to run into issues like this a lot under mod_perl. the
apache process would lay claim to all the memory it ever used until a
restart ( or max children is reached ).
I used a few workarounds when i needed large data processing :
- i called an external process and collected the results
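e.g. something like this (the script name is made up, just the shape of it):

    import subprocess

    def run_big_export(input_path, output_path):
        # the heavy lifting happens in a short-lived child process, so every
        # byte it allocates goes back to the OS when it exits
        subprocess.check_call(
            ['python', 'generate_big_xml.py', input_path, output_path])
        with open(output_path, 'rb') as f:
            return f.read()   # or stream it, if the result is itself large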
On 05/06/2012 10:11 PM, Ben Bangert wrote:
I found this presentation quite fascinating regarding how
Python handles memory:
http://revista.python.org.ar/2/en/html/memory-fragmentation.html
Oh yeah, I've seen that presentation before, it is very interesting.
Which version of Python? You mig
Hi all.
I am building an application that will be used both by humans and
machines ("via an API") and I am making it RESTful. I've decided to
design it as one big API where a human-operated browser is just another
client to the API. With that I'm actually mixing user interface and data
in req
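To make that concrete, here is the rough shape I have in mind,
Pyramid-flavored (route and template names are invented): one view
callable, two representations, chosen by the Accept header.

    from pyramid.view import view_config

    @view_config(route_name='item', accept='application/json', renderer='json')
    @view_config(route_name='item', accept='text/html',
                 renderer='templates/item.pt')
    def item_view(request):
        # same data for a browser and for an API consumer;
        # only the representation differs
        return {'id': request.matchdict['id'], 'name': 'example'}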