On Monday, 11 January 2010 at 15:55 -0800, diana wrote:
> And now for a question about a completely different app (no sharding,
> very simple). I haven't got a sufficient response from the pylons
> group, so I'm trying here.
> 
> The question:
> 
> http://groups.google.com/group/pylons-discuss/browse_thread/thread/cb48d0ea2b084159

Well, if you only want to count entries, use Query.count(), not
Query.all(). It will be much more efficient, both on the DB side and on
the Python side. If you do need to process the entries one by one but
don't need to keep them in memory afterwards, just iterate over the
query (`for row in query: ...`).
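
For instance, a minimal sketch -- `session`, `Entry`, and `process` are
placeholders for your own session, mapped class, and per-row handler:

    # Efficient: the counting happens in the database; no rows are
    # fetched or turned into ORM objects.
    n = session.query(Entry).count()

    # Avoid this if you only need the count: it materializes every
    # row as a full ORM object and holds them all in one list.
    entries = session.query(Entry).all()

    # If you do need each row, iterate instead; rows are fetched
    # as you go rather than all kept in one big list.
    for entry in session.query(Entry):
        process(entry)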

Regardless, you are fetching 90000 objects and witnessing a 160MB
increase in memory. This works out to roughly 1.7KB per object.
Depending on the size and complexity of each row, this is not necessarily
surprising. Python will generally not be as memory-efficient as
hand-tailored structures written in C, since there is a lot of
genericity and flexibility in most Python datatypes. Objects in general
can be quite big, because they are based on dictionaries (dict objects)
which are themselves big.
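
You can get a feel for that overhead with sys.getsizeof. A quick
sketch -- exact numbers vary with the Python version and platform, and
the class and attributes here are made up:

    import sys

    class Entry(object):
        def __init__(self, id, title, body):
            self.id = id
            self.title = title
            self.body = body

    e = Entry(1, "some title", "some body")
    # Every instance carries an attribute dict; the dict alone is
    # typically a few hundred bytes, before counting the values it
    # holds or any per-instance state the ORM adds on top.
    print(sys.getsizeof(e.__dict__))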

As for releasing memory, try to call gc.collect() after you have
released the last reference to the result set. I'd be a bit surprised if
SQLAlchemy created reference cycles, though -- and would be inclined to
consider it a bug ;-)
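
Something along these lines, reusing the hypothetical `session` and
`Entry` from above:

    import gc

    entries = session.query(Entry).all()
    # ... work with entries ...

    # Drop the last reference to the result set, then run a full
    # collection to reclaim anything caught in reference cycles.
    del entries
    freed = gc.collect()  # returns the number of unreachable objects found
    print("collected %d objects" % freed)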


