On Dec 18, 2011, at 5:53 PM, rivka wrote:

> So - I actually investigated it thoroughly - and here are the results:
> 
> My database size on disk is 362MB and includes the main table plus
> multiple one-to-many associated tables. I am querying the main table
> (which holds little info in itself - mainly an id, an integer value,
> and a string value, mostly under 100 characters) joined with a
> one-to-many relationship from the main table.
> I run it through IPython, and the process starts at ~23MB; toward the
> very end of the query it soars to 582MB! And that is when the query
> involves only 100,000 rows out of the 1.2M total rows in the database.
> So I am very confused about the memory explosion. What occupies those
> 550MB of memory? Surely not the data from the DB, which is probably
> less than 50MB in total...
> That explains the behavior I have observed when attempting larger
> queries - the memory explodes and the system shifts to using virtual
> memory, which basically means working off the hard disk and churning
> memory and caches non-stop.
> 
> I need to get control over the memory footprint so that I can hold a
> larger query in memory and work with it directly.

You'd definitely need to forgo the ORM and build a very memory-efficient
data structure that suits your needs.
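
For illustration, here's a minimal sketch of that approach using Core
queries instead of ORM objects - plain tuples keyed by id. The table
and column names (main_table, int_value, str_value) and the database
URL are placeholders for your actual schema:

    from sqlalchemy import create_engine, text

    engine = create_engine("sqlite:///mydata.db")  # placeholder URL
    conn = engine.connect()

    # plain tuples carry none of the per-instance overhead that full
    # ORM objects (identity map entries, attribute state) bring along
    rows = {}
    result = conn.execute(
        text("SELECT id, int_value, str_value FROM main_table"))
    for id_, int_value, str_value in result:
        rows[id_] = (int_value, str_value)
    conn.close()

A dict of tuples like this should stay close to the size of the raw
data itself.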

It sounds like you might benefit from reorganizing your data into simple
dictionaries and lists and just using Redis:  http://redis.io/

Since you're looking for all the data to be "in memory", you should just
get it out of the relational database as the first step, then work with
an optimized structure that suits your needs.  Redis is designed for
this use case.
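
As a rough sketch of what that could look like with redis-py - the key
names and values here are hypothetical, and hset() with a mapping needs
redis-py 3.5 or later:

    import redis

    r = redis.Redis(host="localhost", port=6379, db=0)

    # one hash per main-table row
    r.hset("main:42", mapping={"int_value": 7, "str_value": "example"})

    # that row's one-to-many children, kept as a Redis list
    r.rpush("main:42:children", "child-a", "child-b")

    # reading back
    row = r.hgetall("main:42")                     # dict of bytes
    children = r.lrange("main:42:children", 0, -1)

The data lives in the Redis server's memory, so lookups stay fast
without the Python process itself holding the whole dataset.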

