Okay, after several test cases (various join combinations, with and without relationships, with and without cherry-picking only the columns actually used from the joined models), I've come to the conclusion that the only problem I'm having here is that there is no garbage collection. Python memory use just keeps growing, at a rate that of course depends on the size of the models and the amount of data queried, regardless of releasing/deleting instances or isolating each row's processing in its own committed transaction.
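To make it concrete, the loop I mean is roughly of this shape (the model, column names, and row count below are illustrative stand-ins for my real schema, not the actual app code):

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

# Illustrative stand-in model; the real models are larger and joined.
Base = declarative_base()

class Item(Base):
    __tablename__ = "items"
    id = Column(Integer, primary_key=True)
    payload = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

session = Session()
session.add_all(Item(payload="row %d" % i) for i in range(1000))
session.commit()

# Process rows one at a time; every loaded instance is tracked by the
# session's identity map until released.
processed = 0
for item in session.query(Item):
    processed += 1

# Even after dropping all instances from the session, the process's
# memory footprint doesn't shrink back in my tests.
session.expunge_all()
```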

I also found this:

http://permalink.gmane.org/gmane.comp.python.sqlalchemy.user/30087


So it appears I'm having the same problem.


Am I understanding correctly that, because of this, the SQLAlchemy ORM is useless in my case if I have to process thousands of rows, since the memory used to process each row (along with the corresponding joined models, etc.) will not be released? So basically I'd have to use SQLAlchemy without the ORM for this particular use case?

Or is this some memory leak bug?

If it's not a bug, any suggestions or examples on how to switch from ORM use to non-ORM, if I want to retain the named tuples returned by queries and avoid rewriting half the app?
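For context, this is the kind of Core-level query I imagine switching to, since Core result rows already behave like named tuples (the table and column names here are made up for illustration, and this uses the current SQLAlchemy API):

```python
from sqlalchemy import (create_engine, MetaData, Table, Column,
                        Integer, String, select)

# Illustrative stand-in schema; the real tables/columns differ.
engine = create_engine("sqlite://")
metadata = MetaData()
users = Table(
    "users", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String),
)
metadata.create_all(engine)

with engine.begin() as conn:
    conn.execute(users.insert(), [{"name": "alice"}, {"name": "bob"}])

# Core rows support attribute access (row.name) like the ORM's named
# tuples, but there is no identity map holding on to instances.
with engine.connect() as conn:
    rows = list(conn.execute(select(users.c.id, users.c.name)))
```

If attribute-style access keeps working like this, most of the app's row-consuming code might not need to change at all.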


Thanks.


.oO V Oo.


--
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.
