Okay, it seems two things are going on here. First, I am probably abusing relationships and should reconsider how I use them. Second, the "memory leak" is still happening: memory usage just keeps growing even though, in each of the 2000 iterations, I spawn an instance, modify it, flush it back to disk and/or even commit, and expunge or explicitly delete the objects at the end of the iteration.

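Roughly, the loop looks like this (a simplified sketch; RealEstate, realestate_ids and process_row() are placeholder names standing in for the actual per-row work, and transaction is the Zope transaction manager that Pyramid wires in):

    # Simplified sketch of the per-row loop described above; the names
    # RealEstate, realestate_ids and process_row() are placeholders.
    import transaction

    for realestate_id in realestate_ids:
        row = session.query(RealEstate).get(realestate_id)

        gt = session.query(GatewayTransaction)\
                    .filter(GatewayTransaction.realestate_id == realestate_id)\
                    .first() or GatewayTransaction()
        process_row(row, gt)          # modify the instance
        session.add(gt)
        session.flush()               # write to disk, and/or:
        transaction.commit()

        session.expunge_all()         # or session.expunge(gt), del gt, ...
        # ...and yet resident memory keeps growing from iteration to iteration
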
I must note that I'm using this in a Pyramid app which uses Transaction to wrap SQLAlchemy's transaction management. So maybe that's where the problem is, although I don't see how, because Transaction does not know or care about individual session model instances, no?

I also seem to misunderstand relationships. The model defines lazy="joined" by default, which is what I need it to do normally. In this particular loop I do not need the relationships at all, so I add .options(noload(...)), where for ... I have tried everything: relationship names as strings, as class attributes, even "*". It still constructs massive joined queries, as if either noload() can't override lazy="joined" or something else overrides the noload() given directly on the query:

                    gt = session.query(GatewayTransaction)\
                                .options(noload("*"))\
                                .filter(GatewayTransaction.realestate_id == row.realestate_id)\
                                .filter(GatewayTransaction.portal_id == k)\
                                .first() or GatewayTransaction()
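
For what it's worth, one way to double-check what actually goes to the database is to print the compiled statement before calling .first() (just a debugging sketch; the query itself is unchanged):

    # Build the query first, then inspect the SQL it would emit; str()
    # on a Query renders the compiled SELECT, so any leftover
    # LEFT OUTER JOINs from lazy="joined" show up here.
    q = session.query(GatewayTransaction)\
               .options(noload("*"))\
               .filter(GatewayTransaction.realestate_id == row.realestate_id)\
               .filter(GatewayTransaction.portal_id == k)
    print(str(q))
    gt = q.first() or GatewayTransaction()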



.oO V Oo.


On 02/22/2012 02:30 AM, Vlad K. wrote:
It is very, very slow: it takes minutes to process 2000 rows, memory usage skyrockets into the multiple-GB range, and I have to terminate it before it starts swapping like hell. With lazy="select" it flies, done in a couple of seconds with very little memory consumed, because at this point there are no rows in the table, so nothing is additionally selected, only inserted. Still, why would a join slow things down so drastically and shoot Python's memory usage (not the DB's) sky-high?
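
For context, the relationship is configured roughly like this (a sketch only; Portal, portals and the relationship name portal are placeholders, not the actual models):

    from sqlalchemy import Column, Integer, ForeignKey
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import relationship

    Base = declarative_base()

    class GatewayTransaction(Base):
        __tablename__ = "gateway_transactions"

        id = Column(Integer, primary_key=True)
        realestate_id = Column(Integer)
        portal_id = Column(Integer, ForeignKey("portals.id"))

        # lazy="joined" makes every query for GatewayTransaction pull the
        # related row in with a LEFT OUTER JOIN; lazy="select" defers it
        # to a separate SELECT on first attribute access.
        portal = relationship("Portal", lazy="joined")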

Also, even if I try session.expunge(gt) or session.expunge_all() (after preparing the "row"s to be loaded one by one from a list of IDs), the memory always keeps growing, as if the instances never die and never get garbage collected...
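
One way to check whether the instances really do survive, assuming nothing else in the loop holds references to them, is to force a collection and count them (purely a diagnostic sketch):

    import gc

    # After expunging, force a full collection and count surviving
    # GatewayTransaction instances; a count that keeps growing across
    # iterations would point at something still referencing them.
    gc.collect()
    leaked = [o for o in gc.get_objects()
              if isinstance(o, GatewayTransaction)]
    print(len(leaked), "GatewayTransaction instances still alive")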


