I have a database migration using sqlalchemy-migrate that needs to migrate
several million rows. After running for several days, the job has grown to
consume all available memory and is swapping.
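
For context, the loop has roughly this shape (the model names and engine
URL below are placeholders, not my real schema):

    from sqlalchemy import create_engine, Column, Integer, String
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import sessionmaker

    Base = declarative_base()

    class OldRecord(Base):             # placeholder model
        __tablename__ = 'old_records'
        id = Column(Integer, primary_key=True)
        data = Column(String)

    class NewRecord(Base):             # placeholder model
        __tablename__ = 'new_records'
        id = Column(Integer, primary_key=True)
        data = Column(String)

    engine = create_engine('sqlite://')    # stand-in for the real URL
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()

    # One big transaction: every new object added here stays strongly
    # referenced by the session until the final commit, so memory grows
    # with the row count.
    for old in session.query(OldRecord):
        session.add(NewRecord(data=old.data))
    session.commit()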

Can someone point me to a description of the best way to manage session
memory during a long-running transaction? Specifically:

1.  Does session.flush() remove object instances from the session?
2.  Should I break the migration into several subtransactions?
3.  Should I call expunge() at various points? (a rough sketch of what I
    mean by 2 and 3 follows this list)
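
For example, is something along these lines the right direction? This is
only a sketch, reusing the placeholder models above and assuming an
integer primary key to window on:

    # Sketch of 2 + 3: window over the primary key, commit each batch,
    # then expunge everything so the session holds no references.
    BATCH = 1000
    last_id = 0
    while True:
        batch = (session.query(OldRecord)
                        .filter(OldRecord.id > last_id)
                        .order_by(OldRecord.id)
                        .limit(BATCH)
                        .all())
        if not batch:
            break
        for old in batch:
            session.add(NewRecord(data=old.data))
        last_id = batch[-1].id
        session.commit()          # end the transaction for this batch
        session.expunge_all()     # drop all instances from the session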

Any help appreciated.

Thanks
