Looking at your code and considering the size of your database, I've run 
into performance issues under PostgreSQL a few times that may be related. 
They usually came down to the fact that very slight textual changes to a 
query can drastically alter the plan the server's query planner chooses.

The causes/fixes were:

1. Indexes often got ignored, ending up with lots of sequential row scans. 
 To fix, I would wrap the deletion work with some SQL to set up and tear 
down a new index covering all the columns present in the query.  That 
usually tricked the planner into using the new index (sketch 1 below).
2. Too many foreign-key checks against the table the rows were being 
deleted from.  Deferring the constraint checks often helped (sketch 2 
below).
3. The position of the COMMIT ended up clearing the db's internal cache on 
each loop, so rows/indexes were re-read from disk on every iteration. 
 Expanding the server's memory allocation for caching helped handle that 
(sketch 3 below).
4. Sometimes there was a mix of all the above and/or an issue with the 
sheer size of the query (too many values).  What often helped was replacing 
the DELETE with INSERTs of the ids to delete into a new temporary table, 
then deleting against that table (sketch 4 below).
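
For (1), a minimal sketch of the index setup/teardown, assuming a 
hypothetical "items" table and a delete filtered on account_id and 
created_at:

    -- Build an index covering exactly the columns the DELETE filters on,
    -- run the deletion, then drop the index again.
    CREATE INDEX tmp_items_delete_idx ON items (account_id, created_at);
    DELETE FROM items
        WHERE account_id = 42 AND created_at < '2016-01-01';
    DROP INDEX tmp_items_delete_idx;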
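
For (2), a sketch of deferring the checks; note this only works if the 
foreign keys were declared DEFERRABLE in the first place:

    BEGIN;
    -- Only affects constraints declared DEFERRABLE; the fkey checks then
    -- run once at COMMIT instead of once per deleted row.
    SET CONSTRAINTS ALL DEFERRED;
    DELETE FROM items WHERE account_id = 42;
    COMMIT;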
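
For (3), the relevant server settings can be changed from SQL on modern 
Postgres (9.4+).  The values below are purely illustrative and need tuning 
to your hardware:

    -- shared_buffers requires a server restart to take effect; the
    -- planner hint takes effect on reload.
    ALTER SYSTEM SET shared_buffers = '4GB';         -- Postgres' own page cache
    ALTER SYSTEM SET effective_cache_size = '12GB';  -- planner hint re: OS cache
    SELECT pg_reload_conf();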
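
For (4), a sketch of the temp-table approach, again with hypothetical 
names:

    -- Stage the ids in a temporary table, then delete via a join; each
    -- statement stays small, and the planner gets a real relation to
    -- work with instead of a huge IN (...) list.
    CREATE TEMPORARY TABLE ids_to_delete (id INTEGER PRIMARY KEY);
    INSERT INTO ids_to_delete (id) VALUES (1), (2), (3);  -- insert in batches
    DELETE FROM items
        USING ids_to_delete
        WHERE items.id = ids_to_delete.id;
    DROP TABLE ids_to_delete;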

