Thanks for your thoughts!
I was wondering which will be more economical in terms of memory and CPU:
1. Let SQLA manage cascades.
- query.all() brings the entire table into memory. I am trying to figure out
how to manage it with cursors.
- There is a processing overhead for SQLA to convert RDBMS rows into
Python objects.
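For what it's worth, one way around `query.all()` pulling the whole table in is to fetch and delete in small batches. This is only a sketch with a made-up `Parent` model (not your actual schema); each commit flushes that batch's deletes so the session never holds the full table:

```python
# Hypothetical minimal model, just to make the sketch self-contained.
from sqlalchemy import create_engine, Column, Integer
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([Parent() for _ in range(250)])
session.commit()

# Fetch at most 100 rows at a time instead of the whole table;
# committing each batch flushes the deletes and frees the objects.
while True:
    batch = session.query(Parent).limit(100).all()
    if not batch:
        break
    for parent in batch:
        session.delete(parent)
    session.commit()
```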
One solution is to change the commit strategy; issue commits periodically
during the loop.
parentlist = session.query(Parent).all()
count = 0
for parent in parentlist:
    session.delete(parent)
    count += 1
    if count % 100 == 0:  # use whatever frequency is needed
        session.commit()
session.commit()  # commit any remainder
Another solution is to use triggers in SQLite to enforce FK
relationships.
http://www.sqlite.org/cvstrac/wiki?p=ForeignKeyTriggers
http://code.google.com/p/sqlitefktg4sa/
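For anyone following along, the trigger approach from those links can be sketched with the plain sqlite3 module; the table and trigger names here are illustrative, not from either project:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY);
    CREATE TABLE child (
        id INTEGER PRIMARY KEY,
        parent_id INTEGER REFERENCES parent(id)
    );
    -- SQLite does not enforce this FK by default, so a trigger
    -- emulates ON DELETE CASCADE: deleting a parent removes its children
    -- inside the database, without materializing them in Python.
    CREATE TRIGGER fk_parent_child_delete
    AFTER DELETE ON parent
    FOR EACH ROW
    BEGIN
        DELETE FROM child WHERE parent_id = OLD.id;
    END;
""")
conn.execute("INSERT INTO parent (id) VALUES (1)")
conn.executemany("INSERT INTO child (parent_id) VALUES (?)", [(1,)] * 5)
conn.execute("DELETE FROM parent WHERE id = 1")
conn.commit()
```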
On May 29, 6:59 am, Mike Conley mconl...@gmail.com wrote:
One solution is to change the commit strategy; issue commits ...
Randy,
Interesting approach to foreign key management. Harish indicates he is
having a problem with restricted memory. Won't that still be true with
triggers? After all, if the problem is that the transaction is too big, it
will still be too big with all the pending deletes executed in a trigger.
Mike,
Well...I am not sure. I thought SQLite held transaction details in
a .journal file and not in memory. I thought that the memory use
might actually be a Python problem and not a result of SQLite. If my
thoughts are correct, using the FK approach should keep deleting the
children in SQLite rather than in Python.
Harish said it was an embedded system, so all resources are probably
pretty severely restricted.
--
Mike Conley
On Fri, May 29, 2009 at 9:44 AM, Randy Syring ra...@rcs-comp.com wrote:
Mike,
Well...I am not sure. I thought SQLite held transaction details in
a .journal file and not in ...
Ah...missed that part. Well, the triggers are fired per-row I
believe. So, if you don't use a transaction explicitly, and just let
SQLite run without one, it will be slower, but the resource usage
should be much less.
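If it helps, here is roughly what that looks like with the sqlite3 module (names are illustrative): `isolation_level=None` puts the connection in autocommit mode, so each DELETE runs in its own tiny implicit transaction instead of one big pending change set:

```python
import sqlite3

# isolation_level=None means autocommit: no explicit transaction wraps
# the deletes, so the journal never holds one huge pending change set.
conn = sqlite3.connect(':memory:', isolation_level=None)
conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO parent VALUES (?)",
                 [(i,) for i in range(500)])

# fetchall() materializes only the ids (small), so we are not
# mutating the table underneath an open cursor.
for (pk,) in conn.execute("SELECT id FROM parent").fetchall():
    conn.execute("DELETE FROM parent WHERE id = ?", (pk,))  # commits immediately
```

Slower overall, as Randy says, but the pending state per statement is minimal.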
On May 29, 10:31 am, Mike Conley mconl...@gmail.com wrote:
Harish said it ...