I have a scenario where I want to move 99+% of the records from one database
to another database that is initially empty except for a set of table
definitions (in practice, copied from a template file). On my Linux platform,
I find that INSERT INTO archive.my_table SELECT * FROM my_table WHERE (...)
takes unreasonably long (about 30MB of data is involved).
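For reference, here is a minimal, self-contained sketch of that approach using
Python's sqlite3 module: ATTACH the archive database to the connection holding
the current one, then move rows with one INSERT ... SELECT and one DELETE,
committed as a single transaction. The table name, schema, and the
"archived = 1" condition are placeholders for illustration (the real WHERE
clause from the post is elided).

```python
import os
import sqlite3
import tempfile

# Hypothetical paths and schema, purely for illustration.
tmp = tempfile.mkdtemp()
current_path = os.path.join(tmp, "current.db")
archive_path = os.path.join(tmp, "archive.db")

# Populate a small "current" database: 100 rows, 90 flagged for archiving.
con = sqlite3.connect(current_path)
con.execute("CREATE TABLE my_table (id INTEGER PRIMARY KEY, archived INTEGER)")
con.executemany("INSERT INTO my_table VALUES (?, ?)",
                [(i, 1 if i % 10 else 0) for i in range(100)])
con.commit()

# Create the archive with the same table definition (the "template").
arc = sqlite3.connect(archive_path)
arc.execute("CREATE TABLE my_table (id INTEGER PRIMARY KEY, archived INTEGER)")
arc.close()

# The approach from the post: attach the archive, move matching rows.
# Python's sqlite3 wraps both DML statements in one implicit transaction,
# ended by commit(), so the move is all-or-nothing.
con.execute("ATTACH DATABASE ? AS archive", (archive_path,))
con.execute("INSERT INTO archive.my_table "
            "SELECT * FROM my_table WHERE archived = 1")
con.execute("DELETE FROM my_table WHERE archived = 1")
con.commit()
```

When this is slow for only ~30MB, it is often worth checking that the copy and
delete really do run inside one transaction, since per-row commits are the
usual culprit.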

What I would rather do is: 1) move the current database file from its
current location to the archive location, 2) create a new current database
(from the same template I use now for the archive) and 3) copy back, from
archive to current, the rows that should *not* be archived (deleting them
from the archive afterward).
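The three steps above can be sketched as follows. This is only an outline
under some loud assumptions: every writer has closed its connection before the
rename (the "lock" step), any -wal/-shm sidecar files are gone (the database
was checkpointed and closed cleanly), and the template schema and the
keep-condition are hypothetical placeholders. The function name rotate() is
mine, not anything from SQLite.

```python
import os
import sqlite3

# Stand-in for the template file mentioned in the post.
TEMPLATE_SQL = "CREATE TABLE my_table (id INTEGER PRIMARY KEY, archived INTEGER)"

def rotate(current_path, archive_path, keep_where="archived = 0"):
    # 1) Move the current database file to the archive location.
    #    Safe only if no connection (and no -wal/-shm file) is open on it.
    os.rename(current_path, archive_path)

    # 2) Create a fresh current database from the template schema.
    con = sqlite3.connect(current_path)
    con.execute(TEMPLATE_SQL)

    # 3) Copy back the rows that should NOT be archived, then delete
    #    them from the archive; both statements commit together.
    con.execute("ATTACH DATABASE ? AS archive", (archive_path,))
    con.execute(f"INSERT INTO my_table "
                f"SELECT * FROM archive.my_table WHERE {keep_where}")
    con.execute(f"DELETE FROM archive.my_table WHERE {keep_where}")
    con.commit()
    con.close()
```

Note that SQLite makes a commit spanning the main and an attached database
atomic via a super-journal in rollback-journal mode, but not across databases
in WAL mode, so the journal mode matters for step 3.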

Clearly, I'll need to create a lock on the current database before moving
it, but I can foresee complications related to the "behind-the-curtain"
filesystem operations being performed. If someone has worked out all the
pitfalls of this scenario, I'd appreciate a recipe.

Thanks,

Chris
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
