On Tue, May 31, 2011 at 4:22 PM, Simon Slavin <slav...@bigfraud.org> wrote:
> Split the DROP into two stages:
>
> DELETE FROM myTable;
> DROP TABLE myTable;
>
> Which one takes all the time?  If it's the second one, then perhaps just 
> delete all the records.  Filling the table back up again with new rows 
> obviously already takes considerable time, so adding extra time may not be as 
> inconvenient.

That's going to have the same problem: in order to find out which pages
to add to the free list, the engine will have to traverse the interior
nodes of the table's b-tree.

You could rename the table and then DELETE FROM ... ORDER BY rowid ASC
LIMIT 1000, to delete 1000 rows at a time.  Add in incremental
autovacuum and that might do the trick.
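
A rough, untested sketch of what I mean (table name myTable is just an
example).  Note that the DELETE ... ORDER BY ... LIMIT syntax is only
available in builds compiled with SQLITE_ENABLE_UPDATE_DELETE_LIMIT, so
I've written the portable subquery form instead:

  ALTER TABLE myTable RENAME TO myTable_doomed;
  -- repeat this pair of statements until no rows remain:
  DELETE FROM myTable_doomed
   WHERE rowid IN (SELECT rowid FROM myTable_doomed
                    ORDER BY rowid ASC LIMIT 1000);
  PRAGMA incremental_vacuum(1000);  -- hand back some free pages each round
  -- (incremental_vacuum only frees pages if the DB runs with
  --  PRAGMA auto_vacuum = INCREMENTAL)
  -- once the table is empty, the DROP itself is cheap:
  DROP TABLE myTable_doomed;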

You could also re-create the DB in a new file and then use the filesystem
to delete the old file -- the filesystem will have to do more or less the
same work to free filesystem blocks, but it will do it asynchronously,
in the background.
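
Something like this, say (file and table names made up; one
CREATE TABLE ... AS SELECT per table you want to keep):

  ATTACH DATABASE 'new.db' AS newdb;
  CREATE TABLE newdb.keepme AS SELECT * FROM keepme;
  -- CREATE TABLE ... AS SELECT copies no indexes or constraints,
  -- so re-create those in newdb by hand
  DETACH DATABASE newdb;
  -- then, from the shell, once all connections are closed:
  --   mv new.db old.db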

Personally, I recommend Roger's VIEW approach to schema changes to
minimize data re-write overhead.
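
For anyone who missed that thread, the gist as I remember it (my
paraphrase, not necessarily Roger's exact recipe; all names invented) is
to leave the rows where they are and present the new schema through a
view with INSTEAD OF triggers:

  ALTER TABLE myTable RENAME TO myTable_real;
  CREATE VIEW myTable AS
    SELECT a, b, NULL AS c FROM myTable_real;  -- "adds" column c, no rewrite
  CREATE TRIGGER myTable_insert INSTEAD OF INSERT ON myTable
  BEGIN
    INSERT INTO myTable_real(a, b) VALUES (NEW.a, NEW.b);
  END;
  -- plus INSTEAD OF UPDATE/DELETE triggers as needed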

Nico