On Tue, May 31, 2011 at 14:29:11 -0500, Nico Williams wrote:
> Just a guess: finding all the pages to free requires traversing the
> internal nodes of the table's b-tree, which requires reading a fair
> subset of the table's b-tree, which might be a lot of I/O.  At 150MB/s
> it would take almost two minutes to read 15GB of b-tree pages from a
> single disk, and that's assuming the I/Os are sequential (which they
> will almost certainly not be).  So you can see why the drops might be
> slow.

Might well be. No individual table is anywhere near 15 GiB, but a single
table can easily reach 0.5 GiB (the whole file regularly grows to around
35 GiB, but it holds many tables plus a couple of indices on them, and I
didn't try to dig out how much each one takes).
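
Back of the envelope on those numbers, assuming the default 1 KiB page
size and a disk managing ~100 random reads/s (my assumptions, not
measured):

    0.5 GiB / 1 KiB per page          ~= 500k pages
    if even 1 page in 10 needs a seek ~= 50k random reads
    50k reads / 100 reads/s           ~= 500 s, i.e. ~8 minutes

versus roughly 3-4 seconds if the same half gigabyte streamed
sequentially at 150 MB/s. So even a 0.5 GiB table could plausibly stall
for minutes on the tree walk alone.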

> One workaround would be to rename the tables to be dropped and drop
> them later, when you can spare the time.

There is no such time. Besides, that would mean the pages would not be
available for the next table, making the file even larger and even more
fragmented.
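
If the drop cannot be deferred wholesale, deleting in bounded chunks
before the final DROP might at least spread the cost while still putting
the freed pages on the freelist for reuse. A rough sketch, assuming an
ordinary rowid table (name and batch size made up):

    -- repeat whenever there is a spare moment, until changes() is 0
    DELETE FROM big_table
     WHERE rowid IN (SELECT rowid FROM big_table LIMIT 10000);

    -- once the table is empty, the DROP has almost nothing to walk
    DROP TABLE big_table;

Each chunk still pays its share of the random I/O, but in small slices
rather than one multi-minute stall.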

> Longer term it'd be nice if SQLite3 could free a dropped table's pages
> incrementally rather than all at once, assuming my guess above is
> correct anyways.
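
For what it is worth, the closest existing knob I know of is incremental
vacuum, though it only staggers returning already-freed pages to the OS;
the tree walk at DROP time stays as it is. A sketch, assuming the
database was created with auto_vacuum enabled:

    PRAGMA auto_vacuum = INCREMENTAL; -- takes effect only before the
                                      -- first table exists, or after
                                      -- a full VACUUM
    DROP TABLE big_table;             -- the slow tree walk still
                                      -- happens here
    PRAGMA incremental_vacuum(1000);  -- afterwards, release up to 1000
                                      -- freelist pages per call, run
                                      -- whenever idle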

Regards,
Jan

-- 
                                                 Jan 'Bulb' Hudec <b...@ucw.cz>