On Sat, Aug 16, 2014 at 2:41 PM, Jonathan Moules <
jonathanmou...@warwickshire.gov.uk> wrote:

> Hi List,
>         More of a curiosity.
> I'm doing some general data munging and set off a script that consists
> entirely of 37 DROP TABLE statements. The database it's running against
> is a bit under 1GB, made up of about 5 million rows, and the tables
> being dropped constitute about 99% of the content.
>
>         My question is - why does it take so long? The total time required
> to create this dataset (most of which was processing on the Python
> side) was about 11 minutes.
>
>         The total time required to perform these drops is ... well, I
> cancelled it at 20 minutes - it had dropped 20 of the 37. For that
> entire period SQLite had been reading at a rate of 170MB/s - by my
> maths it had read about 200GB!
>
>         The tables don't have indexes, the settings are all whatever the
> defaults are.
>
>         Any suggestions as to what's going on? Is this normal behavior?
>


I made a copy of a 2.3GB database containing two tables that comprised
90% of the database space.  I did a DROP TABLE on each.  Each DROP TABLE
took about 100 milliseconds of real time.

DROP TABLE bmdTileTable;
Run Time: real 0.109 user 0.052133 sys 0.043098
DROP TABLE bmdTileTable_with_out;
Run Time: real 0.102 user 0.052688 sys 0.029648
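
For anyone wanting to reproduce the measurement, those Run Time lines are
what the sqlite3 command-line shell prints once its timer is switched on.
A minimal sketch - the database filename here is just a placeholder:

sqlite3 copy-of-test.db
sqlite> .timer on
sqlite> DROP TABLE bmdTileTable;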

Maybe you have auto-vacuum turned on in your database?  Auto-vacuum makes
operations like DROP TABLE much slower, because SQLite has to rearrange
content to move it all to the front of the file and then truncate the
file.
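
You can check with the auto_vacuum pragma.  A minimal sketch (run against
your own database; note that switching an existing database to or from
NONE only takes effect after a full VACUUM):

PRAGMA auto_vacuum;         -- 0 = NONE, 1 = FULL, 2 = INCREMENTAL
PRAGMA auto_vacuum = NONE;  -- recorded, but not applied until...
VACUUM;                     -- ...the database file is rebuilt

With auto_vacuum set to NONE, a DROP TABLE just moves the table's pages
onto the freelist instead of rewriting and relocating them.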


-- 
D. Richard Hipp
d...@sqlite.org