Hi List,
        More of a curiosity.
I'm doing some general data munging and set off a script made up
entirely of 37 DROP TABLE statements. The database it's running
against is a bit under 1GB, holding about 5 million rows, and the
tables being dropped account for about 99% of the content.
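
        For reference, the script boils down to something like the
following (a minimal sketch using Python's sqlite3 module; the file
name and table names here are invented, not the real ones):

            import sqlite3

            con = sqlite3.connect("munge.db")  # hypothetical filename
            cur = con.cursor()
            # One DROP per staging table, issued sequentially
            for i in range(37):
                # "staging_NN" stands in for the real table names
                cur.execute("DROP TABLE IF EXISTS staging_%02d" % i)
            con.commit()
            con.close()

That's one statement at a time; I haven't tried wrapping them all in
a single explicit transaction.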
        
        My question is: why does it take so long? The total time
needed to create this dataset (most of which was processing on the
Python side) was about 11 minutes.
        
        The total time required to perform the drops is ... well, I
cancelled it after 20 minutes, by which point it had dropped 20 of
the 37 tables. For that entire period SQLite had been reading at a
steady 170MB/s, so by my maths (170MB/s x 1,200 seconds) it had read
roughly 200GB!
        
        The tables don't have any indexes, and the settings are all
whatever the defaults are.
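
        In case it matters, this is how I checked which defaults are
actually in effect (again a sketch using Python's sqlite3 module; the
filename is made up). These are the settings I'd guess could change
how much I/O a DROP TABLE does:

            import sqlite3

            con = sqlite3.connect("munge.db")  # hypothetical filename
            # Pragmas that plausibly influence the cost of a DROP
            for pragma in ("page_size", "cache_size", "journal_mode",
                           "auto_vacuum", "secure_delete"):
                value = con.execute("PRAGMA %s" % pragma).fetchone()[0]
                print("%s = %s" % (pragma, value))
            con.close()

Everything reports the stock value, as expected.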
        
        Any suggestions as to what's going on? Is this normal
behavior?
        Thanks,
        Jonathan
