Dear experts,

My application uses a SQLite table with a unique index. During normal
processing, the unique index ensures that no duplicate records are entered
into the table. However, at regular intervals, large numbers of records
(millions) that are already known to be unique are inserted into the SQLite
table from another source. To speed up these bulk inserts, I first drop the
index on the SQLite table, do the bulk insert, and then recreate the index.
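For reference, the sequence I run looks roughly like the sketch below. The
table name (my_table), column name (record_id), and staging table
(import_source) are placeholders for illustration; only my_unique_index is
the actual index name.

  BEGIN TRANSACTION;

  -- Remove the unique index so the bulk insert does not pay the cost of
  -- maintaining it row by row.
  DROP INDEX my_unique_index;

  -- Bulk insert rows that are already known to be unique.
  INSERT INTO my_table (record_id, payload)
  SELECT record_id, payload FROM import_source;

  -- Rebuild the unique index in a single pass over the loaded data.
  CREATE UNIQUE INDEX my_unique_index ON my_table(record_id);

  COMMIT;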

The problem I'm encountering is that dropping the index can take much longer
than recreating it. In a test I ran this morning, the "DROP INDEX
my_unique_index" statement took 53 minutes to complete, but recreating the
index took only 9 minutes. Looking at the documentation for "DROP INDEX", I
noticed that it says, "The index is completely removed from the disk." I can
only assume that this is why dropping the index takes so long.

Are there any alternatives to dropping the index? Ideally I would like to
only disable it, without actually removing it from the file and incurring
the unnecessary cost of rewriting the database file.

Any ideas or comments would be much appreciated.

Regards,
Jaco