On 08/04/2011, at 7:42 PM, Jaco Breitenbach wrote:

> My application makes use of a SQLite table with a unique index.  During
> normal processing, the unique index is used to ensure no duplicate records
> are entered into the table.  However, at regular intervals large numbers of
> records (millions of records) that are already known to be unique, are
> inserted into the SQLite table from another source.  In order to speed up
> these bulk inserts, I first drop the index on the SQLite table, do the bulk
> insert, and then recreate the index.
> 
> The problem I'm encountering is that dropping of the index can potentially
> take much longer than recreating it.

Are you doing it all within a single transaction? E.g.:

begin immediate;
drop index if exists "My Index";
insert a pile of rows;
create unique index "My Index" on "My Table" ("My Column 1", "My Column 2");
commit;

This should shift all the major disk writing to the end of the transaction, 
hopefully speeding the process as a whole.

If a statement in the transaction fails, note that although SQLite rolls back 
the failed insert statement itself, it doesn't automatically roll back the 
whole transaction, so the earlier drop (and any create) isn't undone for you. I 
think this is a deficiency. But you can just watch the result of each statement 
and, if an error occurs, issue your own rollback.
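
For instance, here's a minimal sketch in C of that pattern, assuming an open 
sqlite3 handle and with the bulk insert step elided (the exec_or_rollback 
helper and bulk_insert function names are just for illustration):

#include <sqlite3.h>
#include <stdio.h>

/* Run one SQL statement; on failure, report it and issue our own
 * ROLLBACK, since SQLite won't undo the earlier DROP INDEX for us. */
static int exec_or_rollback(sqlite3 *db, const char *sql)
{
    char *errmsg = NULL;
    if (sqlite3_exec(db, sql, NULL, NULL, &errmsg) != SQLITE_OK) {
        fprintf(stderr, "failed: %s (%s)\n", sql, errmsg);
        sqlite3_free(errmsg);
        sqlite3_exec(db, "rollback;", NULL, NULL, NULL);
        return 1;
    }
    return 0;
}

int bulk_insert(sqlite3 *db)
{
    if (exec_or_rollback(db, "begin immediate;")) return 1;
    if (exec_or_rollback(db, "drop index if exists \"My Index\";")) return 1;
    /* ... run the bulk INSERT statements here, checking each one
     * and rolling back on error ... */
    if (exec_or_rollback(db,
            "create unique index \"My Index\" on \"My Table\" "
            "(\"My Column 1\", \"My Column 2\");")) return 1;
    return exec_or_rollback(db, "commit;");
}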

Tom
BareFeetWare

--
Comparison of SQLite GUI tools:
http://www.barefeetware.com/sqlite/compare/?ml
--
iPhone/iPad/iPod and Mac software development, specialising in databases