On 3 Feb 2013, at 6:34pm, Paul Sanderson <sandersonforens...@gmail.com> wrote:

> I want to populate a large table (millions of rows) as quickly as possible.
> 
> The data set will not be operated on until the table is fully populated, and
> if the operation fails I will be starting again. The operation is a one-off
> and the table will not be added to at a future date.
> 
> What optimisations will work best for me?

Don't worry about hacking into SQLite's internals.  You should initially try 
something like:

DROP INDEX ...;                        -- drop all your indexes
DELETE FROM myTable;
BEGIN;
INSERT INTO myTable VALUES (...);
INSERT INTO myTable VALUES (...);
INSERT INTO myTable VALUES (...);
...
END;
CREATE INDEX ... ON myTable (...);     -- recreate whatever indexes you want

If that proves not to be fast enough for you, you can try committing a new 
transaction every 100,000 rows or so: just do "END; BEGIN" every so often.  If 
it's still not fast enough you can try 'PRAGMA journal_mode = OFF'.

But note that these things may actually slow things down: whether they make 
the operation faster or slower depends on the relative speeds of your memory 
and mass storage hardware, and on the bandwidth and address switching built 
into your motherboard.  And it will take longer to run the experiments on your 
specific setup to figure out which option is fastest than it will take to just 
set up one of them and let it run.

So don't try to wring the 'best' speed out of your setup.  You can definitely 
assess 'fast enough', but it's pointless trying to assess 'fastest'.

Simon.
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users