Back when I used Oracle OCI I did "array" inserts: you load an array
for each column to be inserted, bind the arrays to the insert
statement, and then do one big insert. It was quite a fast way to
load data.
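SQLite has no array-bind interface, but the closest equivalent I know
of is to prepare the INSERT once and re-bind/step it for every row
inside a single transaction. A rough sketch, with the table name,
columns, and integer data all made up for illustration (most error
checking omitted):

  #include <sqlite3.h>

  /* Sketch: bulk-load rows by reusing one prepared INSERT inside a
  ** single transaction.  tbl_name(a,b,c) is a placeholder schema. */
  static int bulk_insert(sqlite3 *db, int aRow[][3], int nRow){
    sqlite3_stmt *pStmt;
    int i, rc;

    rc = sqlite3_prepare_v2(db,
           "INSERT INTO tbl_name(a,b,c) VALUES(?,?,?)", -1, &pStmt, 0);
    if( rc!=SQLITE_OK ) return rc;

    sqlite3_exec(db, "BEGIN", 0, 0, 0);
    for(i=0; i<nRow; i++){
      sqlite3_bind_int(pStmt, 1, aRow[i][0]);
      sqlite3_bind_int(pStmt, 2, aRow[i][1]);
      sqlite3_bind_int(pStmt, 3, aRow[i][2]);
      sqlite3_step(pStmt);      /* should really check for SQLITE_DONE */
      sqlite3_reset(pStmt);
    }
    sqlite3_finalize(pStmt);
    return sqlite3_exec(db, "COMMIT", 0, 0, 0);
  }

The single transaction is what gives the big speedup; the prepared
statement just avoids re-parsing the SQL for every row.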
Joe Wilson wrote:
Some people on the list have noted that inserting pre-sorted
rows in sub-batches into SQLite is faster than inserting
unsorted rows. Granted, you could only do this for one index
per table, but might this be the basis of an optimization?
(I have not looked at the insert code. Perhaps SQLite is
already doing this for all I know.)
Would this be a bad time to ask for multi-row insert support? :-)
INSERT INTO tbl_name(a,b,c) VALUES(1,2,3),(4,5,6),(7,8,9);
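To make the pre-sorted sub-batch idea concrete, here is a rough sketch
along the same lines as the bulk_insert() sketch above. The index is
assumed to be on column a, and the sub-batch size of 10000 is
arbitrary:

  #include <stdlib.h>
  #include <sqlite3.h>

  /* Compare two rows on the indexed column (assumed to be column a,
  ** stored at position 0 of each row). */
  static int cmpByA(const void *p1, const void *p2){
    const int *r1 = p1, *r2 = p2;
    return (r1[0] > r2[0]) - (r1[0] < r2[0]);
  }

  /* Sketch: sort the pending rows on the indexed column, then insert
  ** them in sub-batches, one transaction per batch, so that successive
  ** index insertions land on neighboring pages.  bulk_insert() is the
  ** helper sketched earlier in this message. */
  static void sorted_load(sqlite3 *db, int aRow[][3], int nRow){
    int i;
    qsort(aRow, nRow, sizeof(aRow[0]), cmpByA);
    for(i=0; i<nRow; i+=10000){
      int n = nRow-i<10000 ? nRow-i : 10000;
      bulk_insert(db, aRow+i, n);
    }
  }

As Joe says, only one index can benefit: any other index on the table
still sees its keys arrive in effectively random order.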
The way indices work in SQLite is that there is one row
in the index for each row in the table but the index rows
are in index order. If the indexed values are randomly
distributed in the table, that means building the index
requires inserting each row in a random spot in the middle
of the index. Constantly inserting things in random places
means that there is no locality of reference and the pager
cache does not perform well. It is not clear to me what
can be done about this other than to completely redesign
how indices work. And even then, I cannot think of an
alternative design that would do much better.
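To put very rough numbers on the locality problem (the figures are
assumptions, not measurements): with the default 1024-byte page and
2000-page cache, roughly 2 MB of index pages stay in memory, which is
on the order of a few hundred thousand small index entries. Up to that
point random inserts stay cheap because every index page is cached.
Beyond it, a randomly placed insert usually lands on a page that has
been evicted, so nearly every insert pays for a page read and,
eventually, a page write, whereas inserting in key order keeps
appending to the same right-most leaf page until it fills.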