On Tuesday, 10 September, 2019 09:26, mailing lists <mailingli...@skywind.eu> 
wrote:

>I cannot really put all the inserts into one transaction because in case of
>a failure I lose all the already inserted data. 

Why is that important?  Can't you just reload the data from whence it came in 
the first place on failure?  

>There is hardly any performance gain anymore when doing 1000 or 10 000
>insertions in one transaction including immediate insertion into indices
>(in my case the difference is in the per cent range).

I find that the difference between (a) "inserting all the data into a table 
with indexes in a single transaction"; (b) "inserting all the data into a table 
without indexes in a single transaction and then creating the indexes"; and (c) 
"within a single transaction dropping the indexes, inserting all the data, then 
recreating the indexes" is:

(b) is about 10% faster than (a)
(c) is about 40% faster than (a)

Smaller batch sizes result in more random I/O, and performance decreases as the 
batch size decreases.
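As a rough illustration, approach (c) above can be sketched with Python's 
standard sqlite3 bindings (table and index names here are made up for the 
example, not taken from the original poster's schema):

```python
# Sketch of approach (c): within one transaction, drop the index,
# bulk-insert all rows, then recreate the index.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, v TEXT)")
con.execute("CREATE INDEX t_v ON t(v)")

rows = [(i, "value-%d" % i) for i in range(100_000)]

with con:  # the context manager wraps BEGIN ... COMMIT (rollback on error)
    con.execute("DROP INDEX t_v")
    con.executemany("INSERT INTO t(id, v) VALUES(?, ?)", rows)
    con.execute("CREATE INDEX t_v ON t(v)")

print(con.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # prints 100000
```

Because everything happens inside a single transaction, a failure rolls back 
the whole batch, including the DROP INDEX, so the table is left as it was.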

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.



_______________________________________________
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users