Hello Simon!

Thanks for your reply!

I'm already aware of those suggestions, and if you look at the database schema and the program used to insert the data you can see that there are no unnecessary indices active and that all insertions happen inside transactions.

About the vacuum: I also understand that it needs to rewrite the whole database, but I'm not sure it should really require almost 5 times the database size in both reads and writes (an equivalent amount of I/O also happened during the insertions).

Cheers!


On 01/10/16 18:12, Simon Slavin wrote:
On 1 Oct 2016, at 9:27pm, Domingo Alvarez Duarte <mingo...@gmail.com> wrote:

I'm using sqlite (trunk) for a database (see below), and for a final database file of 
22GB a "vacuum" was executed; doing so made a lot of I/O (134GB reads 
and 117GB writes in 2h30min).

Can something be improved on sqlite to achieve a better performance ?
VACUUM rewrites the entire database.  It will always do a lot of I/O.  You 
should never need to use VACUUM in a production setting.  Perhaps in a 
once-a-year maintenance utility, but not in normal use.

The fastest way to do lots of insertions is (sketched in SQL below):

DROP all INDEXes
DELETE FROM all TABLEs
Do your insertions, bundling each thousand (or ten thousand, depending
        on your system) uses of INSERT into a transaction
If you really want to do VACUUM, do it here
reCREATE all your INDEXes
ANALYZE

(the ANALYZE will also do lots of I/O, though not as much as VACUUM, but it may speed 
up queries that use WHERE / ORDER BY clauses).
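
As an illustration only, here is a minimal sketch of that sequence, assuming a
hypothetical table t(id, val) with a single index idx_val; the real table and
index names and the batch size would of course be your own:

    -- hypothetical schema; substitute your own table and index names
    DROP INDEX IF EXISTS idx_val;
    DELETE FROM t;

    BEGIN;                           -- one transaction per batch of ~1000 INSERTs
    INSERT INTO t(id, val) VALUES (1, 'a');
    INSERT INTO t(id, val) VALUES (2, 'b');
    -- ... rest of the batch ...
    COMMIT;
    -- repeat BEGIN / INSERT ... / COMMIT for each batch

    VACUUM;                          -- optional, done before the indices exist

    CREATE INDEX idx_val ON t(val);  -- recreate the indices
    ANALYZE;                         -- gather statistics for the query planner

The point of the ordering is that the bulk data is written only once, before any
index has to be maintained, and VACUUM (if used at all) runs on a database
without indices, so it has less to rewrite.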

Simon.
_______________________________________________
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
