Hello,
I'm using SQLite as the data-storage backend for my log-parsing application.
I have around 7 million records (about 1 GB of binary log, up to 35 million
records) to insert at once. I'm using prepared statements, huge
transactions, and (I hope) optimised PRAGMA settings: PRAGMA
journal_mode = OFF; PRAGMA cache_size = 50000; PRAGMA temp_store =
MEMORY. But inserting the whole dataset still takes more than 8
minutes :( .
Note 1: the DB is too big for :memory:.
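For reference, my load path boils down to roughly this sketch (Python's
sqlite3 module here for brevity; the file name, table schema, row count, and
the extra PRAGMA synchronous = OFF are illustrative stand-ins, not my exact
code):

```python
import sqlite3

# Minimal sketch of the bulk-load pattern described above: the same PRAGMAs,
# one prepared INSERT reused via executemany, and a single big transaction.
conn = sqlite3.connect("logs.db")
conn.execute("PRAGMA journal_mode = OFF")
conn.execute("PRAGMA synchronous = OFF")   # assumption: crash-safety not needed during the load
conn.execute("PRAGMA cache_size = 50000")
conn.execute("PRAGMA temp_store = MEMORY")

conn.execute("DROP TABLE IF EXISTS log")   # start from a clean table for this sketch
conn.execute("CREATE TABLE log (ts INTEGER, msg TEXT)")

# Stand-in for the parsed log records (a generator, so nothing is buffered).
rows = ((i, "line %d" % i) for i in range(100000))

with conn:  # wraps the whole batch in a single transaction
    conn.executemany("INSERT INTO log VALUES (?, ?)", rows)

n = conn.execute("SELECT COUNT(*) FROM log").fetchone()[0]
conn.close()
```

(If there are indexes on the table, I create them after the load rather than
before, so the inserts don't pay for index maintenance row by row.)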

I have no need to modify, or even add to, the data set after insertion; I
only need fast lookups and sorts.

Is there ANYTHING I can do to tweak it? (I don't mind digging into the code.) Help :(
Is SQLite really a good choice for this application?


BTW, inserting a smaller dataset, like 250 MB (~2 million records), takes
only 20 seconds. Why is the difference so huge?
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
