One transaction, as you did, is best.

I recently ran a test that performed well with a commit every 1M records.
Committing every 100,000 records slowed things down dramatically.
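For illustration, here is a minimal sketch of that batched-commit pattern in Python's sqlite3 module. The table name `t`, its schema, and the `bulk_insert` helper are hypothetical, not from the test described above; adjust for your own data.

```python
import sqlite3

def bulk_insert(db_path, rows, batch_size=1_000_000):
    """Insert rows, committing every batch_size records.

    Hypothetical table t(a, b); adapt to your schema.
    """
    conn = sqlite3.connect(db_path)
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS t (a INTEGER, b TEXT)")
    for i, row in enumerate(rows, start=1):
        cur.execute("INSERT INTO t VALUES (?, ?)", row)
        if i % batch_size == 0:
            conn.commit()  # periodic commit bounds the open transaction
    conn.commit()  # commit the final partial batch
    conn.close()
```

Python's sqlite3 opens an implicit transaction before the first INSERT, so all rows between commits land in one transaction, which is what avoids the per-row fsync cost.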

-----Original Message-----
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of
joe.fis...@tanguaylab.com
Sent: Monday, December 31, 2012 12:32 PM
To: sqlite-users@sqlite.org
Subject: [sqlite] 1.1GB database - 7.8 million records

Very impressive. With SQLite 3.7.14.1, it took 4 minutes to load a 1.5GB
MySQL dump with 7.8 million records.
Count(*) takes 5 seconds. It even runs on a USB key. Wow!
I also loaded a smaller one (a 33MB database [30 tables/dumps] in 10
seconds; the largest file had 200,000 records).

I wrapped the 7.8 million records in one [BEGIN TRANSACTION;] [COMMIT
TRANSACTION;] block.
I had to use VIM to edit the file.
Using a transaction is significantly faster with a large number of
inserts.
What's the rule of thumb on how many records per transaction?
Does it matter how many are used, or is one transaction OK?

Joe Fisher
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
