[EMAIL PROTECTED] wrote:
----- Original Message ----
From: Joe Wilson <[EMAIL PROTECTED]>
To: sqlite-users@sqlite.org
Sent: Monday, June 11, 2007 8:36:32 PM
Subject: Re: [sqlite] Database replication question


Large bulk inserts into a table with more than one index (implicit or explicit) are not SQLite's strong suit.

If you search the mailing list archives you'll find a few suggestions:

- "BEGIN EXCLUSIVE" (or is it "BEGIN IMMEDIATE"?) on the database file and then copy the file over - fastest way


What do you mean by "copy the file over"? A straight copy of the binary content 
of the file? If so, I can't really do that, because the versions of SQLite are 
potentially different on the two machines.


or

- increasing cache sizes
- pre-sorting the data in index order prior to bulk insert
- creating the other indexes after all the data is inserted (see the sketch below)
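Roughly, those suggestions combine like this (untested sketch; the table,
column and index names are made up):

  PRAGMA cache_size = 100000;     -- larger page cache for the big transaction
  DROP INDEX idx_t_b;             -- drop the secondary index before loading
  BEGIN;
  INSERT INTO t(a, b) VALUES (1, 'x');  -- rows pre-sorted in index order
  -- ... many more inserts ...
  COMMIT;
  CREATE INDEX idx_t_b ON t(b);   -- rebuild the secondary index once, at the end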

If you do not require a live backup you could use the copy trick
and augment that with a daily archive via
sqlite3 file.db .dump | gzip etc...

in case the database file becomes corrupted.
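Spelled out, the daily archive and a later restore might look like this (file
names and the date are illustrative):

  sqlite3 file.db .dump | gzip > file-$(date +%Y%m%d).sql.gz
  gunzip -c file-20070611.sql.gz | sqlite3 restored.db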


If the performance problem is with the secondary index, is there a way to "pause" indexing before a 
large bulk insert and then "resume" it later without rebuilding the entire index (to avoid doing: 
drop index + inserts + create index)? Maybe it's a stupid question, but I am guessing that there is some sort 
of version number for the rows in the db, so an index could play "catch-up"?

Nicolas
If you have incompatible SQLite versions you can still perform a snapshot replication: do a file copy, then run a background job that dumps the old-version database and rebuilds it with the latest version (see the sketch below).
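A rough sketch of that snapshot replication (untested; file names are made up,
and the copy should be taken while the database is locked or quiescent, as with
the BEGIN IMMEDIATE trick above):

  # On the source machine, with the old sqlite3:
  cp file.db snapshot.db
  sqlite3 snapshot.db .dump | gzip > snapshot.sql.gz
  # On the target machine, with the new sqlite3 (SQL text is version-portable):
  gunzip -c snapshot.sql.gz | sqlite3 rebuilt.db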
