Re: [sqlite] Database replication question

2007-06-11 Thread Joe Wilson
Large bulk inserts with more than one index (implicit or explicit)
are not SQLite's strong suit.

If you search the mailing list archives you'll find a few suggestions:

- "BEGIN EXCLUSIVE" (or is it "BEGIN IMMEDIATE"?) on the 
  database file and then copy the file over - fastest way

or

- increasing cache sizes 
- pre-sorting the data in index order prior to bulk insert
- creating the other indexes after all the data is inserted
  (a combined sketch follows below)
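
A sketch combining the last three suggestions, using the sn2uid schema
that appears later in this thread; the file names, the '|' separator,
and the cache figure are illustrative, not prescriptive:

    # pre-sort the raw data in primary-key (sn) order first
    sort -t'|' -k1,1 raw.txt > sorted.txt

    sqlite3 file.db <<'EOF'
    PRAGMA cache_size = 100000;
    BEGIN;
    -- only the implicit primary-key index exists at this point
    .import sorted.txt sn2uid
    COMMIT;
    -- build the secondary index once, after all the rows are in
    CREATE INDEX uidindex ON sn2uid(uid);
    EOF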

If you do not require a live backup you could use the copy trick
and augment that with a daily archive via 

 sqlite3 file.db .dump | gzip etc...

in case the database file becomes corrupted.
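
For example (the archive names are just an illustration):

    # nightly archive
    sqlite3 file.db .dump | gzip -c > file-$(date +%Y%m%d).sql.gz
    # restore into a fresh database if the original ever corrupts
    gunzip -c file-20070611.sql.gz | sqlite3 restored.db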

--- [EMAIL PROTECTED] wrote:
> I am trying to put in place a simple replication process to copy a
> database from one machine to another.
...
> My question is:
> 
> is there a way to do a select or a .dump so that when inserting the
> data on the other end, things will be faster? Or maybe there are some
> pragmas I can use that would improve performance?



 




Re: [sqlite] Database replication question

2007-06-11 Thread spaminos-sqlite
> - Original Message 
> From: Joe Wilson <[EMAIL PROTECTED]>
> To: sqlite-users@sqlite.org
> Sent: Monday, June 11, 2007 8:36:32 PM
> Subject: Re: [sqlite] Database replication question
> 
> 
> Large bulk inserts with more than one index (implicit or explicit) 
> are not SQLite's strong suit.
> 
> If you search the mailing list archives you'll find a few suggestions:
> 
> - "BEGIN EXCLUSIVE" (or is it "BEGIN IMMEDIATE"?) on the 
>   database file and then copy the file over - fastest way

What do you mean by "copy the file over"? A straight copy of the binary 
content of the file? If so, I can't really do that, because the versions 
of SQLite on the two machines are potentially different.

> 
> or
> 
> - increasing cache sizes 
> - pre-sorting the data in index order prior to bulk insert
> - creating the other indexes after all the data is inserted
> 
> If you do not require a live backup you could use the copy trick
> and augment that with a daily archive via 
> 
> sqlite3 file.db .dump | gzip etc...
> 
> in case the database file becomes corrupted.

If the performance problem is with the secondary index, is there a way to 
"pause" indexing before a large bulk insert and then "resume" it later 
without rebuilding the entire index (to avoid doing: drop index + inserts 
+ create index)? Maybe it's a stupid question, but I am guessing that 
there is some sort of version number for the rows in the db, so playing 
"catchup" on an index could work?

Nicolas

Re: [sqlite] Database replication question

2007-06-12 Thread drh
[EMAIL PROTECTED] wrote:
> 
> The table I have is something like
> CREATE TABLE sn2uid(
>   sn VARCHAR(100) NOT NULL,
>   uid INTEGER NOT NULL,
>   PRIMARY KEY (sn)
> );
> CREATE INDEX uidindex ON sn2uid(uid);
> 
> 
> is there a way to do a select or a .dump so that when inserting the 
> data on the other end, things will be faster? 

Do BEGIN EXCLUSIVE, then copy the disk file, then do COMMIT.
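
A sketch of those three steps from the command line, assuming a sqlite3
shell new enough to have the .shell dot-command (from application code,
just hold the same transaction open across the copy):

    sqlite3 live.db <<'EOF'
    BEGIN EXCLUSIVE;
    -- the exclusive lock is held here, so nothing can write mid-copy
    .shell cp live.db snapshot.db
    COMMIT;
    EOF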

--
D. Richard Hipp <[EMAIL PROTECTED]>





Re: [sqlite] Database replication question

2007-06-12 Thread John Stanton

[EMAIL PROTECTED] wrote:

> If the performance problem is with the secondary index, is there a way to
> "pause" indexing before a large bulk insert and then "resume" it later
> without rebuilding the entire index (to avoid doing: drop index + inserts
> + create index)? Maybe it's a stupid question, but I am guessing that
> there is some sort of version number for the rows in the db, so playing
> "catchup" on an index could work?
>
> Nicolas

If you have incompatible SQLite versions you can still perform snapshot 
replication by doing a file copy, then running a background job to dump 
the old-version database and rebuild it under the latest version.
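
For example (file names illustrative):

    # on the receiving machine, using its own sqlite3 version
    sqlite3 copied.db .dump | sqlite3 rebuilt.db
    mv rebuilt.db current.db   # swap the rebuilt file into place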





Re: [sqlite] Database replication question

2007-06-12 Thread drh
[EMAIL PROTECTED] wrote:
> > - Original Message 
> > From: Joe Wilson <[EMAIL PROTECTED]>
> > To: sqlite-users@sqlite.org
> > Sent: Monday, June 11, 2007 8:36:32 PM
> > Subject: Re: [sqlite] Database replication question
> > 
> > 
> > Large bulk inserts with more than one index (implicit or explicit) 
> > are not SQLite's strong suit.
> > 
> > If you search the mailing list archives you'll find a few suggestions:
> > 
> > - "BEGIN EXCLUSIVE" (or is it "BEGIN IMMEDIATE"?) on the 
> >   database file and then copy the file over - fastest way
> 
> What do you mean by "copy the file over"? A straight copy of the
> binary content of the file? If so, I can't really do that, because
> the versions of SQLite on the two machines are potentially different.

The binary file format for SQLite version 3 is (usually) the same
for all versions going back to version 3.0.0.  The exception is that
if you use some of the newer features introduced in later versions,
earlier versions of the library might not be able to read the file.
But your schema does not appear to be using any advanced features,
so I think you will always be OK.

One of the key features of SQLite is that we work really hard
to preserve backwards compatibility of both API and file format.

--
D. Richard Hipp <[EMAIL PROTECTED]>





Re: [sqlite] Database replication question

2007-06-12 Thread Joe Wilson
> If the performance problem is with the secondary index, is there a way
> to "pause" indexing before a large bulk insert and then "resume" it
> later without rebuilding the entire index (to avoid doing: drop index +
> inserts + create index)?

No
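
The usual workaround remains the drop/recreate pattern Nicolas was hoping
to avoid; a sketch against the sn2uid schema from earlier in the thread:

    sqlite3 file.db <<'EOF'
    DROP INDEX uidindex;
    BEGIN;
    -- ... the large bulk INSERTs into sn2uid go here ...
    COMMIT;
    -- rebuild the secondary index in a single pass
    CREATE INDEX uidindex ON sn2uid(uid);
    EOF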


 
