On Tue, 5 Aug 2008 18:57:10 +0200, you wrote:

>Hi everyone,
>I need to back up an SQLite database to a remote device for
>configuration redundancy. Because the connection between the
>devices is really slow, triggering rsync is not the best solution:
>although rsync is efficient, it transfers kilobytes of data just to
>determine what it has to update. Since the updates are always
>one-way (from the live database to the backup database), it should
>be sufficient to transfer only the updates.
>I can't find the right approach for my application. Something like
>capturing the queries that update the database and transmitting
>them to the remote system would fit the purpose, but teaching the
>database to do that seems very complicated to me.

I would add a "last updated" timestamp column to every table
you want to synchronize and maintain it with triggers (on
UPDATE and on INSERT).
Then SELECT the rows that changed since the last
synchronisation, in a format suitable for import on the other
side.

Alternatively, at every synchronisation, .dump your database
and run a diff between the new .dump output and the previous
version.
Transfer the diff and use it to patch the copy of the dump on
the other side, then rebuild the target database from the
patched .dump file.
Initially you transfer a full dump to build the target
database.
Update the status on the source machine only after a
successful update of the target.
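The dump-and-diff scheme can be sketched in Python too, with
Connection.iterdump() standing in for the shell's .dump and difflib
standing in for diff(1); on the target you would apply the delta
with patch(1) and rebuild the database from the patched dump
(names and data here are illustrative):

```python
import sqlite3
import difflib

def dump_lines(con):
    # iterdump() yields the same SQL statements as the CLI's .dump
    return list(con.iterdump())

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE config (id INTEGER PRIMARY KEY, value TEXT)")
src.execute("INSERT INTO config (value) VALUES ('a')")
src.commit()
old_dump = dump_lines(src)   # full dump: transferred once, kept on both sides

# Some later change on the live database...
src.execute("INSERT INTO config (value) VALUES ('b')")
src.commit()
new_dump = dump_lines(src)

# Only this delta crosses the slow link; the target patches its saved
# dump with it and rebuilds the backup database from the result.
delta = list(difflib.unified_diff(old_dump, new_dump, lineterm=""))
```

The delta contains only the added INSERT, not the whole dump, which
is the entire point on a slow link.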

Either way you need some fairly smart scripting. Not easy, but
probably feasible.

Both methods only work if your schema is stable.

>I'm stuck with my problem so perhaps here is someone with a really  
>clever idea.
>
>Sorry for my bad english,

No problem, it's clear.

>greetings from hamburg, germany
>Till

Greetings from Rotterdam, Netherlands.
-- 
  (  Kees Nuyt
  )
c[_]
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
