> From: Stephan Buchert [mailto:stephanb007 at gmail.com]
> Sent: Saturday, May 07, 2016 12:10 AM
> Copying the WAL files is probably more efficient than the SQL text solutions
> (considering that roughly 5 GB of binary data are added weekly), and it seems
> easy to implement, so I'll probably try it.
Thanks for the suggestions.
The session extension would work; all tables have rowids. The added
flexibility of applying inserts/updates to each database copy
independently is valuable.
Tracing the inserts/updates also seems a good idea, and it is available
with the present Sqlite version.
-Original Message-
From: sqlite-users-bounces at mailinglists.sqlite.org
[mailto:sqlite-users-boun...@mailinglists.sqlite.org] On Behalf Of Jim Morris
Sent: Friday, 06 May 2016 20:14
To: sqlite-users at mailinglists.sqlite.org
Subject: Re: [sqlite] Incremental backup/sync facility?
Doesn't this eliminate the use of prepared statements?
On 5/6/2016 11:10 AM, Jeffrey Mattox wrote:
At 14:43 on 06/05/2016, Simon Slavin wrote:
>On 6 May 2016, at 1:32pm, Stephan Buchert wrote:
>
>> The largest database file has now grown to about 180 GB. I need to have
>> copies of the files at at least two different places. The databases are
> updated regularly as new data from the satellites become available.
On 6 May 2016, at 3:40pm, Gerry Snyder wrote:
> One feature of SQLite -- the whole database in one file -- is normally an
> advantage but becomes less so when the file is huge.
Believe me. It's still a huge advantage. Have you ever tried to copy a MySQL
database off a non-working server by
We are using Sqlite for data from satellite Earth observations. It
works very well. Thanks to everybody contributing to Sqlite, above all
Dr. Hipp.
The largest database file has now grown to about 180 GB. I need to have
copies of the files at at least two different places. The databases are
updated regularly as new data from the satellites become available.
On 5/6/16, Hick Gunter wrote:
> No, you just have to log the bound parameters and a reference to the prepared
> statement (so the other side will know which statement to prepare).
> Or just log the statement & the parameters each time.
The sqlite3_trace() interface fills in the values for the parameters.
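A minimal Python sketch of this statement-logging approach (table names here are illustrative, not from the thread; sqlite3.Connection.set_trace_callback wraps SQLite's C-level trace interface, and in CPython the logged text has the bound parameter values filled in, as noted above):

```python
import sqlite3

# Log of executed SQL statements; because the trace interface fills in the
# bound parameter values, the log can be replayed verbatim on a replica.
log = []

con = sqlite3.connect(":memory:")
con.set_trace_callback(log.append)

con.execute("CREATE TABLE obs(id INTEGER PRIMARY KEY, val TEXT)")
con.execute("INSERT INTO obs(val) VALUES (?)", ("satellite sample",))
con.commit()

# Replay the log on a second database, skipping the transaction-control
# statements issued by the Python wrapper itself.
replica = sqlite3.connect(":memory:")
for stmt in log:
    if not stmt.lstrip().upper().startswith(("BEGIN", "COMMIT")):
        replica.execute(stmt)
replica.commit()
```

In the real setup the log would be shipped to the secondary site and replayed there instead of on a second in-memory connection.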
On 6 May 2016, at 1:32pm, Stephan Buchert wrote:
> The largest database file has now grown to about 180 GB. I need to have
> copies of the files at at least two different places. The databases are
> updated regularly as new data from the satellites become available.
>
> Having the copies of the file synced becomes increasingly tedious
> as their sizes increase. Ideal would be some kind of incremental
> backup/sync facility.
As an aside, this is how Apple syncs Core Data to iCloud (and then to multiple
iOS devices) if the backing store uses SQLite (the default). When a small
amount of data changes (which is common), the changes get sent out, not the
entire (mostly unchanged and potentially huge) database.
Jeff
-Original Message-
From: sqlite-users-bounces at mailinglists.sqlite.org
[mailto:sqlite-users-boun...@mailinglists.sqlite.org] On Behalf Of Stephan
Buchert
Sent: Friday, 06 May 2016 14:32
To: sqlite-users at mailinglists.sqlite.org
Subject: [sqlite] Incremental backup/sync facility?
Doesn't this eliminate the use of prepared statements?
On 5/6/2016 11:10 AM, Jeffrey Mattox wrote:
> As an aside, this is how Apple syncs Core Data to iCloud (and then to
> multiple iOS devices) if the backing store uses SQLite (the default). When a
> small amount of data changes (which is common), the changes get sent out,
> not the entire (mostly unchanged and potentially huge) database.
Gerry;
I trashed the email I was going to send. You had the same line of thought
as I did regarding chopping the file on a per-day basis, but what made me
trash it was that any auto-numbered PKs would be a hassle in new files,
unless that information was put into the new DB upon creation.
I agree.
On 06/05/16 05:32, Stephan Buchert wrote:
> Having the copies of the file synced becomes increasingly tedious
> as their sizes increase. Ideal would be some kind of
> incremental backup/sync facility.
Out of curiosity, would an approach of using multiple databases and
using ATTACH to "unify" them
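The ATTACH idea can be sketched as follows (file, schema, and table names are hypothetical, assuming one database file per period):

```python
import os
import sqlite3

# Hypothetical per-period database files, one per year of observations.
files = {"obs_2015.db": [("2015-06-01", 1.0)],
         "obs_2016.db": [("2016-05-06", 2.0)]}

for name, rows in files.items():
    if os.path.exists(name):
        os.remove(name)  # start fresh so the demo is repeatable
    con = sqlite3.connect(name)
    con.execute("CREATE TABLE obs(day TEXT, val REAL)")
    con.executemany("INSERT INTO obs VALUES (?, ?)", rows)
    con.commit()
    con.close()

# ATTACH "unifies" the files: one connection, one SQL namespace.
con = sqlite3.connect("obs_2016.db")
con.execute("ATTACH DATABASE 'obs_2015.db' AS y2015")
unified = con.execute(
    "SELECT day, val FROM obs "
    "UNION ALL SELECT day, val FROM y2015.obs "
    "ORDER BY day").fetchall()
con.close()
```

Only the current-period file would then need to be synced; older files are read-only and can be copied once.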
On 5/6/16, Richard Hipp wrote:
> On 5/6/16, Simon Slavin wrote:
>>
>> Believe it or not, the fastest way to synchronise the databases is not to
>> synchronise the databases. Instead you keep a log of the instructions
>> used
>> to modify the database.
>
> Or, this might be an even better solution.
On 5/6/16, Simon Slavin wrote:
>
> Believe it or not, the fastest way to synchronise the databases is not to
> synchronise the databases. Instead you keep a log of the instructions used
> to modify the database.
Or, this might be an even better solution. Note that the
sqlite3_trace() function (
On 5/6/16, Stephan Buchert wrote:
>
> A kind of hack-ish solution might be to update the primary database
> files in WAL mode, copy only the WAL file to the secondary place,
> and force a WAL checkpoint there. Would this work?
>
This sounds like the most promising solution to me. We'll think on it
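A small Python experiment with the WAL-shipping idea (file and table names are illustrative; it assumes the secondary starts from a byte-identical copy of the checkpointed primary, since the WAL frames only make sense against that exact database image):

```python
import os
import shutil
import sqlite3

primary, secondary = "primary.db", "secondary.db"
for f in (primary, secondary):
    for suffix in ("", "-wal", "-shm"):
        if os.path.exists(f + suffix):
            os.remove(f + suffix)

# 1. Create the primary in WAL mode and checkpoint it completely.
con = sqlite3.connect(primary)
con.execute("PRAGMA journal_mode=WAL")
con.execute("CREATE TABLE obs(id INTEGER PRIMARY KEY, val TEXT)")
con.commit()
con.execute("PRAGMA wal_checkpoint(TRUNCATE)")

# 2. Seed the secondary with a byte-identical copy of the checkpointed file.
shutil.copy(primary, secondary)

# 3. New data accumulates in primary.db-wal (autocheckpoint disabled).
con.execute("PRAGMA wal_autocheckpoint=0")
con.execute("INSERT INTO obs(val) VALUES ('new satellite data')")
con.commit()

# 4. Ship only the (small) WAL file to the secondary location...
shutil.copy(primary + "-wal", secondary + "-wal")

# 5. ...and force a checkpoint there to fold it into secondary.db.
con2 = sqlite3.connect(secondary)
con2.execute("PRAGMA wal_checkpoint(TRUNCATE)")
rows = con2.execute("SELECT val FROM obs").fetchall()
con2.close()
con.close()
```

In this toy run the secondary sees the new row without the main database file ever being re-copied; whether this stays safe over many cycles (crashes, partial copies, checkpoints on the primary) is exactly the open question in the thread.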
On 5/6/2016 5:32 AM, Stephan Buchert wrote:
> We are using Sqlite for data from satellite Earth observations. It
> works very well. Thanks to everybody contributing to Sqlite, above all
> Dr. Hipp.
>
> The largest database file has now grown to about 180 GB.
One feature of SQLite -- the whole database in one file -- is normally an
advantage but becomes less so when the file is huge.