Re: WAL mode trick.
I think you would want to complete a checkpoint and then do the backup,
ensuring that no checkpoints are done during your backup window. This way,
you know that the transactions committed prior to the backup are in the
file being backed up.
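A minimal sketch of that sequence via the C API (path and error handling
illustrative; a plain "PRAGMA wal_checkpoint(TRUNCATE)" from SQL does the
same thing):

    #include <sqlite3.h>

    /* Force a full checkpoint so the main database file contains every
       committed transaction, and truncate the WAL to zero bytes. If this
       returns SQLITE_OK, the main file is safe to copy, provided no
       further checkpoint runs during the copy. */
    int checkpoint_before_backup(const char *path) {
        sqlite3 *db = 0;
        int rc = sqlite3_open(path, &db);
        if (rc == SQLITE_OK) {
            int nLog = 0, nCkpt = 0;
            rc = sqlite3_wal_checkpoint_v2(db, "main",
                                           SQLITE_CHECKPOINT_TRUNCATE,
                                           &nLog, &nCkpt);
        }
        sqlite3_close(db);
        return rc;
    }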
regards,
Adam
On Sat, May 7, 2016
Hi Rob,
I think Clemens' suggestion may be worth investigating, in case you do not
want to stop the updates (which probably means a change in your workflow
and some effort at other places anyway).
I think this may work:
- Use WAL, and turn off automatic checkpointing
(https://www.sqlite.org/wal.html)
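A sketch of that setup (assuming the pragmas are run on every connection
that writes; sqlite3_wal_autocheckpoint() is the C-level equivalent):

    #include <sqlite3.h>

    /* Switch to WAL and disable the automatic checkpoint that normally
       fires when the WAL grows past ~1000 pages, so the main db file
       only changes when a checkpoint is requested explicitly. */
    int configure_wal(sqlite3 *db) {
        int rc = sqlite3_exec(db, "PRAGMA journal_mode=WAL;", 0, 0, 0);
        if (rc == SQLITE_OK) {
            rc = sqlite3_exec(db, "PRAGMA wal_autocheckpoint=0;", 0, 0, 0);
        }
        return rc;
    }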
On 4 May 2016 at 20:39, Rob Willett wrote:
> Clemens,
>
> We have 8GB of memory which is the most our VPS provider allows. We'd like
> 32GB but it's not an option for us. Our desktops have more than that, but
> the VPS provider is reasonably priced :)
>
> We hadn't considered the WAL mode, my conc
Yes, realised after I sent the e-mail that I hadn't said we had
created the destination file in which to run the command. You cannot
actually run rsync --inplace if the destination file doesn't exist, we
found that out earlier ;) So it was a reasonable test, though only one
test. We need to do mor
Mmmm… Initial tests are not conclusive, it does look as if using
rsync --inplace does speed things up but nowhere near as much as we
anticipated.
Testing consisted of a 6GB test database which is a backup copy from a
few months ago.
We timed copying the database over using cp
# time cp tfl.s
Hi,
We did look at this before, and discarded the idea but I can't
remember why.
I've just looked again and seen the --inplace option which I
wasn't aware of. That *might* help and be an interesting solution. We
know we can make cp work, though with a little downtime. We'll
investigate rsync …
I use rsync to back up a 400MB sqlite db to a remote server. This is not
fast (which is a don't-care problem in my context). You may want to test
keeping a log of the SQL commands sent to the database to get a replay-backup
remotely, as rsync handles text better than binary files (at least the version
I use d
On Thu, May 5, 2016 at 9:38 AM, Rob Willett
wrote:
> Mmmm… Initial tests are not conclusive, it does look as if using rsync
> --inplace does speed things up but nowhere near as much as we anticipated.
>
> Testing consisted of a 6GB test database which is a backup copy from a few
> months ago.
>
>
Instead of cp, rsync might help; it is able to send delta changes.
On Wed, May 4, 2016 at 10:55 AM, Rob Willett
wrote:
> Scott,
>
> OK, We can see how to do this (I think). Our app is written in Perl and we'd
> just need to capture the command we write down. The only issue I can think
> of is the
On 05/05/2016 12:45 AM, Rob Willett wrote:
> Ryan,
>
> Ah! The penny drops, we didn't realise that with the backup API.
> That explains a great deal. We must have missed that in the docs. Blast.
There is one exception to this:
If the database is written to via the same database handle that is
performing the backup, the backup is not restarted; those changes are
carried through into the backup copy as it proceeds.
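For reference, a sketch of the backup loop being discussed (chunk size and
names illustrative). Between sqlite3_backup_step() calls, a write from any
*other* connection restarts the copy from scratch, which is why a busy 10GB
database may never finish:

    #include <sqlite3.h>

    int backup_db(const char *src_path, const char *dst_path) {
        sqlite3 *src = 0, *dst = 0;
        int rc = sqlite3_open(src_path, &src);
        if (rc == SQLITE_OK) rc = sqlite3_open(dst_path, &dst);
        if (rc == SQLITE_OK) {
            sqlite3_backup *b =
                sqlite3_backup_init(dst, "main", src, "main");
            if (b) {
                do {
                    rc = sqlite3_backup_step(b, 1000); /* 1000 pages */
                    if (rc == SQLITE_BUSY || rc == SQLITE_LOCKED)
                        sqlite3_sleep(250);            /* back off */
                } while (rc == SQLITE_OK || rc == SQLITE_BUSY
                         || rc == SQLITE_LOCKED);
                sqlite3_backup_finish(b);
            }
            rc = sqlite3_errcode(dst); /* final status lives in dst */
        }
        sqlite3_close(src);
        sqlite3_close(dst);
        return rc; /* SQLITE_OK on success */
    }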
On 2016/05/04 8:38 PM, Dominique Devienne wrote:
> On Wed, May 4, 2016 at 5:51 PM, Scott Robison
> wrote:
>
This is going to become a bigger problem for us as the database will
only get bigger so any advice welcomed.
>> Perhaps, rather than backing up the live data, you create an appen
On Wed, May 4, 2016 at 5:51 PM, Scott Robison
wrote:
> > > This is going to become a bigger problem for us as the database will
> > > only get bigger so any advice welcomed.
>
> Perhaps, rather than backing up the live data, you create an append-only
> log of each and every query you send to the
On 2016/05/04 2:35 PM, Rob Willett wrote:
> Dominque,
>
> We put together a quick C program to try out the C API a few weeks
> ago, it worked but it was very slow, from memory not much different to
> the sqlite command line backup system. We put it on the back burner as
> it wasn't anywhere ne
Dan,
That's NOT the case for us, so that explains why things are slow. Mmm…
as I recall we never did get a backup to finish… Now we know why :)
Rob
On 4 May 2016, at 18:53, Dan Kennedy wrote:
> On 05/05/2016 12:45 AM, Rob Willett wrote:
>> Ryan,
>>
>> Ah! The penny drops, we didn't realise
Scott,
OK, We can see how to do this (I think). Our app is written in Perl and
we'd just need to capture the command we write down. The only issue I
can think of is the prepare statement and making sure we capture the
right SQL command. W
We'll dig into it and have a look,
Thanks for taking t
Scott,
That's an interesting idea. Is there an option in SQLite to do this for
us, or do we have to write a small shim in our app?
I like the idea of this as it's simple and elegant.
Rob
On 4 May 2016, at 16:51, Scott Robison wrote:
>>> This is going to become a bigger problem for us as the dat
Ryan,
Ah! The penny drops, we didn't realise that with the backup API.
That explains a great deal. We must have missed that in the docs. Blast.
We've looked around for other providers in Europe and the cost
differences are very high. We need to be in the EU for various data
protection reas
Rob,
I have a continuously running application that simply writes the same data to
two different databases; one in the primary location and the other in the
backup location. This is obviously not a perfect solution (a man with two
watches never really knows what time it is) but it's good enough
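A sketch of that pattern (connections and error handling illustrative):
every write is simply executed against both databases, and a failure on
the second one leaves the copies out of step, the two-watches problem
just mentioned:

    #include <sqlite3.h>

    /* Apply the same write to the primary and the backup database. */
    int dual_write(sqlite3 *primary, sqlite3 *backup, const char *sql) {
        int rc = sqlite3_exec(primary, sql, 0, 0, 0);
        if (rc == SQLITE_OK) {
            rc = sqlite3_exec(backup, sql, 0, 0, 0);
        }
        return rc;
    }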
On Wed, 04 May 2016 11:44:17 +0100
"Rob Willett" wrote:
> Hi,
>
> We think we know the answer to this, but we'll ask the question
> anyway.
>
> We're trying to back up a 10GB live running database
> "as-fast-as-we-possibly-can" without stopping updates coming in. The
> updates come every 2-3
Simon,
Thanks for the reply, we were a little surprised at the UNIX cp speed as
well. We investigated it with the provider in the past over other file
system speed issues and it turned out that they rate limit the IO ops,
so you can't consume them all. Our provider runs many servers out of
the
On 4 May 2016, at 1:35pm, Rob Willett wrote:
> I think that unless we can get the C API to back up in a time close to that
> of a cp, the easiest solution is to suspend updates for 10-15 mins
> out-of-hours and do a simple cp from there. Sometimes a change in workflow
> might be the easiest a
Rob Willett wrote:
> We're trying to back up a 10GB live running database "as-fast-as-we-
> possibly-can" without stopping updates coming in.
How much memory do you have? I guess you can't simply read the entire
database file to force it into the file cache?
In WAL mode, a writer does not block r
Clemens,
We have 8GB of memory which is the most our VPS provider allows. We'd
like 32GB but it's not an option for us. Our desktops have more than
that, but the VPS provider is reasonably priced :)
We hadn't considered the WAL mode, my conclusion is that a simple
change to our workflow is actu
Dominque,
We put together a quick C program to try out the C API a few weeks ago,
it worked but it was very slow, from memory not much different to the
sqlite command line backup system. We put it on the back burner as it
wasn't anywhere near quick enough.
We hadn't seen or found or even looke
On Wed, May 4, 2016 at 1:26 PM, Dominique Devienne
wrote:
> On Wed, May 4, 2016 at 1:22 PM, Dominique Devienne
> wrote:
>> On Wed, May 4, 2016 at 1:13 PM, Rob Willett
>> wrote:
>>> Thanks for the reply,
>>>
>>> Yes Example 2 in https://www.sqlite.org/backup.html is what we are talking
>>> abou
On Wed, May 4, 2016 at 1:22 PM, Dominique Devienne
wrote:
> On Wed, May 4, 2016 at 1:13 PM, Rob Willett
> wrote:
>> Thanks for the reply,
>>
>> Yes Example 2 in https://www.sqlite.org/backup.html is what we are talking
>> about. It was very slow to run for us.
>
> Then maybe https://www.sqlite.o
On Wed, May 4, 2016 at 12:38 PM, Dominique Devienne
wrote:
> On Wed, May 4, 2016 at 5:51 PM, Scott Robison
> wrote:
>
> > > > This is going to become a bigger problem for us as the database will
> > > > only get bigger so any advice welcomed.
> >
> > Perhaps, rather than backing up the live data
On Wed, May 4, 2016 at 1:13 PM, Rob Willett
wrote:
> Thanks for the reply,
>
> Yes Example 2 in https://www.sqlite.org/backup.html is what we are talking
> about. It was very slow to run for us.
Then maybe https://www.sqlite.org/rbu.html is your last chance.
Although I don't see how it could be f
On Wed, May 4, 2016 at 12:44 PM, Rob Willett
wrote:
> We're trying to back up a 10GB live running database
[...]
> 2. If we use the command line sqlite .dump > it
> works, but it's very slow.
That's going to SQL text.
While .backup is page-based, and binary. But not incremental in the
shell, I believe.
Thanks for the reply,
Yes, Example 2 in https://www.sqlite.org/backup.html is what we are
talking about. It was very slow to run for us.
Rob
On 4 May 2016, at 12:08, Dominique Devienne wrote:
> On Wed, May 4, 2016 at 12:44 PM, Rob Willett
> wrote:
>> We're trying to back up a 10GB live running
On Wed, May 4, 2016 at 11:47 AM, Rob Willett
wrote:
> Scott,
>
> That's an interesting idea. Is there an option in SQLite to do this for us,
> or do we have to write a small shim in our app?
>
> I like the idea of this as it's simple and elegant.
It would require a little extra work on your part.
Hi,
We think we know the answer to this, but we'll ask the question
anyway.
We're trying to back up a 10GB live running database
"as-fast-as-we-possibly-can" without stopping updates coming in. The
updates come every 2-3 mins, and write a chunk of data in. We can't
really stop the database upd
> > This is going to become a bigger problem for us as the database will
> > only get bigger so any advice welcomed.
Perhaps, rather than backing up the live data, you create an append-only
log of each and every query you send to the database. Should you need to
restore, you replay the log of stat
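One way to build such a log without touching every call site is a trace
callback. This sketch (file name illustrative, and it assumes a SQLite
recent enough to have sqlite3_trace_v2()) appends each statement, with
bound parameters expanded, to a text file that can be replayed against a
fresh copy of the last full backup:

    #include <sqlite3.h>
    #include <stdio.h>

    static int log_stmt(unsigned type, void *ctx, void *p, void *x) {
        if (type == SQLITE_TRACE_STMT) {
            /* Expand bound parameters so the logged SQL replays as-is. */
            char *sql = sqlite3_expanded_sql((sqlite3_stmt *)p);
            if (sql) {
                fprintf((FILE *)ctx, "%s;\n", sql);
                fflush((FILE *)ctx);
                sqlite3_free(sql);
            }
        }
        return 0;
    }

    /* Register the logger on a connection; every statement it runs
       from now on is appended to the file at 'path'. */
    int attach_query_log(sqlite3 *db, const char *path) {
        FILE *log = fopen(path, "a");
        if (!log) return SQLITE_ERROR;
        return sqlite3_trace_v2(db, SQLITE_TRACE_STMT, log_stmt, log);
    }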