Dan,

That's NOT the case for us, so that explains why things are slow. Mmm, 
as I recall we never did get a backup to finish. Now we know why :)
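
If we dig that C program out again, the key thing (as Dan explains 
below) is that every application write has to go through the same 
sqlite3* handle that we pass to sqlite3_backup_init() as the source. 
Roughly something like this, an untested sketch with our own placeholder 
names (backup_live_db, zBackupFile):

    /* Untested sketch: copy the live database into zBackupFile in
    ** chunks of 100 pages.  pLive must be the SAME handle the
    ** application uses for its writes; per Dan's note, writes made
    ** through this handle do not restart the backup, they are simply
    ** mirrored into the destination as the copy proceeds. */
    #include <stdio.h>
    #include "sqlite3.h"

    int backup_live_db(sqlite3 *pLive, const char *zBackupFile){
      sqlite3 *pDest = 0;
      sqlite3_backup *pBk;
      int rc = sqlite3_open(zBackupFile, &pDest);

      if( rc==SQLITE_OK ){
        pBk = sqlite3_backup_init(pDest, "main", pLive, "main");
        if( pBk ){
          do{
            rc = sqlite3_backup_step(pBk, 100);      /* copy 100 pages */
            fprintf(stderr, "%d of %d pages left\n",
                    sqlite3_backup_remaining(pBk),
                    sqlite3_backup_pagecount(pBk));
            if( rc==SQLITE_BUSY || rc==SQLITE_LOCKED ) sqlite3_sleep(250);
          }while( rc==SQLITE_OK || rc==SQLITE_BUSY || rc==SQLITE_LOCKED );
          sqlite3_backup_finish(pBk);
        }
        rc = sqlite3_errcode(pDest);
      }
      sqlite3_close(pDest);
      return rc;
    }

If it behaves as described we wouldn't even need to pause the feed while 
the copy runs, since the writes would land in both files.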

Rob

On 4 May 2016, at 18:53, Dan Kennedy wrote:

> On 05/05/2016 12:45 AM, Rob Willett wrote:
>> Ryan,
>>
>> Ahhhhh! The penny drops: we didn't realise that about the backup 
>> API. That explains a great deal. We must have missed that in the 
>> docs. Blast.
>
>
> There is one exception to this:
>
> If the database is written to via the same database handle that is 
> being used as the source db by the backup API, then the backup is not 
> restarted. In this case, if any pages that have already been 
> transferred to the backup db are modified, the new versions are written 
> into the backup db at the same time as the source is updated.
>
> Dan.
>
>
>
>
>
>>
>> We've looked around for other providers in Europe and the cost 
>> differences are very high. We need to be in the EU for various data 
>> protection reasons. Until now we haven't had any issues, as we 
>> don't move a significant amount of data around in a very short 
>> period of time, so the rate-limited I/O has not been a problem.
>>
>> One of our options is to do what you suggest with a second database 
>> server and run them hot/warm. We had already thought of that but not 
>> got around to it, as the setup time is quite high (we need a bank 
>> of servers, feeding things from one server to another), but our 
>> immediate issue is simply copying the 10GB database. The downside of 
>> the second server is moving the 10GB data files back across the 
>> internet to the failed server afterwards. Rebuilding from scratch is 
>> a pain: it takes around 2-3 weeks, as we have to process every file 
>> again (circa 200,000) in order and each file takes around 4-8 secs 
>> to run.
>>
>> I think the backup solution is the tried-and-tested Keep-It-Simple 
>> shell script: we pause the queue upstream, which stops the update 
>> process, do a cp, and then restart the queue. All of this is 
>> doable in a shell script.
>>
>> Rob
>>
>> On 4 May 2016, at 18:22, R Smith wrote:
>>
>>> On 2016/05/04 2:35 PM, Rob Willett wrote:
>>>> Dominique,
>>>>
>>>> We put together a quick C program to try out the C API a few weeks 
>>>> ago. It worked, but it was very slow; from memory, not much different 
>>>> to the sqlite command-line backup system. We put it on the back 
>>>> burner as it wasn't anywhere near quick enough.
>>>
>>> You do realize that the backup API restarts the backup once the 
>>> database content changes, right? I'm sure that at the data rates and 
>>> update frequency you describe, that backup would never finish. The 
>>> backup API is quite fast if your destination file is on a not-too-slow 
>>> drive, but you will have to stop the incoming data to allow it to 
>>> finish.
>>>
>>> As an aside - you need a better provider, but that said, if it 
>>> were me, I would get two sites up from two different providers, one 
>>> live, one stand-by, both the cheap sort so costs stay minimal 
>>> (usually two cheap ones are much cheaper than the next-level beefy 
>>> one). Feed all updates/inserts to both sites - one is then the 
>>> backup of the other, not only data-wise, but it can also easily be 
>>> switched to by a simple DNS redirect should the first site/provider go 
>>> down for any reason. The second site can easily be interfered with 
>>> / copied from / backed up / whatever without affecting the service 
>>> to the public.
>>>
>>> I only do this with somewhat critical sites, but your use-case 
>>> sounds like it might benefit from it. My second choice would be to 
>>> simply stop operations at a best-case time-slot while the backup / 
>>> copy completes.
>>>
>>> Cheers,
>>> Ryan
>>>
>>>
>>>
>
