Thanks for the suggestion, Jeff!

I looked into this, and it’s a bit of overkill for our situation (I’ll explain 
why), so I’ve come up with an approach that works for us, which I’ll describe 
below for those who are interested.

First, regarding the overkill: the main thing a mirror backup avoids is the 
brief nightly pause while a full backup runs. Instead, the mirror setup sends 
log files every few minutes to the mirror server, where they’re integrated into 
a running copy of the database. Each log is then tossed and a new log is 
created, sent, and so on. You end up with a fully replicated, running database 
(lagging only by the interval you’ve set for the journal files), which can in 
turn be backed up to the cloud by enabling full backups on the mirror site.

The nightly pause for a full backup on our main server isn’t a big issue for 
us; the three distinct databases on this server are mostly used during working 
hours. One system, used for corporate training, does get late-night use, but 
much of the time people are logged in they’re watching 30-minute videos, so 
having database transactions pause for 20 seconds or so in the middle of the 
night is unlikely to be noticed even by people who are online.

Also, the mirror-via-new-journals setup only works with 4D Server; we’re 
running web licenses on regular 4D installs, and all user access is via 
browsers. (You’ll never see me post on here about glitches in list boxes, etc., 
but I do write a lot of JavaScript for the interface.) We would need to be 
running at least two 4D Server licenses to do this, including a “running” 
instance on the backup site.

The problem I ran into yesterday is that cloud backup services (I tried a few) 
will copy the .journal log file during a scheduled backup but will ignore that 
file if you implement a Continuous Data Protection (CDP) plan. So the log file 
wasn’t getting backed up remotely throughout the day. Our workaround is a 
script that does the following (you could do this with shell scripts, Python, 
etc. … we took advantage of Automator since we’re running on macOS):

1) Every ten minutes, our script duplicates the log file.  So, if your db is 
called mydatabase, you end up with something like mydatabase copy.journal 
alongside mydatabase.journal
2) The script then renames the duplicate log file, changing the .journal 
extension to .txt. That gives you mydatabase copy.txt, which is the secret to 
tricking the backup software into not seeing this as a temporary system file.
3) To keep from having lots of copies of the log stored locally, the script 
moves mydatabase copy.txt to a separate folder that’s being monitored for CDP, 
overwriting any previous version of mydatabase copy.txt that’s in there.  (We 
could also have left it in the main backup folder, and simply deleted the old 
mydatabase copy.txt before creating the new log duplicate.)
4) The CDP service happily notices the .txt file and immediately uploads it to 
the remote storage site. The service we’re using keeps multiple versions of 
same-named files (mydatabase copy.txt) for 30 days, although we should only 
need the latest one, since it represents the full log since the last full 
backup.
5) Full backups take place around 3 AM Eastern time, and those are moved to the 
remote server right away.
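For anyone who wants to roll their own, steps 1–3 above can be sketched as a 
small POSIX shell function. This is a minimal sketch, not our actual Automator 
workflow; the `ship_journal` name and the directory arguments are placeholders:

```shell
#!/bin/sh
# Sketch of the every-ten-minutes journal copy. Arguments:
#   $1  folder containing the live journal (the database folder)
#   $2  folder the CDP service is monitoring
#   $3  database name, e.g. "mydatabase"
ship_journal() {
    db_dir="$1"
    cdp_dir="$2"
    db_name="$3"

    # 1) Duplicate the live journal alongside the original.
    cp "$db_dir/$db_name.journal" "$db_dir/$db_name copy.journal"

    # 2) Rename the duplicate from .journal to .txt so the backup
    #    software stops treating it as a temporary system file.
    mv "$db_dir/$db_name copy.journal" "$db_dir/$db_name copy.txt"

    # 3) Move it into the CDP-watched folder, overwriting the
    #    previous ten-minute copy.
    mv -f "$db_dir/$db_name copy.txt" "$cdp_dir/$db_name copy.txt"
}
```

launchd, cron, or an Automator calendar alarm can call this every ten minutes, 
once per database.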

The above is done for all three databases.

All static files (web pages, SSL certs, etc.) are replicated in advance on the 
remote disaster-recovery server; we’re actually doing this in a partition on 
one of our video servers, which mostly run Wowza video software. In the event 
that our main server goes offline for any extended period (e.g. hurricane, 
earthquake), all we need to do is fire up 4D instances on the disaster-recovery 
system, input our licenses, and download the last full backup and last log from 
the remote backup. Restore, integrate, and voilà, we’re back online.

There are several domain names pointing to our IP (beyond the three distinct 
systems, different training customers like to have “vanity” URLs), and we only 
control the DNS for some of them. For now, we recommend to our customers that 
when a hurricane is anticipated they shorten the time-to-live (TTL) on their 
DNS records to 5 minutes; that way, if we have to change IPs, the change will 
be picked up at the DNS level almost immediately. I’m looking into whether 
there’s a way to do that part centrally, so that traffic to our IP is 
automatically rerouted to the backup site without changing individual DNS 
entries.

Ron
__

Ron Rosell
President
StreamLMS



> On Sep 11, 2017, at 7:26 AM, Jeffrey Kain via 4D_Tech <4d_tech@lists.4d.com> 
> wrote:
> 
> You should look into setting up a mirror backup. You could write code to 
> create a new journal file every 10 minutes (one line of code, not counting 
> the scheduling loop), and then a (.bat/python/vba) script to transfer that 
> journal file to the mirror server. On the mirror, integrate the journal and 
> then copy it to a journal archive folder, which gets backed up safely to a 
> remote site, Amazon S3, wherever.
> 
> Schedule a nightly 4D backup to run on the mirror, and ship it off site as 
> well, and you'll be able to completely recover to within 10 minutes of the 
> disaster, even if your entire data center gets destroyed.
> 
> Jeff
> 
> --
> Jeffrey Kain
> jeffrey.k...@gmail.com
> 
> 
> 
> 
>> On Sep 10, 2017, at 3:28 PM, Ronald Rosell via 4D_Tech 
>> <4d_tech@lists.4d.com> wrote:
>> 
>> Ideally we’d like the log files to be backed up remotely far more frequently 
>> than the once-a-day full backups.  Every ten minutes should do it.  That 
>> way, in the event of a catastrophic failure we could use the previous 
>> night’s backup and the log file to reconstruct almost all of our data (up to 
>> the 10-minute window).  
> 

**********************************************************************
4D Internet Users Group (4D iNUG)
FAQ:  http://lists.4d.com/faqnug.html
Archive:  http://lists.4d.com/archives.html
Options: http://lists.4d.com/mailman/options/4d_tech
Unsub:  mailto:4d_tech-unsubscr...@lists.4d.com
**********************************************************************
