Re: [Off] Remote backup

2017-09-12 Thread Jeffrey Kain via 4D_Tech
Don't use 4D's code.  Read the tech note to understand the concepts, but that 
code is not good for use in the real world. 

And don't put your mirror backup code in a component either -- you'll want to 
prevent your mirror from starting a backup in the middle of an integration, and 
the only way to do this is with a semaphore. And you can't share a semaphore 
between the host and the component.
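
To make that concrete, the guard amounts to something like this (a rough sketch, 
not the tech note's code; the semaphore name and the $journalPath variable are 
invented). Both the integration method and the backup method on the mirror take 
the same semaphore, so a backup can never start in the middle of an integration:

   // Mirror: integrate one journal, holding off any backup while we work
   If (Not(Semaphore("mirror_busy";36000)))  // wait up to 10 minutes (36000 ticks)
      INTEGRATE MIRROR LOG FILE($journalPath)
      CLEAR SEMAPHORE("mirror_busy")
   End if

   // Mirror: the scheduled backup takes the same semaphore before running
   If (Not(Semaphore("mirror_busy";36000)))
      BACKUP
      CLEAR SEMAPHORE("mirror_busy")
   End if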

--
Jeffrey Kain
jeffrey.k...@gmail.com

> On Sep 12, 2017, at 4:01 PM, Benedict, Tom via 4D_Tech <4d_tech@lists.4d.com> 
> wrote:
> 
> there is a 4D Tech Note which includes all the code you need to set this up. 
> But it only supports file share or web service as the file transport 
> mechanism. Easy enough to use LEP to call RoboCopy or whatever instead, though.

**
4D Internet Users Group (4D iNUG)
FAQ:  http://lists.4d.com/faqnug.html
Archive:  http://lists.4d.com/archives.html
Options: http://lists.4d.com/mailman/options/4d_tech
Unsub:  mailto:4d_tech-unsubscr...@lists.4d.com
**

Re: [Off] Remote backup

2017-09-12 Thread Benedict, Tom via 4D_Tech
Bob Miller writes:

>[Server#1] Runs 4D Server that users are actively using.  Code in a loop on 
>the 4D Server calls
>"New Log File" every 10 minutes.  Code in the loop then calls LEP to a script 
>to transfer the last journal file to [Server#2].
>
>[Server#2] Runs 4D Server that users aren't aware of.  Code in a loop on the 
>4D Server detects
>that a complete journal file has arrived (as it will every ten minutes, my 
>uncertainty is how it knows the file transfer is
>complete) and integrates it using INTEGRATE MIRROR LOG FILE.  Code in the loop 
>then calls LEP
>to a script to transfer this journal file to [Server#3].  4D Backup runs on 
>this machine and sends its backup file to [Server#3]

>[Server#3] This is a repository, nothing is actively running on it, it could 
>be iCloud, Amazon S3, etc.

>Do I have this basically correct?

Yes.

>Then some technical questions:

>It would seem that Server#1 never runs 4D Backup, because any restoration
>would be done using Server#2.  Nevertheless, if Server#1 started up and
>somehow choked, a "normal" behavior would be to restore from backup and
>integrate the log file.  Since this behavior would be turned off, what is
>the procedure you use if Server #1 won't start?  Do you copy the mirror on
>Server #2 to Server#1?

Yes.

>How do you handle the timing, so that [Server#2] knows that the file
>transfer is complete and it is OK to integrate the log file?  Does
>[Server#1] somehow send a 'I'm Done' message to [Server#2] in your
>implementation?

I think that 4D will complain that the log file is 'busy'. I must confess I've 
never encountered this case in 10+ years of running multiple mirrors. If I 
recall correctly, our system copies the file with a 'temp' name, then when the 
copy is 'done' it changes the name. That limits the exposure considerably. It's 
pretty much an 'instant' name change, so the mirror never tries to integrate a 
'busy' file.
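
Purely as an illustration of that pattern (the variable names are invented, and 
it assumes the mirror's drop folder is mounted as a file share), the ship side 
boils down to:

   // Copy the closed journal to the share under a temporary name...
   COPY DOCUMENT($journalPath;$shareFolder;$name+".tmp")
   // ...then rename it in place; the mirror ignores anything ending in .tmp
   MOVE DOCUMENT($shareFolder+$name+".tmp";$shareFolder+$name+".journal")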

>What do you do if [Server#1] dies to now point users to [Server#2],
>assuming the Server application is a built application so that the server
>address and port number are built into the (built) 4D Client?

There are a number of ways to deal with this at the hostname/DNS level, but 
we've never had to do it. Surprisingly, we never had any hardware failures when 
we used dedicated hardware, and since moving to SAN/VM there have been a few 
drive failures, but they never affected operations thanks to the redundancy and 
'auto-move' features of the VM environment.

BTW, there is a 4D Tech Note which includes all the code you need to set this 
up. But it only supports file share or web service as the file transport 
mechanism. Easy enough to use LEP to call RoboCopy or whatever instead, though.
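
For example, the LEP call could be as simple as this (a rough sketch; the 
folders, the file name and the share are all invented, and robocopy's arguments 
are source folder, destination folder, then the file name):

   // Ship one journal segment to the mirror's drop folder with RoboCopy
   C_TEXT($cmd)
   $cmd:="robocopy E:\\Journals \\\\mirror\\JournalDrop journal_000123.journal /Z"
   LAUNCH EXTERNAL PROCESS($cmd)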

Enterprise systems can't afford not to be mirrored. There are lots of white 
papers available which outline DR scenarios and potential solutions for the 
enterprise. Smaller-than-enterprise systems should seriously look at the very 
modest cost of deploying a 4D mirror. An investment of a few thousand dollars 
will provide a viable DR solution. Business interruptions due to natural 
disasters are very expensive to recover from. Being able to assure your business 
managers that the data they depend on is safe, secure and available to deploy 
anywhere they need it is extremely compelling. Setting up a 4D mirror is a 
simple, low-cost way to satisfy that requirement.

I've advocated that 4D should build the mirroring capability into 4D Server, 
rather than providing a component. That would raise it to a higher level of 
support and more systems would use mirroring. (Maybe they have? I haven't been 
following the v16 feature set closely, as I'm stuck on v13.x.)

HTH,

Tom Benedict
Optum, Inc



Re: [Off] Remote backup

2017-09-12 Thread Jeffrey Kain via 4D_Tech
Yes - you have it correct.

> It would seem that Server#1 never runs 4D Backup, because any restoration 
> would be done using Server#2.  Nevertheless, if Server#1 started up and 
> somehow choked, a "normal" behavior would be to restore from backup and 
> integrate the log file.  Since this behavior would be turned off, what is 
> the procedure you use if Server #1 won't start?  Do you copy the mirror on 
> Server #2 to Server#1?

- Yes, you'll never back up server #1 and you'll disable the automatic restore 
after crash feature.

- If server 1 crashes and you bring it up, 4D analyzes its current journal file 
and if any operations are not integrated in the data file, they are 
automatically integrated.

- If server 1 won't start (e.g. damage is detected or the journal won't integrate 
for some reason), you'll need a recovery procedure.

> How do you handle the timing, so that [Server#2] knows that the file 
> transfer is complete and it is OK to integrate the log file?  Does 
> [Server#1] somehow send a 'I'm Done' message to [Server#2] in your 
> implementation?

You could send a message from #1 to #2. We just use an old-fashioned lock file. 
Write the lock file, copy the journal file, then delete the lock file. You 
could also copy the file with a .tmp extension, and when done rename .tmp to 
.journal (and your code would filter out .tmp files to prevent them from 
integrating). 

In our server, we ship a new journal file every 30 seconds. They are numbered 
sequentially, so you can sort them by file name and integrate a batch in the 
correct order.
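
A rough sketch of that mirror-side pass, under those assumptions (the folder 
names are invented, and real code would add error handling plus the semaphore 
guard against the scheduled backup):

   // Mirror: integrate every complete journal in the drop folder, oldest first
   C_TEXT($dropFolder;$archiveFolder;$file)
   C_LONGINT($i)
   ARRAY TEXT($arrFiles;0)
   $dropFolder:=Get 4D folder(Database folder)+"JournalDrop"+Folder separator      // hypothetical
   $archiveFolder:=Get 4D folder(Database folder)+"JournalArchive"+Folder separator  // hypothetical

   DOCUMENT LIST($dropFolder;$arrFiles)
   SORT ARRAY($arrFiles;>)  // sequential numbering means name order = integration order
   For ($i;1;Size of array($arrFiles))
      $file:=$arrFiles{$i}
      If (Position(".tmp";$file)=0)  // skip copies that are still in flight
         INTEGRATE MIRROR LOG FILE($dropFolder+$file)
         MOVE DOCUMENT($dropFolder+$file;$archiveFolder+$file)  // keep it for the off-site archive
      End if
   End for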

> What do you do if [Server#1] dies to now point users to [Server#2], 
> assuming the Server application is a built application so that the server 
> address and port number are built into the (built) 4D Client?

Change the DNS entry, or manually copy the data file from #2 back to #1. We do 
the latter. It happens so infrequently that 10 minutes to recover isn't a big 
deal.

Jeff


[Off] Remote backup

2017-09-12 Thread bob.miller--- via 4D_Tech
Hi Jeff,

This is a great idea; I had not thought of it.  Let me repeat it back to 
see if I have it:

[Server#1] Runs 4D Server that users are actively using.  Code in a loop 
on the 4D Server calls "New Log File" every 10 minutes.  Code in the loop 
then calls LEP to a script to transfer the last journal file to 
[Server#2].

[Server#2] Runs 4D Server that users aren't aware of.  Code in a loop on 
the 4D Server detects that a complete journal file has arrived (as it will 
every ten minutes, my uncertainty is how it knows the file transfer is 
complete) and integrates it using INTEGRATE MIRROR LOG FILE.  Code in the 
loop then calls LEP to a script to transfer this journal file to 
[Server#3].  4D Backup runs on this machine and sends its backup file to 
[Server#3]

[Server#3] This is a repository, nothing is actively running on it, it 
could be iCloud, Amazon S3, etc.


Do I have this basically correct?

Then some technical questions:

It would seem that Server#1 never runs 4D Backup, because any restoration 
would be done using Server#2.  Nevertheless, if Server#1 started up and 
somehow choked, a "normal" behavior would be to restore from backup and 
integrate the log file.  Since this behavior would be turned off, what is 
the procedure you use if Server #1 won't start?  Do you copy the mirror on 
Server #2 to Server#1?

How do you handle the timing, so that [Server#2] knows that the file 
transfer is complete and it is OK to integrate the log file?  Does 
[Server#1] somehow send a 'I'm Done' message to [Server#2] in your 
implementation?

What do you do if [Server#1] dies to now point users to [Server#2], 
assuming the Server application is a built application so that the server 
address and port number are built into the (built) 4D Client?

Many thanks!

Bob Miller
Chomerics, a division of Parker Hannifin Corporation




Re: [Off] Remote backup

2017-09-11 Thread Ronald Rosell via 4D_Tech
Thanks for the suggestion Jeff!

I looked into this, and it’s a bit of overkill for our situation (I’ll explain 
why). I have come up with an approach that works for us, which I’ll describe 
below for those who are interested.

First, regarding the overkill, the main thing that’s being circumvented by 
using a mirror backup is the nightly brief pause while a full backup is done.  
Instead, the mirror setup involves sending log files every few minutes to the 
mirror server, which get integrated into that running database. Then the log is 
tossed and a new log is created, sent, and so on. So you end up with a fully 
replicated, running database (lagging only by the interval you’ve set for the 
journal files) which can be further backed up to the cloud by enabling full 
backups on the mirror site.

The nightly pause to do a full backup on our main server isn’t a big issue for 
us;  the three distinct databases on this server are mostly used during working 
hours.  One system, used for corporate training, is often used late at night, 
but much of the time people are logged in they’re watching 30-minute-long 
videos, so having database transactions pause for 20 seconds or so in the 
middle of the night is unlikely to be noticed even by people who are online.  

Also, the mirror-via-new-journals setup only works with 4D Server;  we’re 
running web licenses on regular 4D installs, and all user access is via 
browsers.  (You’ll never see me post on here about glitches in list boxes, etc. 
but I do write a lot of Javascript for the interface.)  We would need to be 
running at least two 4D Server licenses to do this, including a “running” 
instance on the backup site.

The problem I was running into yesterday is that cloud backup services (I tried 
a few) will copy the .journal log file during a scheduled backup but will ignore 
that file if you implement a Continuous Data Protection (CDP) plan.  So the log 
file wasn't getting backed up remotely throughout the day.  The way around it in 
our case was a script that does the following (you can do this with shell 
scripts, Python, etc. … we took advantage of Automator since we’re running on 
macOS):

1) Every ten minutes, our script duplicates the log file.  So, if your db is 
called mydatabase, you end up with something like mydatabase copy.journal 
alongside mydatabase.journal
2) The script then renames the duplicate Log file, changing the .journal to 
.txt.  That gives you mydatabase copy.txt.  That’s the secret to tricking the 
backup software into not seeing this as a temporary system file.
3) To keep from having lots of copies of the log stored locally, the script 
moves mydatabase copy.txt to a separate folder that’s being monitored for CDP, 
overwriting any previous version of mydatabase copy.txt that’s in there.  (We 
could also have left it in the main backup folder, and simply deleted the old 
mydatabase copy.txt before creating the new log duplicate.)
4) The CDP happily notices the .txt file and immediately uploads it to the 
remote storage site.   The service we’re using keeps multiple versions of files 
with the same name (mydatabase copy.txt) for 30 days, although we should only 
need the latest one, since it represents a full log since the last full backup.
5) Full backups take place around 3 AM Eastern time, and those are moved to the 
remote server right away.

The above is done for all three databases.
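
We did this with Automator, but just to make the steps concrete, here is the 
same copy/rename/move logic as a rough 4D-style sketch (the folder names are 
assumptions; any scripting language works equally well):

   // Illustration only: duplicate the live journal, then expose it to the CDP
   // service under a .txt name in a watched folder
   C_TEXT($dataFolder;$cdpFolder)
   $dataFolder:=Get 4D folder(Database folder)           // where mydatabase.journal lives (assumption)
   $cdpFolder:=$dataFolder+"CDP_Watch"+Folder separator  // hypothetical folder monitored by the CDP service

   COPY DOCUMENT($dataFolder+"mydatabase.journal";$dataFolder;"mydatabase copy.journal")
   If (Test path name($cdpFolder+"mydatabase copy.txt")=Is a document)
      DELETE DOCUMENT($cdpFolder+"mydatabase copy.txt")  // overwrite the previous copy
   End if
   MOVE DOCUMENT($dataFolder+"mydatabase copy.journal";$cdpFolder+"mydatabase copy.txt")  // rename + move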

All static files (web pages, SSL certs, etc.) are replicated in advance on the 
remote disaster recovery server; we’re actually doing this in a partition on 
one of our video servers, which are mostly used to run Wowza video software. In 
the event that our main server goes offline for any extended period (e.g. a 
hurricane or earthquake), all we need to do is fire up 4D instances on the 
disaster recovery system, input our licenses, and download the last full backup 
and last log from the remote backup.  Restore, integrate, and voila we’re back 
online.  

There are several domain names pointing to our IP (beyond the three distinct 
systems, different training customers like to have “vanity” URLs).  We only 
control the DNS for some of them.  For now, we recommend to our customers that 
when a hurricane is anticipated they shorten the time-to-live on their DNS 
records to 5 minutes;  that way, if we have to change IPs the effects will be 
picked up at the DNS level almost immediately.  I’m looking into whether 
there’s a way to do that part centrally, so that traffic to our IP is 
automatically rerouted to the backup site without changing individual DNS 
entries.

Ron
__

Ron Rosell
President
StreamLMS



> On Sep 11, 2017, at 7:26 AM, Jeffrey Kain via 4D_Tech <4d_tech@lists.4d.com> 
> wrote:
> 
> You should look into setting up a mirror backup. You could write code to 
> create a new journal file every 10 minutes (one line of code, not counting 
> the scheduling loop), and then a (.bat/python/vba) script to transfer that 
> journal file to the mirror server.

Re: [Off] Remote backup

2017-09-11 Thread Benedict, Tom via 4D_Tech
Ronald Rosell writes:

>I'm looking for ways to improve my off-site backup strategy and was wondering
>if any of you found an online service that reliably backs up the log 
>(.journal) file on a high-frequency basis ... without disrupting that file.

You should look into setting up a 4D mirror. There is a tech note from a few 
years ago which includes code. It's built around the concept of "log shipping", 
where the .journal file is truncated periodically and 'shipped' to a 'mirror' 4D 
Server where it is integrated. That gives you a 'mirror' that is identical to 
the source. The key 4D commands are New log file and INTEGRATE LOG FILE.

We've been running a mirror (actually 3 for redundancy and maintenance) for 
years. We use a shared network folder to 'ship' the log files, but you can use 
other mechanisms such as Web Service or a cloud based share.

Let me know if you have any questions.

Tom Benedict
Optum Inc

Re: [Off] Remote backup

2017-09-11 Thread Jeffrey Kain via 4D_Tech
You should look into setting up a mirror backup. You could write code to create 
a new journal file every 10 minutes (one line of code, not counting the 
scheduling loop), and then a (.bat/python/vba) script to transfer that journal 
file to the mirror server. On the mirror, integrate the journal and then copy 
it to a journal archive folder, which gets backed up safely to a remote site, 
Amazon S3, wherever.
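
For illustration, the production-side loop amounts to something like this (a 
rough sketch, not tested; the script path is invented, and real code would add 
error handling and a clean shutdown test instead of looping forever):

   // Production server: roll the journal every 10 minutes and hand the closed
   // segment to a transfer script
   C_TEXT($closedJournal;$cmd)
   Repeat
      $closedJournal:=New log file  // closes the current journal and returns the path of the closed segment
      $cmd:="/usr/local/bin/ship_journal.sh \""+$closedJournal+"\""  // hypothetical script
      LAUNCH EXTERNAL PROCESS($cmd)
      DELAY PROCESS(Current process;600*60)  // 600 seconds, expressed in ticks
   Until (False)  // loop forever; real code would test a shutdown flag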

Schedule a nightly 4D backup to run on the mirror, and ship it off site as 
well, and you'll be able to completely recover to within 10 minutes of the 
disaster, even if your entire data center gets destroyed.

Jeff

--
Jeffrey Kain
jeffrey.k...@gmail.com




> On Sep 10, 2017, at 3:28 PM, Ronald Rosell via 4D_Tech <4d_tech@lists.4d.com> 
> wrote:
> 
> Ideally we’d like the log files to be backed up remotely far more frequently 
> than the once-a-day full backups.  Every ten minutes should do it.  That way, 
> in the event of a catastrophic failure we could use the previous night’s 
> backup and the log file to reconstruct almost all of our data (up to the 
> 10-minute window).  


[Off] Remote backup

2017-09-11 Thread Ronald Rosell via 4D_Tech
Hi all,

I’m looking for ways to improve my off-site backup strategy and was wondering 
if any of you found an online service that reliably backs up the log (.journal) 
file on a high-frequency basis … without disrupting that file.

We have several servers (Macs) in two colocation data centers … one of which is 
in Florida, so you can imagine what’s on my mind.  Historically I’ve been using 
iBackup / iDrive to move our full backups and log files every night to “the 
cloud” (you know, where computers and angels live).  We also have local 
redundancy in the data centers (external RAID 1, or mirrored, arrays for the 
running database, with backups and the current log file on another drive that’s 
not part of the RAID array).  That protects us against the most common 
problems:  1) individual hard drive failure, 2) server failure, or 3) even 
failure of the RAID system.  We can quickly recover from all three of those.  
Catastrophic failure of the data center is the one thing that local backups 
don’t help with.  These data centers are telco bunkers that won’t blow away in 
a storm, but they could go offline for an extended period, hence the remote 
backups.

Ideally we’d like the log files to be backed up remotely far more frequently 
than the once-a-day full backups.  Every ten minutes should do it.  That way, 
in the event of a catastrophic failure we could use the previous night’s backup 
and the log file to reconstruct almost all of our data (up to the 10-minute 
window).   

I’ve tried iDrive’s “Continuous Data Protection” service, but it’s not working 
for this file and so far they can’t explain why.  I also tried Google’s new 
Backup & Sync app to sync the file to Google Drive, but similarly it’s not 
working; it doesn’t recognize the file as changing.  I suspect this is because 
these services may be relying upon the Finder’s modification date & time, which 
is also not updating as the system writes to the log (and the file grows).  
Haven’t tried iCloud for this yet, but that’s next.  

Has anyone come up with a mechanism for high-frequency remote backups of the 
logs that a) works and b) doesn’t somehow disrupt the log’s ability to receive 
new entries from the running database?
__

Ron Rosell
President
StreamLMS

301-3537 Oak Street
Vancouver, BC V6H 2M1
Canada
