I use stand-in clients that have the various NASes' filesystems mounted with
full read permissions. I control the OS (RHEL or CentOS) for stability and
can use the EPEL packages for the client, which are the latest version available
to the community. Is it the most efficient? Maybe not, but it is the most
expedient. Not only that, but if I replace the storage with something else, I
can always get the data there. The NAS's weakest link is usually the
controller. It doesn't matter how HA the disks are if the controller goes bad.
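As a rough sketch of that setup (client name, address, and mount paths below are all hypothetical), the stand-in client is just an ordinary FD on a box that has the NAS exports mounted, e.g.:

  # bacula-dir.conf on the Director (hypothetical names/paths)
  Client {
    Name = nas-proxy-fd
    Address = nas-proxy.example.com
    Password = "xxxxx"
    FDPort = 9102
    Catalog = MyCatalog
  }

  FileSet {
    Name = "NAS-Share1"
    Include {
      Options { signature = MD5 }
      File = /mnt/nas1/share1      # NFS mount of the NAS export
    }
  }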
Patti
From: Ian Douglas <i...@zti.co.za>
Organization: Zero 2 Infinity
Reply-To: i...@zti.co.za
Date: Thursday, April 21, 2016 at 1:37 PM
To: "Clark, Patricia A." <clar...@ornl.gov>
Cc: bacula-users@lists.sourceforge.net, Josh Fisher <jfis...@pvct.com>
Subject: Re: [Bacula-users] How to recover from lost connection
hi All
On Thursday 21 April 2016 17:00:14 Clark, Patti wrote:
If you are using spooling (recommended), your spool size parameters are used
to control the size of each job's spool and the total spool space available.
The example below shows a 50GB job spool size and a total of 1TB of spool
space. You will need to try sizes that are appropriate for your
environment and see what performs best for you. When the job spool size is
reached, the spooled data is written to tape at that point; spooling and
writing to tape then continue until the end of the job's data has been
reached. Also, for something like a NAS, it's best to break the backups into
more manageable chunks. Don't try to back up the entire NAS in one job. And
yes, use the Heartbeat Interval feature.
Maximum Spool Size = 1000GB;
Maximum Job Spool Size = 50GB;
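For context, those two directives live in the Device resource of bacula-sd.conf, spooling is switched on per job, and a Heartbeat Interval in the FD keeps the connection alive during long despools. A rough sketch, with hypothetical names and paths:

  # bacula-sd.conf (hypothetical device/spool paths)
  Device {
    Name = LTO-Drive
    Media Type = LTO-6
    Archive Device = /dev/nst0
    Spool Directory = /var/spool/bacula
    Maximum Spool Size = 1000GB       # total spool space on disk
    Maximum Job Spool Size = 50GB     # despool to tape every 50GB per job
  }

  # bacula-dir.conf -- enable data spooling for the job
  Job {
    Name = "NAS-Share1-Backup"
    Spool Data = yes
    # other Job directives as usual
  }

  # bacula-fd.conf -- keep the connection alive while the SD despools
  FileDaemon {
    Name = nas-proxy-fd
    Heartbeat Interval = 60
  }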
Okay, your explanation is clearer to me than what is in the manual. I was going
to use Maximum File Size as a way to "chunk" it, even though I was dubious
about it.
Will implement your suggestions, thanks.
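One way to break a NAS into smaller jobs along the lines suggested above (share names, paths, and the JobDefs are hypothetical) is a separate FileSet and Job per export:

  FileSet {
    Name = "NAS-Share2"
    Include {
      Options { signature = MD5 }
      File = /mnt/nas1/share2
    }
  }

  Job {
    Name = "NAS-Share2-Backup"
    Client = nas-proxy-fd
    FileSet = "NAS-Share2"
    JobDefs = "DefaultJob"    # Type, Level, Storage, Pool inherited from JobDefs
  }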
You don't mention which version of Bacula you are using. If it is
7.4.x, it has a resume command that will restart failed jobs roughly from
where they left off.
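(For reference, and assuming the 7.x bconsole syntax, resuming an incomplete job looks roughly like this; the jobid is whatever list jobs reports for the failed run, and exact option names may differ by release:)

  *list jobs
  *resume incomplete jobid=1234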
Versions vary. The Director and FD on this machine (Gentoo) are the same, 7.0.5, but
the NAS boxes are running FreeNAS and I had to compile a version for them in
VirtualBox; I don't think the FreeBSD releases are at that level yet. I was also
trying to back up a site on the net running CentOS 6, which had its own
issues. Eventually I gave up; I could not get the remote FD and the local SD to talk
to each other. I think the problem is somewhere between my firewall (IPFire)
and the SD (CentOS 7 on an HP server). So now I rsync the remote site to here and back up
from here. The initial rsync took 20 hours, so it's probably better that way.
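(For what it's worth, the staging step is just a plain rsync pull into a local directory that the local FD then backs up; the host and paths below are made up:)

  rsync -az --delete backup@remote.example.com:/srv/data/ /srv/staging/remote-site/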
It would be nice if the various OSs could get their act together and get the
versions in sync :-)
Thanks, Ian
--
i...@zti.co.za  http://www.zti.co.za
Zero 2 Infinity - The net.works
Phone +27-21-975-7273