Re: [Bacula-users] Remote backup through NAT?
I had a lot of problems getting my setup to work over NAT too. If you want, email me directly and I can provide my full configs and help out. I think what ended up fixing it for me was updating all of the Bacula components to 7.2.0; I had a real struggle trying to get it to work with 5.x. Here is what I would recommend:

- For consistency, make sure you are running Bacula 7.2.0 on all computers. (Not sure if this is possible for Microsoft Windows clients.)
- On the firewall for the internal LAN where your Bacula server and storage daemon are, open/forward ports 9101-9103.
- In your Bacula server's "Client" definition, make sure the "Address" is the public IP or hostname of the client server. Mine looks like this:

# On the Bacula Server #
Client {
  Name = web221.mydomain.com-fd
  Password = mypassword
  Address = web221.mydomain.com
  FDPort = 9102
  Catalog = MyCatalog
  File Retention = 30 days
  Job Retention = 6 months
  TLS Enable = yes
  TLS Require = yes
  TLS Certificate = /etc/bacula/certs/web221.mydomain.com.crt
  TLS Key = /etc/bacula/certs/web221.mydomain.com-daemon.key
  TLS CA Certificate File = /etc/bacula/certs/cacert.pem
  AutoPrune = yes
}

- On the client's bacula-fd.conf, mine looks like this:

# On the Linux Client #
Director {
  Name = bacula-dir
  Password = mypassword
  TLS Certificate = /etc/bacula/certs/web221.mydomain.com.crt
  TLS Key = /etc/bacula/certs/web221.mydomain.com-daemon.key
  TLS CA Certificate File = /etc/bacula/certs/cacert.pem
  TLS Enable = yes
  TLS Require = yes
}

FileDaemon {
  Name = web221.mydomain.com-fd
  FDport = 9102
  WorkingDirectory = /var/spool/bacula
  Pid Directory = /var/run
  Maximum Concurrent Jobs = 20
  # Plugin Directory = /usr/lib64/bacula
  TLS Enable = yes
  TLS Require = yes
  TLS Certificate = /etc/bacula/certs/web221.mydomain.com.crt
  TLS Key = /etc/bacula/certs/web221.mydomain.com-daemon.key
  TLS CA Certificate File = /etc/bacula/certs/cacert.pem
  PKI Signatures = Yes  # Enable Data Signing
  PKI Encryption = Yes  # Enable Data Encryption
  PKI Keypair = "/etc/bacula/bacula_disk_keys/fd-web221.mydomain.com.pem"  # Public and Private Keys
  PKI Master Key = "/etc/bacula/bacula_disk_keys/master.cert"  # ONLY the Public Key
}

--
Wesley Render, Consultant
OtherData
www.otherdata.com

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
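One more thing that can trip people up with NAT'd clients: the File Daemon connects back to the Storage Daemon on port 9103 using the Address in the Director's Storage resource, so that address also has to be reachable from the client's side of the NAT. A rough sketch (the resource name, hostname, and password here are placeholders, not from my actual config):

```conf
# On the Bacula Server (bacula-dir.conf) -- names/values are placeholders
Storage {
  Name = File-sd
  # Must be a public IP or hostname the NAT'd client can reach,
  # not an internal LAN address:
  Address = backup.mydomain.com
  SDPort = 9103
  Password = "sd-password"
  Device = FileStorage
  Media Type = File
}
```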
Re: [Bacula-users] Catalog Backup Job - Volume Retention Period?
Hi Ana,

Thanks for the information on this. We don't have a requirement for 2 or 3, so I have just set our catalog to back up to a pool with a weekly retention period.

Thanks!

--
Wesley Render, Consultant
OtherData
[Bacula-users] bconsole won't connect to director
Hi Tim,

Do you have SELinux in Enforcing mode? Maybe check your /var/log/audit/audit.log for anything being blocked. Also, on our systems I found it easier to leave the Director Name as just bacula-dir.

Do you have bacula1.example.com in your /etc/hosts file? I think it should look something like this:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4 bacula1 bacula1.example.com
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

--
Wesley Render, Consultant
OtherData
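For the record, the matching piece on the console side would be something like this in bconsole.conf (the password shown is a placeholder; it has to match the console password configured in the Director):

```conf
# /etc/bacula/bconsole.conf -- sketch, values are placeholders
Director {
  Name = bacula-dir          # must match the Name in the Director resource of bacula-dir.conf
  DIRport = 9101
  Address = localhost        # or the host the Director runs on
  Password = "dir-password"  # must match the Password in bacula-dir.conf's Director resource
}
```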
[Bacula-users] Catalog Backup Job - Volume Retention Period?
I was just wondering what people would recommend for the retention period of the catalog backup job. For example, should I set the catalog backup job to go to a volume pool with a retention period of 1 week? By default it looks like it goes to the Default pool, which is set to 365 days on my system (I think this would get too large).

Thanks!

--
Wesley Render, Consultant
OtherData
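For anyone curious, the weekly-retention approach I ended up going with can be sketched roughly like this (the pool name and label format are illustrative, not from my real config):

```conf
# bacula-dir.conf -- sketch; names are illustrative
Pool {
  Name = catalog-weekly
  Pool Type = Backup
  Volume Retention = 7 days   # keep a week of catalog dumps
  Recycle = yes
  AutoPrune = yes
  LabelFormat = catalog-
}
```

and then point the BackupCatalog job at it with Pool = catalog-weekly.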
Re: [Bacula-users] Question about Volume Pools and Strategy
In case this helps anyone: I ended up having problems after setting volume limits. For example, I decided to adjust our incremental backups to run every 4 hours and started hitting the volume limits, with errors. From what I have read in the documentation, setting "Volume Use Duration" will effectively cause new volumes to be created and old ones to be recycled based on the Volume Retention. Since I monitor the servers' disk space and this is just disk-based backup, if I get an alert that the backup storage is filling up I will look at reducing the retention periods on the volume sets. Here is what I have so far:

Pool {
  Name = office-p-monthly
  Pool Type = Backup
  Volume Retention = 6 months
  Recycle = yes
  AutoPrune = yes
  Action On Purge = Truncate
  LabelFormat = office-p-monthly-
  Volume Use Duration = 23h
  Maximum Volume Bytes = 100G
}

Pool {
  Name = office-p-weekly
  Pool Type = Backup
  Volume Retention = 1 months
  Recycle = yes
  AutoPrune = yes
  Action On Purge = Truncate
  LabelFormat = office-p-weekly-
  Volume Use Duration = 23h
  Maximum Volume Bytes = 100G
}

Pool {
  Name = office-p-daily
  Pool Type = Backup
  Volume Retention = 14 days
  Recycle = yes
  AutoPrune = yes
  Action On Purge = Truncate
  LabelFormat = office-p-daily-
  Volume Use Duration = 23h
  Maximum Volume Bytes = 100G
}

Pool {
  Name = datacenter-p-monthly
  Pool Type = Backup
  Volume Retention = 6 months
  Recycle = yes
  AutoPrune = yes
  Action On Purge = Truncate
  LabelFormat = datacenter-p-monthly-
  Volume Use Duration = 23h
  Maximum Volume Bytes = 100G
}

Pool {
  Name = datacenter-p-weekly
  Pool Type = Backup
  Volume Retention = 1 months
  Recycle = yes
  AutoPrune = yes
  Action On Purge = Truncate
  LabelFormat = datacenter-p-weekly-
  Volume Use Duration = 23h
  Maximum Volume Bytes = 100G
}

Pool {
  Name = datacenter-p-daily
  Pool Type = Backup
  Volume Retention = 14 days
  Recycle = yes
  AutoPrune = yes
  Action On Purge = Truncate
  LabelFormat = datacenter-p-daily-
  Volume Use Duration = 23h
  Maximum Volume Bytes = 100G
}

Here are samples of the jobs:

Job {
  Name = web221-domainname
  Type = Backup
  Level = Incremental
  Client = web221.domainname.com-fd
  FileSet = OurFileSet
  Schedule = WeeklyCycle
  Storage = horde-sd
  Pool = Default
  Full Backup Pool = datacenter-p-monthly
  Incremental Backup Pool = datacenter-p-daily
  Differential Backup Pool = datacenter-p-weekly
  Accurate = Yes
  Messages = Standard
}

Job {
  Name = web220-domainname
  Type = Backup
  Level = Incremental
  Client = web220.domainname.com-fd
  FileSet = OurFileSet
  Schedule = WeeklyCycle
  Storage = office-sd
  Pool = Default
  Full Backup Pool = office-p-monthly
  Incremental Backup Pool = office-p-daily
  Differential Backup Pool = office-p-weekly
  Accurate = Yes
  Messages = Standard
}

--
Wesley Render, Consultant
OtherData
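For context, the WeeklyCycle schedule those jobs reference is essentially the stock sample shipped in bacula-dir.conf; the level chosen per run is what selects between the Full/Differential/Incremental pool overrides in the Job. Roughly (times are just the sample defaults, adjust to taste):

```conf
Schedule {
  Name = "WeeklyCycle"
  Run = Level=Full 1st sun at 23:05              # -> Full Backup Pool
  Run = Level=Differential 2nd-5th sun at 23:05  # -> Differential Backup Pool
  Run = Level=Incremental mon-sat at 23:05       # -> Incremental Backup Pool
}
```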
Re: [Bacula-users] Question about Volume Pools and Strategy
It seems to be working a lot better with different volume pools at each storage location. I have one server backing up to the server at our office and 2 servers backing up to the datacenter. Here is what I have so far:

Pool {
  Name = office-p-monthly
  Pool Type = Backup
  Volume Retention = 6 months
  Recycle = yes
  AutoPrune = yes
  LabelFormat = office-p-monthly-
  Maximum Volume Jobs = 1
  Maximum Volumes = 9
}

Pool {
  Name = office-p-weekly
  Pool Type = Backup
  Maximum Volume Jobs = 1
  Volume Retention = 1 months
  Recycle = yes
  AutoPrune = yes
  LabelFormat = office-p-weekly-
  Maximum Volumes = 7
}

Pool {
  Name = office-p-daily
  Pool Type = Backup
  Maximum Volume Jobs = 6
  Volume Retention = 14 days
  Recycle = yes
  AutoPrune = yes
  LabelFormat = office-p-daily-
  Maximum Volumes = 6
}

Pool {
  Name = datacenter-p-monthly
  Pool Type = Backup
  Volume Retention = 6 months
  Recycle = yes
  AutoPrune = yes
  LabelFormat = datacenter-p-monthly-
  Maximum Volume Jobs = 2
  Maximum Volumes = 9
}

Pool {
  Name = datacenter-p-weekly
  Pool Type = Backup
  Maximum Volume Jobs = 2
  Volume Retention = 1 months
  Recycle = yes
  AutoPrune = yes
  LabelFormat = datacenter-p-weekly-
  Maximum Volumes = 7
}

Pool {
  Name = datacenter-p-daily
  Pool Type = Backup
  Maximum Volume Jobs = 2
  Volume Retention = 14 days
  Recycle = yes
  AutoPrune = yes
  LabelFormat = datacenter-p-daily-
  Maximum Volumes = 6
}

Quoting Carlo Filippetto <carlo.filippe...@gmail.com>:

> I think you have to use one set of pools for every storage.
> I think that you can write volumes of the same pool into different
> storages; the problem may arrive when you have to restore...
>
> Try a restore job...
>
> This is my Schedule:
>
> Schedule {
>   Name = "Custom"
>   Run = Level=Full Storage=ST-data Pool=P-Monthly 1st sat at 21:30
>   Run = Level=Differential Storage=ST-data Pool=P-Weekly 2nd-5th sat at 21:30
>   Run = Level=Incremental Storage=ST-data Pool=P-Daily sun-fri at 22:00
> }
>
> As you can see I set the Storage and Pool on every line; you can change it.
>
> Why do you like to use a single pool? If you have 2 storages it may be
> clearer and easier to find every single volume if you have different pools.
>
> Bye
>
> 2015-11-04 0:38 GMT+01:00 Wesley Render <wren...@otherdata.com>:
>
>> Should each storage daemon/geographic storage location have its own
>> set of Volume Pools? Or can I share one set of Volume Pools between
>> all of the storage daemons/storage locations?
>>
>> I am using auto labelling as well and it works great.
>>
>> --
>> Wesley Render, Consultant
>> OtherData

--
Wesley Render, Consultant
OtherData
Re: [Bacula-users] Question about Volume Pools and Strategy
Ok, thanks Josh. I've already created pools for each storage location and done the initial full backups, so I will most likely stick with this method. So far the backups appear to run a lot better using different pools for each storage location.

Thanks,

--
Wesley Render, Consultant
OtherData
[Bacula-users] Question about Volume Pools and Strategy
I have recently started using Bacula and have a couple of questions regarding volume pools. I am using version 7.2, and we have 4 Linux servers with a total of about 100GB of data to back up.

1. We have two storage devices in different locations (because of bandwidth limitations). Should we be creating different Volume Pools for each storage location? I've tried testing with one pool and with two pools. When I use one pool and a backup job runs, it displays the error "Marking Volume "Vol-0001" in Error in Catalog." and then continues to run. When I set up two different volume pools it doesn't display this error.

2. I've noticed that some people recommend setting up different volume pools for Full, Differential and Incremental jobs. Is this still a recommended strategy for Bacula when backing up to disk, and if so, when would someone use it? The documentation here http://blog.bacula.org/whitepapers/CommunityDiskBackup.pdf doesn't mention this.

I don't want to set things up, have our volumes grow too large, and then have to re-do everything.

Thank you,

--
Wesley Render, Consultant
OtherData
Re: [Bacula-users] Question about Volume Pools and Strategy
Should each storage daemon/geographic storage location have its own set of Volume Pools? Or can I share one set of Volume Pools between all of the storage daemons/storage locations?

I am using auto labelling as well and it works great.

--
Wesley Render, Consultant
OtherData
Re: [Bacula-users] Question about Volume Pools and Strategy
Thanks Carlo. This is very helpful. I also found this, which I missed before: http://www.bacula.org/7.0.x-manuals/en/main/Automated_Disk_Backup.html

--
Wesley Render, Consultant
OtherData
Re: [Bacula-users] Question about Volume Pools and Strategy
Is anyone able to clarify question number 1? I should be all set after that. Thanks!

--
Wesley Render, Consultant
OtherData