Hello Ziga,

Your client is probably too old for the 9.6.x Director.
Even CentOS 6 is old and most likely at end of life.
Other than that, you can try some tuning:
http://www.bacula.lat/tuning-better-performance-and-treatment-of-backup-bottlenecks/?lang=en
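
For example, the usual knobs there are job concurrency and attribute
spooling. A minimal sketch (illustrative values only, adjust to your
hardware):

  # bacula-dir.conf
  Director {
    ...
    Maximum Concurrent Jobs = 20   # let several jobs run at once
  }
  Storage {
    ...
    Maximum Concurrent Jobs = 10   # and allow it on the storage too
  }

  # in each Job/JobDefs, spool attributes so catalog inserts do not
  # stall the data stream:
  SpoolAttributes = yes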

Rgds.
--
MSc Heitor Faria
CEO Bacula LatAm
mobile1: + 1 909 655-8971
mobile2: + 55 61 98268-4220

América Latina
[ http://bacula.lat/]

-------- Original Message --------
From: Žiga Žvan <ziga.z...@cetrtapot.si>
Sent: Tuesday, October 6, 2020 03:11 AM
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] performance&design&configuration challenges

>Hi,
>
>I'm having some performance challenges. I would appreciate an educated
>guess from an experienced Bacula user.
>
>I'm replacing old backup software that writes to a tape drive with
>Bacula writing to disk. The results are:
>a) Windows file server backup from a deduplicated drive (1,700,000
>files, 900 GB of data, 600 GB used after deduplication). *Bacula: 12
>hours, old software: 2.5 hours*
>b) Linux file server backup (50,000 files, 166 GB of data). *Bacula:
>3.5 hours, old software: 1 hour*.
>
>I have tried to:
>a) turn off compression & encryption. The result is the same: backup
>speed around 13 MB/sec.
>b) change the destination storage (from new IBM storage attached over
>NFS to a local SSD attached to the Bacula server virtual machine).
>That cut the Linux file server backup from 3.5 hours to 2 hours 50
>minutes. A sequential write test with the Linux dd command shows 300
>MB/sec for the IBM storage and 600 MB/sec for the local SSD (far
>better than the actual throughput).
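>
>For reference, the dd test was along these lines (illustrative path;
>a sequential dd is an upper bound, not Bacula's real access pattern):
>
>  dd if=/dev/zero of=/mnt/ibm_storage/testfile bs=1M count=10240 oflag=direct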
>
>The network bandwidth is 1 Gbit/s on the client and 10 Gbit/s on the
>server, so I guess that is not the bottleneck; however, I have noticed
>that bacula-fd on the client side runs at 100% CPU.
>
>I'm using:
>- Bacula server version 9.6.5
>- Bacula client version 5.2.13 (stock from the CentOS 6 repo).
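>
>(These can be double-checked from bconsole with, e.g.,
>status client=oradev02.kranj.cetrtapot.si-fd, which prints the
>bacula-fd version in its first lines.)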
>
>Any idea what is wrong and/or what performance I should expect?
>I would also appreciate answers to the questions below.
>
>Kind regards,
>Ziga Zvan
>
>
>
>
>On 05.08.2020 10:52, Žiga Žvan wrote:
>>
>> Dear all,
>> I have tested Bacula (9.6.5) and I must say I'm quite happy with
>> the results (e.g. compression, encryption, configurability). However,
>> I have some configuration/design questions I hope you can help me
>> with.
>>
>> Regarding the job schedule, I would like to:
>> - create incremental daily backup (retention 1 week)
>> - create weekly full backup (retention 1 month)
>> - create monthly full backup (retention 1 year)
>>
>> I am using the dummy cloud driver that writes to local file storage.
>> A volume is a directory with fileparts. I would like to have separate
>> volumes/pools for each client, and I would like to delete the data on
>> disk after the retention period expires. If possible, I would like to
>> delete just the fileparts belonging to expired backups.
>>
>> Questions:
>> a) At the moment, I'm using two backup job definitions per client and
>> a central schedule definition for all my clients. I have noticed that
>> my incremental job gets promoted to Full after the monthly backup
>> ("No prior Full backup Job record found"), because the monthly backup
>> is a separate job and Bacula searches for Full backups within the
>> same job. Could you please suggest a better configuration? If
>> possible, I would like to keep the central schedule definition (if I
>> manipulate pools in a schedule resource, I would need to define
>> schedules per client).
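>>
>> For illustration, I imagine a single backup job per client, with the
>> monthly full folded into the shared schedule via level overrides,
>> would avoid the promotion; something like (hypothetical name):
>>
>> Schedule {
>>   Name = "WeeklyAndMonthlyCycle"
>>   Run = Level=Full 1st fri at 23:05        # monthly full
>>   Run = Level=Full 2nd-5th fri at 23:05    # weekly full
>>   Run = Level=Incremental mon-thu at 23:05
>> }
>>
>> ...but then I see no way to send the 1st-Friday full to the
>> per-client monthly pool without a Pool override in the schedule,
>> which would no longer be per-client.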
>>
>> b) I would like to delete expired backups on disk (and in the catalog
>> as well). At the moment I'm using one volume per client in each of
>> the daily/weekly/monthly pools. Within a volume there are fileparts
>> belonging to expired backups (e.g. part.1-23 in the output below). I
>> have tried to solve this with purge/prune scripts in my BackupCatalog
>> job (as suggested in the whitepapers) but the data does not get
>> deleted. Is there any way to delete fileparts? Should I create
>> separate volumes after the retention period? Please suggest a better
>> configuration.
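>>
>> Concretely, the BackupCatalog job below runs
>>   prune expired volume yes
>> before the backup and
>>   purge volume action=all allpools storage=FSOciCloudStandard
>> after it, yet the expired part.N files remain on disk.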
>>
>> c) Do I need a restore job for each client? I would just like to
>> restore a backup on the same client, defaulting to the /restore
>> folder... When I use the bconsole "restore all" command, the wizard
>> asks me all the questions (e.g. 5 - last backup for a client; which
>> client, fileset...) but at the end it asks for a restore job, and
>> choosing one overrides the previously selected values (e.g. the
>> client).
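>>
>> For illustration, what I am after is one generic restore job that
>> bconsole can retarget per client, along the lines of (if I read the
>> restore command's modifiers correctly):
>>
>>   restore client=oradev02.kranj.cetrtapot.si-fd restoreclient=oradev02.kranj.cetrtapot.si-fd where=/restore select current all done
>>
>> rather than one restore Job resource per client.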
>>
>> d) At the moment, I have not implemented autochanger functionality.
>> Clients compress/encrypt the data and send it to the Bacula server,
>> which writes it to one central storage system. Jobs are processed
>> sequentially (one at a time). Do you expect any significant
>> performance gain if I implement an autochanger in order to have jobs
>> run simultaneously?
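>>
>> For illustration, I understand a disk "autochanger" is essentially
>> the stock bacula-sd.conf pattern with several file devices, roughly
>> (illustrative path, trimmed to two devices):
>>
>> Autochanger {
>>   Name = FileChgr1
>>   Device = FileChgr1-Dev1, FileChgr1-Dev2
>>   Changer Command = ""
>>   Changer Device = /dev/null
>> }
>> Device {
>>   Name = FileChgr1-Dev1
>>   Media Type = File1
>>   Archive Device = /mnt/backup_bacula
>>   LabelMedia = yes; Random Access = yes;
>>   AutomaticMount = yes; RemovableMedia = no; AlwaysOpen = no;
>>   Maximum Concurrent Jobs = 5
>> }
>>
>> ...together with Maximum Concurrent Jobs raised in the Director,
>> Client and Storage resources, if that is the right direction.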
>>
>> The relevant part of the configuration is below.
>>
>> Looking forward to moving into production...
>> Kind regards,
>> Ziga Zvan
>>
>>
>> *Volume example* (fileparts 1-23 should be deleted):
>> [root@bacula cetrtapot-daily-vol-0022]# ls -ltr
>> total 0
>> -rw-r--r--. 1 bacula disk       262 Jul 28 23:05 part.1
>> -rw-r--r--. 1 bacula disk 999935988 Jul 28 23:06 part.2
>> -rw-r--r--. 1 bacula disk 999935992 Jul 28 23:07 part.3
>> -rw-r--r--. 1 bacula disk 999936000 Jul 28 23:08 part.4
>> -rw-r--r--. 1 bacula disk 999935981 Jul 28 23:09 part.5
>> -rw-r--r--. 1 bacula disk 328795126 Jul 28 23:10 part.6
>> -rw-r--r--. 1 bacula disk 999935988 Jul 29 23:09 part.7
>> -rw-r--r--. 1 bacula disk 999935995 Jul 29 23:10 part.8
>> -rw-r--r--. 1 bacula disk 999935981 Jul 29 23:11 part.9
>> -rw-r--r--. 1 bacula disk 999935992 Jul 29 23:12 part.10
>> -rw-r--r--. 1 bacula disk 453070890 Jul 29 23:12 part.11
>> -rw-r--r--. 1 bacula disk 999935995 Jul 30 23:09 part.12
>> -rw-r--r--. 1 bacula disk 999935993 Jul 30 23:10 part.13
>> -rw-r--r--. 1 bacula disk 999936000 Jul 30 23:11 part.14
>> -rw-r--r--. 1 bacula disk 999935984 Jul 30 23:12 part.15
>> -rw-r--r--. 1 bacula disk 580090514 Jul 30 23:13 part.16
>> -rw-r--r--. 1 bacula disk 999935994 Aug  3 23:09 part.17
>> -rw-r--r--. 1 bacula disk 999935936 Aug  3 23:12 part.18
>> -rw-r--r--. 1 bacula disk 999935971 Aug  3 23:13 part.19
>> -rw-r--r--. 1 bacula disk 999935984 Aug  3 23:14 part.20
>> -rw-r--r--. 1 bacula disk 999935973 Aug  3 23:15 part.21
>> -rw-r--r--. 1 bacula disk 999935977 Aug  3 23:17 part.22
>> -rw-r--r--. 1 bacula disk 108461297 Aug  3 23:17 part.23
>> -rw-r--r--. 1 bacula disk 999935974 Aug  4 23:09 part.24
>> -rw-r--r--. 1 bacula disk 999935987 Aug  4 23:10 part.25
>> -rw-r--r--. 1 bacula disk 999935971 Aug  4 23:11 part.26
>> -rw-r--r--. 1 bacula disk 999936000 Aug  4 23:12 part.27
>> -rw-r--r--. 1 bacula disk 398437855 Aug  4 23:12 part.28
>>
>> *Cache (deleted as expected):*
>>
>> [root@bacula cetrtapot-daily-vol-0022]# ls -ltr 
>> /mnt/backup_bacula/cloudcache/cetrtapot-daily-vol-0022/
>> total 4
>> -rw-r-----. 1 bacula disk 262 Jul 28 23:05 part.1
>>
>> *Relevant part of central configuration*
>>
>> # Backup the catalog database (after the nightly save)
>> Job {
>>   Name = "BackupCatalog"
>>   JobDefs = "CatalogJob"
>>   Level = Full
>>   FileSet="Catalog"
>>   Schedule = "WeeklyCycleAfterBackup"
>>   RunBeforeJob = "/opt/bacula/scripts/make_catalog_backup.pl MyCatalog"
>>   # This deletes the copy of the catalog
>>   RunAfterJob  = "/opt/bacula/scripts/delete_catalog_backup"
>>   #Prune
>>   RunScript {
>>     Console = "prune expired volume yes"
>>     RunsWhen = Before
>>     RunsOnClient= No
>>   }
>>   #Purge
>>   RunScript {
>>     RunsWhen=After
>>     RunsOnClient=No
>>     Console = "purge volume action=all allpools 
>> storage=FSOciCloudStandard"
>>   }
>>   Write Bootstrap = "/opt/bacula/working/%n.bsr"
>>   Priority = 11                   # run after main backup
>> }
>>
>> Schedule {
>>   Name = "WeeklyCycle"
>>   Run = Full 2nd-5th fri at 23:05
>>   Run = Incremental mon-thu at 23:05
>> }
>>
>> Schedule {
>>   Name = "MonthlyFull"
>>   Run = Full 1st fri at 23:05
>> }
>>
>> # This schedule does the catalog. It starts after the WeeklyCycle
>> Schedule {
>>   Name = "WeeklyCycleAfterBackup"
>>   Run = Full sun-sat at 23:10
>> }
>>
>>
>>
>> *Configuration specific to each client*
>> Client {
>>   Name = oradev02.kranj.cetrtapot.si-fd
>>   Address = oradev02.kranj.cetrtapot.si    #IP or fqdn
>>   FDPort = 9102
>>   Catalog = MyCatalog
>>   Password = "something"               # FileDaemon password: must match the client side
>>   File Retention = 60 days            # 60 days
>>   Job Retention = 6 months            # six months
>>   AutoPrune = yes                     # Prune expired Jobs/Files
>> }
>>
>> ##Job for backup ##
>> JobDefs {
>>   Name = "oradev02-job"
>>   Type = Backup
>>   Level = Incremental
>>   Client = oradev02.kranj.cetrtapot.si-fd   # Client name: must match bacula-fd.conf on the client side
>>   FileSet = "oradev02-fileset"
>>   Schedule = "WeeklyCycle"                  # schedule: defined in bacula-dir.conf
>> #  Storage = FSDedup
>>   Storage = FSOciCloudStandard
>>   Messages = Standard
>>   Pool = oradev02-daily-pool
>>   SpoolAttributes = yes                   # Better for backup to disk
>>   Max Full Interval = 15 days             # Ensure that full backup exist
>>   Priority = 10
>>   Write Bootstrap = "/opt/bacula/working/%c.bsr"
>> }
>>
>> Job {
>>   Name = "oradev02-backup"
>>   JobDefs = "oradev02-job"
>>   Full Backup Pool = oradev02-weekly-pool
>>   Incremental Backup Pool = oradev02-daily-pool
>> }
>>
>> Job {
>>   Name = "oradev02-monthly-backup"
>>   JobDefs = "oradev02-job"
>>   Pool = oradev02-monthly-pool
>>   Schedule = "MonthlyFull"  #schedule : see in bacula-dir.conf 
>> (monthly pool with longer retention)
>> }
>>
>> ## Job for restore ##
>> Job {
>>   Name = "oradev02-restore"
>>   Type = Restore
>>   Client=oradev02.kranj.cetrtapot.si-fd
>>   Storage = FSOciCloudStandard
>> # The FileSet and Pool directives are not used by Restore Jobs
>> # but must not be removed
>>   FileSet="oradev02-fileset"
>>   Pool = oradev02-weekly-pool
>>   Messages = Standard
>>   Where = /restore
>> }
>>
>> FileSet {
>>   Name = "oradev02-fileset"
>>   Include {
>>     Options {
>>       signature = MD5
>>       compression = GZIP
>>     }
>>  #   File = "D:/projekti"   #Windows example
>>  #   File = /zz            #Linux example
>>      File = /backup/export
>>   }
>>
>> ## Exclude  ##
>>   Exclude {
>>     File = /opt/bacula/working
>>     File = /tmp
>>     File = /proc
>>     File = /sys
>>     File = /.journal
>>     File = /.fsck
>>   }
>> }
>>
>> Pool {
>>   Name = oradev02-monthly-pool
>>   Pool Type = Backup
>>   Recycle = yes                       # Bacula can automatically recycle Volumes
>>   AutoPrune = no                      # Prune expired volumes (catalog job handles this)
>>   Action On Purge = Truncate          # Allow volume truncation
>>   #Volume Use Duration = 14h          # Create a new volume for each backup
>>   Volume Retention = 365 days         # one year
>>   Maximum Volume Bytes = 50G          # Limit volume size to something reasonable
>>   Maximum Volumes = 100               # Limit number of Volumes in Pool
>>   Label Format = "oradev02-monthly-vol-"   # Auto label
>>   Cache Retention = 1 days            # Cloud specific (delete local cache after one day)
>> }
>>
>>
>> Pool {
>>   Name = oradev02-weekly-pool
>>   Pool Type = Backup
>>   Recycle = yes                       # Bacula can automatically recycle Volumes
>>   AutoPrune = no                      # Prune expired volumes (catalog job handles this)
>>   Action On Purge = Truncate          # Allow volume truncation
>>   #Volume Use Duration = 14h          # Create a new volume for each backup
>>   Volume Retention = 35 days          # one month
>>   Maximum Volume Bytes = 50G          # Limit volume size to something reasonable
>>   Maximum Volumes = 100               # Limit number of Volumes in Pool
>>   Label Format = "oradev02-weekly-vol-"    # Auto label
>>   Cache Retention = 1 days            # Cloud specific (delete local cache after one day)
>> }
>>
>> Pool {
>>   Name = oradev02-daily-pool
>>   Pool Type = Backup
>>   Recycle = yes                       # Bacula can automatically recycle Volumes
>>   AutoPrune = no                      # Prune expired volumes (catalog job handles this)
>>   Action On Purge = Truncate          # Allow volume truncation
>>   #Volume Use Duration = 14h          # Create a new volume for each backup
>>   Volume Retention = 1 days           # for testing; raise to 5 days (one week of dailies) in production
>>   Maximum Volume Bytes = 50G          # Limit volume size to something reasonable
>>   Maximum Volumes = 100               # Limit number of Volumes in Pool
>>   Label Format = "oradev02-daily-vol-"     # Auto label
>>   Cache Retention = 1 days            # Cloud specific (delete local cache after one day)
>> }
>>
>>
>
>
_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
