On Tuesday, November 7, 2017 at 10:54:25 AM UTC+1, Tim Banchi wrote:
> On Monday, November 6, 2017 at 6:12:20 PM UTC+1, Jon SCHEWE wrote:
> > On 11/6/17 8:40 AM, Tim Banchi wrote:
> > > On Monday, October 30, 2017 at 3:00:29 PM UTC+1, Jon SCHEWE wrote:
> > >> Your setup looks very close to mine. I am doing the same thing that
> > >> you want to do. My equivalent job is called "offsite", but it's the
> > >> same idea. The scripts that run before the jobs ensure the
> > >> appropriate USB drives are attached.
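A pre-flight check of that sort could look roughly like the sketch below. This is an illustration, not the poster's actual script; the mount point is an assumption:

```shell
#!/bin/sh
# Hypothetical sketch of a check like /etc/bareos/check-local-backup-disk.sh.
# is_mounted succeeds only when the given path is an active mount point.
is_mounted() {
  # The mount point appears as the second whitespace-separated field
  # of a /proc/mounts line, so match it with surrounding spaces.
  grep -qs " $1 " /proc/mounts
}

# With FailJobOnError = yes, exiting non-zero here fails the backup job:
#   is_mounted /mnt/bareos-file || { echo "backup disk missing" >&2; exit 1; }
```

Because the director runs it with `RunsOnClient = no`, the check happens on the director host, before any data is read from the client.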
> > >>
> > >> I set both Next Pool and Virtual Full Backup Pool, and that seems to work.
> > >>
> > >> JobDefs {
> > >>   Name = "AlwaysIncremental"
> > >>   Type = Backup
> > >>   Level = Incremental
> > >>   Schedule = "WeeklyCycle"
> > >>   Storage = File
> > >>   Messages = Standard
> > >>   Priority = 10
> > >>   Write Bootstrap = "/mnt/bareos-file/bootstrap/%c.bsr"
> > >>   Pool = AI-Incremental
> > >>   Full Backup Pool = AI-Consolidated                
> > >>
> > >>   Accurate = yes
> > >>   Always Incremental = yes
> > >>   Always Incremental Job Retention = 7 days
> > >>   Always Incremental Keep Number = 14
> > >>
> > >>   RunScript {
> > >>     RunsOnClient = no
> > >>     RunsWhen = Before
> > >>     FailJobOnError = yes
> > >>     Command = "/etc/bareos/check-local-backup-disk.sh"
> > >>   }
> > >> }
> > >>
> > >> JobDefs {
> > >>   Name = "DefaultJob"
> > >>   Type = Backup
> > >>   Level = Full
> > >>   Client = gemini-fd
> > >>   FileSet = "SelfTest"                     # selftest fileset (#13)
> > >>   Schedule = "WeeklyCycle"
> > >>   Storage = File
> > >>   Messages = Standard
> > >>   Pool = Full
> > >>   Priority = 10
> > >>   Write Bootstrap = "/mnt/bareos-file/bootstrap/%c.bsr"
> > >>
> > >>   RunScript {
> > >>     RunsOnClient = no
> > >>     RunsWhen = Before
> > >>     FailJobOnError = yes
> > >>     Command = "/etc/bareos/check-local-backup-disk.sh"
> > >>   }
> > >>
> > >> }
> > >> JobDefs {
> > >>   Name = "OffsiteJob"
> > >>   Type = Backup
> > >>   Level = VirtualFull
> > >>   Client = gemini-fd
> > >>   FileSet = "SelfTest"                     # selftest fileset (#13)
> > >>   Schedule = "OffsiteSchedule"
> > >>   Storage = Offsite
> > >>   Messages = Standard
> > >>   Pool = AI-Consolidated
> > >>   Incremental Backup Pool = AI-Incremental
> > >>   Next Pool = Offsite
> > >>   Virtual Full Backup Pool = Offsite
> > >>   Priority = 10
> > >>   Accurate = yes
> > >>   Write Bootstrap = "/mnt/bareos-file/bootstrap/%c.bsr"
> > >>
> > >>   RunScript {
> > >>     RunsOnClient = no
> > >>     RunsWhen = Before
> > >>     FailJobOnError = yes
> > >>     Command = "/etc/bareos/check-offsite-backup-disk.sh"
> > >>   }
> > >>   RunScript {
> > >>     console = "update jobid=%i jobtype=A"   # mark the VirtualFull as an archive job so Consolidate skips it
> > >>     RunsOnClient = no
> > >>     RunsOnFailure = No
> > >>     RunsWhen = After
> > >>     FailJobOnError = yes
> > >>   }
> > >>  
> > >> }
> > >>
> > >> Job {
> > >>   Name = "backup-gemini-fd"
> > >>   JobDefs = "AlwaysIncremental"
> > >>   Client = "gemini-fd"
> > >>   FileSet = "gemini-all"
> > >>   ClientRunBeforeJob = "/etc/bareos/before-backup.sh"
> > >> }
> > >>
> > >> Job {
> > >>   Name = "offsite-gemini-fd"
> > >>   JobDefs = "OffsiteJob"
> > >>   Client = "gemini-fd"
> > >>   FileSet = "gemini-all"
> > >> }
> > >> Job {
> > >>   Client = gemini-fd
> > >>   Name = "Consolidate"
> > >>   Type = "Consolidate"
> > >>   Accurate = "yes"
> > >>   JobDefs = "DefaultJob"
> > >>   FileSet = "LinuxAll"
> > >>
> > >>   Max Full Consolidations = 1
> > >> }
> > >>
> > >> Storage {
> > >>   Name = File
> > >>   Address = gemini                # N.B. use a fully qualified name here (do not use "localhost").
> > >>   Password = "XXXXXXXXX"
> > >>   Device = FileStorage
> > >>   Media Type = File
> > >>
> > >>   # TLS setup
> > >>   ...
> > >> }
> > >>
> > >>
> > >> Storage {
> > >>   Name = Offsite
> > >>   Address = gemini                # N.B. use a fully qualified name here (do not use "localhost").
> > >>   Password = "XXXXXX"
> > >>   Device = OffsiteStorage
> > >>   Media Type = File
> > >>
> > >>   # TLS setup
> > >>   ...
> > >> }
> > >>
> > > Hi Jon,
> > >
> > > Thank you, but unfortunately it doesn't work; same problem as before.
> > >
> > > Two questions:
> > > 1) You use the same storage/device both for always incremental and for
> > > consolidate. According to the manual (chapter 23.3.3), at least two
> > > storages are needed. How does that work out in practice?
> > I'm not sure, it just seems to be working.
> > > 2) Could you also post your pool configurations? I tried to configure
> > > Storage and Next Pool in the jobs and comment them out in the pools,
> > > thinking this might work out better. But then I get the error message:
> > > No Next Pool specification found in Pool "disk_ai_consolidate". As soon
> > > as I add the Next Pool (being offsite/tape_automated), I get the same
> > > error message again ...
> > >
> > 
> > Pool {
> >   Name = AI-Consolidated
> >   Pool Type = Backup
> >   Recycle = yes
> >   Auto Prune = yes
> >   Volume Retention = 360 days
> >   Maximum Volume Bytes = 50G
> >   Label Format = "AI-Consolidated-"
> >   Volume Use Duration = 23h
> >   Storage = File
> >   Action On Purge = Truncate
> >   Next Pool = Offsite
> > }
> > 
> > Pool {
> >   Name = AI-Incremental
> >   Pool Type = Backup
> >   Recycle = yes
> >   Auto Prune = yes
> >   Volume Retention = 360 days
> >   Maximum Volume Bytes = 50G
> >   Label Format = "AI-Incremental-"
> >   Volume Use Duration = 23h
> >   Storage = File
> >   Next Pool = AI-Consolidated
> >   Action On Purge = Truncate
> > }
> > 
> > Pool {
> >   Name = Full
> >   Pool Type = Backup
> >   Recycle = yes                       # Bareos can automatically recycle Volumes
> >   AutoPrune = yes                     # Prune expired volumes
> >   Volume Retention = 365 days         # How long should the Full Backups be kept? (#06)
> >   Maximum Volume Bytes = 50G          # Limit Volume size to something reasonable
> >   Maximum Volumes = 100               # Limit number of Volumes in Pool
> >   Label Format = "Full-"              # Volumes will be labeled "Full-<volume-id>"
> >   Storage = File
> >   Action On Purge = Truncate
> > }
> > 
> > Pool {
> >   Name = Offsite
> >   Pool Type = Backup
> >   Recycle = yes                       # Bareos can automatically recycle Volumes
> >   Next Pool = Offsite
> >   AutoPrune = yes                     # Prune expired volumes
> >   Volume Retention = 60 days          # How long should the offsite backups be kept? (#06)
> >   Maximum Volume Bytes = 50G          # Limit Volume size to something reasonable
> >   Maximum Volumes = 100               # Limit number of Volumes in Pool
> >   Label Format = "Offsite-"           # Volumes will be labeled "Offsite-<volume-id>"
> >   Storage = Offsite
> >   Action On Purge = Truncate
> > }
> > 
> 
> 
> Thank you. It still doesn't work; same error message. However, when I
> change the storage daemon to the same machine where the consolidated + AI
> backups are stored, it works! So I assume this is a bug; I will file a
> bug report.

Voila, bug report is here: https://bugs.bareos.org/view.php?id=874
> 
> 
> > Something I've realized since I set this up is that I may need to change
> > the recycling of my offsite volumes so they are deleted rather than
> > reused when they are recycled. This is because the drive that is
> > currently local may not have the file that Bareos is looking for to
> > append to.
> > 
> Could this be solved with a shorter Volume Use Duration? 
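It might. A shorter use duration in the Offsite pool would look like the fragment below (the value is illustrative; whether it fully avoids appends to a rotated-out disk depends on the rotation interval):

```
Pool {
  Name = Offsite
  ...
  Volume Use Duration = 23h   # close each volume after a day, so Bareos
                              # labels a new one instead of appending to a
                              # volume that may be on the rotated-out disk
}
```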
> > 
> > -- 
> > Research Scientist
> > Raytheon BBN Technologies
> > 5775 Wayzata Blvd, Ste 630
> > Saint Louis Park, MN, 55416
> > Office: 952-545-5720

-- 
You received this message because you are subscribed to the Google Groups 
"bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
For more options, visit https://groups.google.com/d/optout.
