On Tuesday, November 14, 2017 at 8:03:17 PM UTC+1, Anthony Melentev wrote:
> On Thursday, October 26, 2017 at 21:16:46 UTC+5, Tim Banchi wrote:
> > Dear bareos-user,
> >
> > I'm at a loss trying to create a working VirtualFull job after Always
> > Incremental and Consolidate. I have read the documentation and many forum
> > posts here, but the VirtualFull job always picks the wrong storage: it
> > should get different read and write storages.
> >
> > Always Incremental and Consolidate themselves work as expected (two devices
> > on one storage daemon; I read the chapters on multiple storage devices and
> > concurrent disk jobs, so I think that part is fine).
> >
> > My planned setup:
> > Always incremental and consolidate to local disk on bareos director server
> > (pavlov). A VirtualFull backup to tape on another server/storage daemon
> > (delaunay).
> >
> > I always get:
> > ...
> > 2017-10-26 17:24:13 pavlov-dir JobId 269: Start Virtual Backup JobId 269,
> > Job=pavlov_sys_ai_vf.2017-10-26_17.24.11_04
> > 2017-10-26 17:24:13 pavlov-dir JobId 269: Consolidating JobIds
> > 254,251,252,255,256,257
> > 2017-10-26 17:24:13 pavlov-dir JobId 269: Bootstrap records written to
> > /var/lib/bareos/pavlov-dir.restore.1.bsr
> > 2017-10-26 17:24:14 delaunay-sd JobId 269: Fatal error: Device reservation
> > failed for JobId=269:
> > 2017-10-26 17:24:14 pavlov-dir JobId 269: Fatal error:
> > Storage daemon didn't accept Device "pavlov-file-consolidate" because:
> > 3924 Device "pavlov-file-consolidate" not in SD Device resources or no
> > matching Media Type.
> > 2017-10-26 17:24:14 pavlov-dir JobId 269: Error: Bareos pavlov-dir 16.2.4
> > (01Jul16):
> > ...
> >
> > While a Consolidate-triggered VirtualFull job is successful:
> > ....
> > Using Catalog "MyCatalog"
> > 2017-10-26 13:51:39 pavlov-dir JobId 254: Start Virtual Backup JobId 254,
> > Job=pavlov_sys_ai.2017-10-26_13.51.37_40
> > 2017-10-26 13:51:39 pavlov-dir JobId 254: Consolidating JobIds
> > 248,245,246,250
> > 2017-10-26 13:51:39 pavlov-dir JobId 254: Bootstrap records written to
> > /var/lib/bareos/pavlov-dir.restore.4.bsr
> > 2017-10-26 13:51:40 pavlov-dir JobId 254: Using Device "pavlov-file" to
> > read.
> > 2017-10-26 13:51:40 pavlov-dir JobId 254: Using Device
> > "pavlov-file-consolidate" to write.
> > 2017-10-26 13:51:40 pavlov-sd JobId 254: Ready to read from volume
> > "ai_consolidate-0031" on device "pavlov-file" (/mnt/xyz).
> > 2017-10-26 13:51:40 pavlov-sd JobId 254: Volume "ai_consolidate-0023"
> > previously written, moving to end of data.
> > 2017-10-26 13:51:40 pavlov-sd JobId 254: Ready to append to end of Volume
> > "ai_consolidate-0023" size=7437114
> > 2017-10-26 13:51:40 pavlov-sd JobId 254: Forward spacing Volume
> > "ai_consolidate-0031" to file:block 0:215.
> > 2017-10-26 13:51:40 pavlov-sd JobId 254: End of Volume at file 0 on device
> > "pavlov-file" (/mnt/xyz), Volume "ai_consolidate-0031"
> > 2017-10-26 13:51:40 pavlov-sd JobId 254: Ready to read from volume
> > "ai_inc-0030" on device "pavlov-file" (/mnt/xyz).
> > 2017-10-26 13:51:40 pavlov-sd JobId 254: Forward spacing Volume
> > "ai_inc-0030" to file:block 0:1517550.
> > 2017-10-26 13:51:40 pavlov-sd JobId 254: Elapsed time=00:00:01, Transfer
> > rate=7.128 M Bytes/second
> > 2017-10-26 13:51:40 pavlov-dir JobId 254: Joblevel was set to joblevel of
> > first consolidated job: Full
> > 2017-10-26 13:51:41 pavlov-dir JobId 254: Bareos pavlov-dir 16.2.4
> > (01Jul16):
> > Build OS: x86_64-pc-linux-gnu ubuntu Ubuntu 16.04 LTS
> > JobId: 254
> > Job: pavlov_sys_ai.2017-10-26_13.51.37_40
> > Backup Level: Virtual Full
> > Client: "pavlov-fd" 16.2.4 (01Jul16)
> > x86_64-pc-linux-gnu,ubuntu,Ubuntu 16.04 LTS,xUbuntu_16.04,x86_64
> > FileSet: "linux_system" 2017-10-19 16:11:21
> > Pool:                   "disk_ai_consolidate" (From Job Pool's NextPool resource)
> > Catalog:                "MyCatalog" (From Client resource)
> > Storage:                "pavlov-file-consolidate" (From Storage from Pool's NextPool resource)
> > Scheduled time: 26-Oct-2017 13:51:37
> > Start time: 26-Oct-2017 13:48:10
> > End time: 26-Oct-2017 13:48:11
> > Elapsed time: 1 sec
> > Priority: 10
> > SD Files Written: 138
> > SD Bytes Written: 7,128,227 (7.128 MB)
> > Rate: 7128.2 KB/s
> > Volume name(s): ai_consolidate-0023
> > Volume Session Id: 18
> > Volume Session Time: 1509016221
> > Last Volume Bytes: 14,582,726 (14.58 MB)
> > SD Errors: 0
> > SD termination status: OK
> > Accurate: yes
> > Termination: Backup OK
> >
> > 2017-10-26 13:51:41 pavlov-dir JobId 254: purged JobIds 248,245,246,250 as
> > they were consolidated into Job 254
> > You have messages.
> > ....
> >
> >
> > I tried different things: adding and removing the Storage attribute from
> > the jobs, etc. I think I followed the examples in the manual and online,
> > but alas, the job never gets the correct read storage. AFAIK the pools
> > (and not the jobs) should define the different storages. Also, the Media
> > Type is different (File vs. LTO), so the job should pick the right
> > storage, but it just does not.
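> > For reference, this is my understanding of how the storage selection
> > should be resolved for the VirtualFull, assuming the NextPool chain works
> > the way I read the docs (it evidently does not in practice):
> >
> >   read side:  Job Pool  = disk_ai_consolidate -> Storage = pavlov-file-consolidate  (on pavlov-sd)
> >   write side: Next Pool = tape_automated      -> Storage = delaunay_HP_G2_Autochanger (on delaunay-sd)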
> >
> > my configuration:
> >
> > A) director pavlov (to disk storage daemon + director)
> > 1) template for always incremental jobs
> > JobDefs {
> > Name = "default_ai"
> > Type = Backup
> > Level = Incremental
> > Client = pavlov-fd
> > Storage = pavlov-file
> > Messages = Standard
> > Priority = 10
> > Write Bootstrap = "|/usr/local/bin/bareos-messages.sh \"[Bootstrap] %d: %j (jobid %i)\" %i \"it@XXXXX\" %c-%n"
> > Maximum Concurrent Jobs = 7
> >
> > #always incremental config
> > Pool = disk_ai
> > Incremental Backup Pool = disk_ai
> > Full Backup Pool = disk_ai_consolidate
> > Accurate = yes
> > Always Incremental = yes
> > Always Incremental Job Retention = 20 seconds #7 days
> > Always Incremental Keep Number = 2 #7
> > Always Incremental Max Full Age = 1 minutes # 14 days
> > }
> >
> >
> > 2) template for VirtualFull jobs; should read from storage pavlov and
> > write to storage delaunay:
> > JobDefs {
> > Name = "default_ai_vf"
> > Type = Backup
> > Level = VirtualFull
> > Messages = Standard
> > Priority = 13
> > Accurate = yes
> > Pool = disk_ai_consolidate
> >
> > #I tried different settings below, nothing worked
> > #Full Backup Pool = disk_ai_consolidate
> > #Virtual Full Backup Pool = tape_automated
> > #Incremental Backup Pool = disk_ai
> > #Next Pool = tape_automated
> > #Storage = delaunay_HP_G2_Autochanger
> > #Storage = pavlov-file
> >
> > # run after Consolidate
> > Run Script {
> > console = "update jobid=%i jobtype=A"
> > Runs When = After
> > Runs On Client = No
> > Runs On Failure = No
> > }
> >
> > Write Bootstrap = "|/usr/local/bin/bareos-messages.sh \"[Bootstrap] %d: %j (jobid %i)\" %i \"it@XXXXX\" %c-%n"
> > }
> >
> > 3) consolidate job
> > Job {
> > Name = ai_consolidate
> > Type = Consolidate
> > Accurate = yes
> > Max Full Consolidations = 1
> > Client = pavlov-fd         # value which should be ignored by Consolidate job
> > FileSet = "none"           # value which should be ignored by Consolidate job
> > Pool = disk_ai_consolidate # value which should be ignored by Consolidate job
> > Incremental Backup Pool = disk_ai_consolidate
> > Full Backup Pool = disk_ai_consolidate
> > # JobDefs = DefaultJob
> > # Level = Incremental
> > Schedule = "ai_consolidate"
> > # Storage = pavlov-file-consolidate # commented out for VirtualFull-to-tape testing
> > Messages = Standard
> > Priority = 10
> > Write Bootstrap = "|/usr/local/bin/bareos-messages.sh \"[Bootstrap] %d: %j (jobid %i)\" %i \"it@XXXXXX\" %c-%n"
> > }
> >
> > 4) always incremental job for client pavlov (works)
> > Job {
> > Name = "pavlov_sys_ai"
> > JobDefs = "default_ai"
> > Client = "pavlov-fd"
> > FileSet = linux_system
> > Schedule = manual
> > }
> >
> >
> > 5) virtualfull job for pavlov (doesn't work)
> > Job {
> > Name = "pavlov_sys_ai_vf"
> > JobDefs = "default_ai_vf"
> > Client = "pavlov-fd"
> > FileSet = linux_system
> > Schedule = manual
> > #Storage = pavlov-file
> > #Next Pool = tape_automated #doesn't matter whether commented or not
> > }
> >
> > 6) pool always incremental
> > Pool {
> > Name = disk_ai
> > Pool Type = Backup
> > Recycle = yes              # Bareos can automatically recycle Volumes
> > AutoPrune = yes            # Prune expired volumes
> > Volume Retention = 4 weeks
> > Maximum Volume Bytes = 30G # Limit Volume size to something reasonable
> > Maximum Volumes = 200      # Limit number of Volumes in Pool
> > Label Format = "ai_inc-"   # Volumes will be labeled "ai_inc-<volume-id>"
> > Volume Use Duration = 23h
> > Storage = pavlov-file
> > Next Pool = disk_ai_consolidate
> > }
> >
> > 7) pool always incremental consolidate
> > Pool {
> > Name = disk_ai_consolidate
> > Pool Type = Backup
> > Recycle = yes              # Bareos can automatically recycle Volumes
> > AutoPrune = yes            # Prune expired volumes
> > Volume Retention = 4 weeks
> > Maximum Volume Bytes = 30G # Limit Volume size to something reasonable
> > Maximum Volumes = 200      # Limit number of Volumes in Pool
> > Label Format = "ai_consolidate-" # Volumes will be labeled "ai_consolidate-<volume-id>"
> > Volume Use Duration = 23h
> > Storage = pavlov-file-consolidate
> > Next Pool = tape_automated
> > }
> >
> > 8) pool tape_automated (for virtualfull jobs to tape)
> > Pool {
> > Name = tape_automated
> > Pool Type = Backup
> > Storage = delaunay_HP_G2_Autochanger
> > Recycle = yes # Bareos can automatically recycle Volumes
> > AutoPrune = yes # Prune expired volumes
> > Recycle Oldest Volume = yes
> > RecyclePool = Scratch
> > Maximum Volume Bytes = 0
> > Volume Retention = 4 weeks
> > Cleaning Prefix = "CLN"
> > Catalog Files = yes
> > }
> >
> > 9) 1st storage device for disk backup (writes always incremental jobs +
> > other normal jobs)
> > Storage {
> > Name = pavlov-file
> > Address = pavlov.XX # N.B. Use a fully qualified name here (do not use "localhost").
> > Password = "X"
> > Maximum Concurrent Jobs = 1
> > Device = pavlov-file
> > Media Type = File
> > TLS Certificate = X
> > TLS Key = X
> > TLS CA Certificate File = X
> > TLS DH File = X
> > TLS Enable = X
> > TLS Require = X
> > TLS Verify Peer = X
> > TLS Allowed CN = pavlov.X
> > }
> >
> > 10) 2nd storage device for disk backup (consolidates AI jobs)
> > Storage {
> > Name = pavlov-file-consolidate
> > Address = pavlov.X # N.B. Use a fully qualified name here (do not use "localhost").
> > Password = "X"
> > Maximum Concurrent Jobs = 1
> > Device = pavlov-file-consolidate
> > Media Type = File
> > TLS Certificate = X
> > TLS Key = X
> > TLS CA Certificate File = X
> > TLS DH File = X
> > TLS Enable = yes
> > TLS Require = yes
> > TLS Verify Peer = yes
> > TLS Allowed CN = pavlov.X
> > }
> >
> > 11) 3rd storage device for tape backup
> > Storage {
> > Name = delaunay_HP_G2_Autochanger
> > Address = "delaunay.XX"
> > Password = "X"
> > Device = "HP_G2_Autochanger"
> > Media Type = LTO
> > Autochanger = yes
> > TLS Certificate = X
> > TLS Key = X
> > TLS CA Certificate File = X
> > TLS DH File = X
> > TLS Enable = yes
> > TLS Require = yes
> > TLS Verify Peer = yes
> > TLS Allowed CN = delaunay.X
> > }
> >
> >
> > B) storage daemon pavlov (to disk)
> > 1) to disk storage daemon
> >
> > Storage {
> > Name = pavlov-sd
> > Maximum Concurrent Jobs = 20
> >
> > # remove comment from "Plugin Directory" to load plugins from the specified directory.
> > # if "Plugin Names" is defined, only the specified plugins will be loaded,
> > # otherwise all storage plugins (*-sd.so) from the "Plugin Directory".
> > #
> > # Plugin Directory = /usr/lib/bareos/plugins
> > # Plugin Names = ""
> > TLS Certificate = X
> > TLS Key = X
> > TLS CA Certificate File = X
> > TLS DH File = X
> > TLS Enable = yes
> > TLS Require = yes
> > TLS Verify Peer = yes
> > TLS Allowed CN = pavlov.X
> > TLS Allowed CN = edite.X
> > TLS Allowed CN = delaunay.X
> > }
> >
> > 2) to disk device (AI + others)
> > Device {
> > Name = pavlov-file
> > Media Type = File
> > Maximum Open Volumes = 1
> > Maximum Concurrent Jobs = 1
> > Archive Device = /mnt/xyz #(same for both)
> > LabelMedia = yes; # lets Bareos label unlabeled media
> > Random Access = yes;
> > AutomaticMount = yes; # when device opened, read it
> > RemovableMedia = no;
> > AlwaysOpen = no;
> > Description = "File device. A connecting Director must have the same Name and MediaType."
> > }
> >
> > 3) consolidate to disk
> > Device {
> > Name = pavlov-file-consolidate
> > Media Type = File
> > Maximum Open Volumes = 1
> > Maximum Concurrent Jobs = 1
> > Archive Device = /mnt/xyz #(same for both)
> > LabelMedia = yes; # lets Bareos label unlabeled media
> > Random Access = yes;
> > AutomaticMount = yes; # when device opened, read it
> > RemovableMedia = no;
> > AlwaysOpen = no;
> > Description = "File device. A connecting Director must have the same Name and MediaType."
> > }
> >
> > C) to tape storage daemon (different server)
> > 1) allowed director
> > Director {
> > Name = pavlov-dir
> > Password = "[md5]X"
> > Description = "Director, who is permitted to contact this storage daemon."
> > TLS Certificate = X
> > TLS Key = /X
> > TLS CA Certificate File = X
> > TLS DH File = X
> > TLS Enable = yes
> > TLS Require = yes
> > TLS Verify Peer = yes
> > TLS Allowed CN = pavlov.X
> > }
> >
> >
> > 2) storage daemon config
> > Storage {
> > Name = delaunay-sd
> > Maximum Concurrent Jobs = 20
> > Maximum Network Buffer Size = 32768
> > # Maximum Network Buffer Size = 65536
> >
> > # remove comment from "Plugin Directory" to load plugins from the specified directory.
> > # if "Plugin Names" is defined, only the specified plugins will be loaded,
> > # otherwise all storage plugins (*-sd.so) from the "Plugin Directory".
> > #
> > # Plugin Directory = /usr/lib/bareos/plugins
> > # Plugin Names = ""
> > TLS Certificate = X
> > TLS Key = X
> > TLS DH File = X
> > TLS CA Certificate File = X
> > TLS Enable = yes
> > TLS Require = yes
> > TLS Verify Peer = yes
> > TLS Allowed CN = pavlov.X
> > TLS Allowed CN = edite.X
> > }
> >
> >
> > 3) autochanger config
> > Autochanger {
> > Name = "HP_G2_Autochanger"
> > Device = Ultrium920
> > Changer Device = /dev/sg5
> > Changer Command = "/usr/lib/bareos/scripts/mtx-changer %c %o %S %a %d"
> > }
> >
> > 4) device config
> > Device {
> > Name = "Ultrium920"
> > Media Type = LTO
> > Archive Device = /dev/st2
> > Autochanger = yes
> > LabelMedia = no
> > AutomaticMount = yes
> > AlwaysOpen = yes
> > RemovableMedia = yes
> > Maximum Spool Size = 50G
> > Spool Directory = /var/lib/bareos/spool
> > Maximum Block Size = 2097152
> > # Maximum Block Size = 4194304
> > Maximum Network Buffer Size = 32768
> > Maximum File Size = 50G
> > }
>
> I've faced the same problem, and after some research I found this reply
> from one of the developers:
> https://groups.google.com/d/msg/bareos-users/CKOO-Zd9CdE/D-thqZiyGFkJ
>
> So it is not a bug in the software; the feature is simply not implemented.
> Maybe it's worth mentioning in the documentation.
Hi Anthony,
thanks a lot. It's not mentioned in the documentation, and I wonder what the
reasons are that this is not possible. I will update my bug report.
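For anyone else hitting this: as a workaround I'm considering keeping the
VirtualFull on the local disk storage and moving it to tape with a Copy job
afterwards. This is an untested sketch (the job name is made up; the copy
target would come from disk_ai_consolidate's Next Pool = tape_automated):

Job {
  Name = "copy_vf_to_tape"            # hypothetical name
  Type = Copy
  Messages = Standard
  Client = pavlov-fd                  # required by the parser, ignored for Copy
  FileSet = "none"                    # required by the parser, ignored for Copy
  Pool = disk_ai_consolidate          # source pool on pavlov
  Selection Type = PoolUncopiedJobs   # copy every job not yet copied
}

I haven't verified whether copying between two different storage daemons
works in 16.2 either, so take this with a grain of salt.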