I want to use concurrent jobs with spool directories for backing up slow roaming clients during the 
day. This works fine: the jobs take a long time, but each writes to its own 
spool.

My challenge is that we have on-disk volumes and a single tape drive that jobs 
are copied to over time.

When consolidate jobs run, two are started at once, and one always fails when 
it needs to swap to the tape drive: the drive is busy, and the job fails rather 
than blocking on it. I already have that device set to only 1 concurrent job. 
Copy jobs behave as one would expect, running serially because they wait for 
the drive. It's consolidate jobs that, when reading from multiple pools (the 
disk pool or the tape copy pool), don't realize this and don't pause when they 
need to swap to a different media type.
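For reference, the relevant storage daemon device resources look roughly like 
this. This is a sketch reconstructed from the device names, paths, and media 
types in the log below, not my actual configuration:

# bareos-sd.conf (illustrative sketch only)
Device {
  Name = Tand-LTO5
  Media Type = LTO5
  Archive Device = /dev/nst1
  Maximum Concurrent Jobs = 1   # already limited to one job, yet consolidate jobs still collide
}

Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /mnt/bacula
  # disk devices allow multiple concurrent jobs so clients can spool during the day
}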

Job log below:

02-Dec 22:06 myth-dir JobId 19285: Start Virtual Backup JobId 19285, 
Job=sch-hp-desktop-Users-No-Pictures.2019-12-02_22.06.40_50
02-Dec 22:06 myth-dir JobId 19285: Consolidating JobIds 
18971,18637,18647,18497,18542,18655,18665,18720,18757,18793,18830,18873,18891,18901
02-Dec 22:06 myth-dir JobId 19285: Bootstrap records written to 
/var/lib/bareos/myth-dir.restore.57.bsr
02-Dec 22:06 myth-dir JobId 19285: Connected Storage daemon at <snip>:9103, 
encryption: TLS_CHACHA20_POLY1305_SHA256
02-Dec 22:06 myth-dir JobId 19285: Using Device "FileStorage" to read.
02-Dec 22:06 myth-dir JobId 19285: Using Device "FileStorage3" to write.
02-Dec 22:06 myth-sd JobId 19285: Ready to read from volume 
"AI-Consolidated-1775" on device "FileStorage" (/mnt/bacula).
02-Dec 22:06 myth-sd JobId 19285: Recycled volume "AI-Consolidated-1591" on 
device "FileStorage3" (/mnt/bacula), all previous data lost.
02-Dec 22:06 myth-sd JobId 19285: Spooling data ...
02-Dec 22:06 myth-sd JobId 19285: Forward spacing Volume "AI-Consolidated-1775" 
to file:block 0:3001281070.
02-Dec 22:07 myth-sd JobId 19285: End of Volume at file 1 on device 
"FileStorage" (/mnt/bacula), Volume "AI-Consolidated-1775"
02-Dec 22:07 myth-sd JobId 19285: stored/acquire.cc:151 Changing read device. 
Want Media Type="LTO5" have="File"
 device="FileStorage" (/mnt/bacula)
02-Dec 22:07 myth-sd JobId 19285: Releasing device "FileStorage" (/mnt/bacula).
02-Dec 22:07 myth-sd JobId 19285: Fatal error: stored/acquire.cc:205 No 
suitable device found to read Volume "DA6512L5"
02-Dec 22:07 myth-sd JobId 19285: Fatal error: stored/mount.cc:965 Cannot open 
Dev="FileStorage" (/mnt/bacula), Vol=DA6512L5
02-Dec 22:07 myth-sd JobId 19285: End of all volumes.
02-Dec 22:07 myth-sd JobId 19285: Fatal error: stored/mac.cc:761 Fatal append 
error on device "FileStorage3" (/mnt/bacula): ERR=
02-Dec 22:07 myth-sd JobId 19285: Elapsed time=00:00:12, Transfer rate=81.05 M 
Bytes/second
02-Dec 22:07 myth-sd JobId 19285: Releasing device "FileStorage3" (/mnt/bacula).
02-Dec 22:07 myth-sd JobId 19285: Releasing device "Tand-LTO5" (/dev/nst1).
02-Dec 22:07 myth-dir JobId 19285: Error: Bareos myth-dir 18.2.5 (30Jan19):


Thoughts?  Is this a current limitation, or a configuration error on my part? 
If I force only one concurrent job at the director level (losing the ability to 
back up multiple clients at once), everything works just fine.


Brock Palen
1 (989) 277-6075
bro...@mlds-networks.com
www.mlds-networks.com
Websites, Linux, Hosting, Joomla, Consulting
