Re: [Bacula-users] multiple concurrent tape jobs

2016-02-14 Thread Dan Langille
> On Feb 8, 2016, at 5:42 PM, Dan Langille  wrote:
> 
> Hello,
> 
> I am working with an LTO-4 tape library.  It has two drives but I plan to 
> write to only one for backups.
> 
> I will backup to disk first, on another SD.  Later, I will copy the jobs to 
> the tape library on this new SD
> which is on another server.  The copy jobs will be spooled to local SSD 
> before being written to tape.
> 
>re http://bacula-users.narkive.com/QRkTVfEz/typical-tape-write-performance
> 
> Now what I'm thinking of is streaming multiple concurrent jobs to a single 
> drive.
> 
> Sure, downside on restore is interleaving of blocks
> 
> I don't see any downsides to going down this path.  I have yet to run any 
> copy jobs to the new library,
> but it may be ready this week.
> 
> Comments?

There have been good suggestions, but I'm posting below my original email to 
maintain context.

I was just reading: 
http://www.bacula.org/7.4.x-manuals/en/main/Data_Spooling.html

Main points:

With concurrent tape jobs, only one job will write to tape at a time:

• When Bacula begins despooling data spooled to disk, it takes exclusive use of 
the tape. This has the major advantage that in running multiple simultaneous 
jobs at the same time, the blocks of several jobs will not be intermingled.

Multiple jobs can spool concurrently:

• If you are running multiple simultaneous jobs, Bacula will continue spooling 
other jobs while one is despooling to tape, provided there is sufficient spool 
file space.
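For context, these behaviors hang off a handful of directives from the Data Spooling chapter linked above. A sketch only: the resource names, device node, and sizes below are my inventions for illustration, not Dan's actual configuration.

```conf
# bacula-sd.conf -- where the SD spools, and how much space it may use
Device {
  Name = LTO_0
  Media Type = LTO-4
  Archive Device = /dev/nsa0          # hypothetical tape device node
  Spool Directory = /spool/bacula     # fast local disk or SSD
  Maximum Spool Size = 400GB          # total spool space for this device
  Maximum Job Spool Size = 100GB      # cap per running job
}

# bacula-dir.conf -- enable spooling for the job that writes to tape
Job {
  Name = "CopyToTape"
  Type = Copy
  Spool Data = yes    # spool to disk, then despool with exclusive
                      # use of the drive, as the manual describes
}
```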

I ran my first copy job this morning.

* 18 minutes to copy 7.075 GB from one SD to the other, spooling onto SSD.
* 7 minutes to spool 577,884,981 bytes back to the Director.
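A quick sanity check on those numbers (my arithmetic, assuming GB = 10^9 bytes):

```python
# Effective SD-to-SD copy rate from the figures above.
copy_bytes = 7.075e9           # copied from one SD to the other
copy_secs = 18 * 60            # 18 minutes

rate_mb_s = copy_bytes / copy_secs / 1e6
print(f"effective copy rate: {rate_mb_s:.1f} MB/s")   # roughly 6.5 MB/s

# LTO-4 native speed is ~120 MB/s, so a single stream this slow would
# leave an unspooled drive badly starved (shoe-shining).
print(f"fraction of LTO-4 native speed: {rate_mb_s / 120:.1%}")
```

At this rate, spooling to SSD before despooling at full drive speed looks well justified.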

14-Feb 16:30 bacula-dir JobId 231106: Warning: FileSet MD5 digest not found.
14-Feb 16:30 bacula-dir JobId 231106: The following 1 JobId was chosen to be 
copied: 225267
14-Feb 16:30 bacula-dir JobId 231106: Copying using JobId=225267 
Job=supernews_FP_msgs.2015-12-06_03.05.35_25
14-Feb 16:30 bacula-dir JobId 231106: Bootstrap records written to 
/usr/local/bacula/working/bacula-dir.restore.26.bsr
14-Feb 16:30 bacula-dir JobId 231106: Start Copying JobId 231106, 
Job=CopyToTape-Full-Just-One-tape-01.2016-02-14_16.30.41_38
14-Feb 16:30 bacula-dir JobId 231106: Using Device "vDrive-0" to read.
14-Feb 16:30 bacula-dir JobId 231107: Using Device "LTO_0" to write.
14-Feb 16:30 crey-sd JobId 231106: Ready to read from volume "FullAuto-3337" on 
file device "vDrive-0" (/usr/local/bacula/volumes).
14-Feb 16:30 crey-sd JobId 231106: Forward spacing Volume "FullAuto-3337" to 
file:block 0:1957810277.
14-Feb 16:31 crey-sd JobId 231106: End of Volume at file 1 on device "vDrive-0" 
(/usr/local/bacula/volumes), Volume "FullAuto-3337"
14-Feb 16:31 crey-sd JobId 231106: Ready to read from volume "FullAuto-3340" on 
file device "vDrive-0" (/usr/local/bacula/volumes).
14-Feb 16:31 crey-sd JobId 231106: Forward spacing Volume "FullAuto-3340" to 
file:block 0:64728.
14-Feb 16:31 crey-sd JobId 231106: End of Volume at file 1 on device "vDrive-0" 
(/usr/local/bacula/volumes), Volume "FullAuto-3340"
14-Feb 16:31 crey-sd JobId 231106: Ready to read from volume "FullAuto-3341" on 
file device "vDrive-0" (/usr/local/bacula/volumes).
14-Feb 16:31 crey-sd JobId 231106: Forward spacing Volume "FullAuto-3341" to 
file:block 0:216.
14-Feb 16:33 crey-sd JobId 231106: End of Volume at file 1 on device "vDrive-0" 
(/usr/local/bacula/volumes), Volume "FullAuto-3341"
14-Feb 16:33 crey-sd JobId 231106: Ready to read from volume "FullAuto-3351" on 
file device "vDrive-0" (/usr/local/bacula/volumes).
14-Feb 16:33 crey-sd JobId 231106: Forward spacing Volume "FullAuto-3351" to 
file:block 0:64728.
14-Feb 16:34 crey-sd JobId 231106: End of Volume at file 1 on device "vDrive-0" 
(/usr/local/bacula/volumes), Volume "FullAuto-3351"
14-Feb 16:34 crey-sd JobId 231106: Ready to read from volume "FullAuto-3357" on 
file device "vDrive-0" (/usr/local/bacula/volumes).
14-Feb 16:34 crey-sd JobId 231106: Forward spacing Volume "FullAuto-3357" to 
file:block 0:216.
14-Feb 16:36 crey-sd JobId 231106: End of Volume at file 1 on device "vDrive-0" 
(/usr/local/bacula/volumes), Volume "FullAuto-3357"
14-Feb 16:36 crey-sd JobId 231106: Ready to read from volume "FullAuto-3370" on 
file device "vDrive-0" (/usr/local/bacula/volumes).
14-Feb 16:36 crey-sd JobId 231106: Forward spacing Volume "FullAuto-3370" to 
file:block 0:64728.
14-Feb 16:37 crey-sd JobId 231106: End of Volume at file 1 on device "vDrive-0" 
(/usr/local/bacula/volumes), Volume "FullAuto-3370"
14-Feb 16:37 crey-sd JobId 231106: Ready to read from volume "FullAuto-3373" on 
file device "vDrive-0" (/usr/local/bacula/volumes).
14-Feb 16:37 crey-sd JobId 231106: Forward spacing Volume "FullAuto-3373" to 
file:block 0:64728.
14-Feb 16:38 crey-sd JobId 231106: End of Volume at file 1 on device "vDrive-0" 
(/usr/local/bacula/volumes), Volume "FullAuto-3373"
14-Feb 16:38 crey-sd JobId 231106: Ready to read from volume "FullAuto-3382" 

Re: [Bacula-users] multiple concurrent tape jobs

2016-02-11 Thread Dan Langille
> On Feb 9, 2016, at 3:44 AM, Ana Emília M. Arruda  
> wrote:
> 
> Hello Heitor and Dan,
> 
> When a job is despooling (disk->tape), its file daemon waits. It will begin 
> spooling again only if necessary, i.e., if the spool space the job may use 
> is smaller than the amount of data to be backed up for that client. 
> Meanwhile, the other file daemons keep spooling to disk.
> 
> So IMHO, the larger the spool area you can give each job, the less 
> interleaving you will have.
> 
> It also depends on whether your jobs have very different total backup 
> sizes. To minimize interleaving, I would set Maximum Spool Size to the 
> largest total backup size any job can have; if I did not have enough disk 
> for that, I would fall back to the average total backup size per job.
> 
> In any case, you will speed up your backups, since the network delay for 
> data traveling from client to storage is greater than the disk-to-tape 
> transfer speed (assuming your spool area data does not travel over the 
> network).

In my case, this will be an SD to SD copy job.  No FD involved. I backup to 
disk first.  Then I use copy jobs to move from the disk backup on server A to 
the tape backup on server B. Server B has 500GB of SSD (in a mirror).

--
Site24x7 APM Insight: Get Deep Visibility into Application Performance
APM + Mobile APM + RUM: Monitor 3 App instances at just $35/Month
Monitor end-to-end web transactions and take corrective actions now
Troubleshoot faster and improve end-user experience. Signup Now!
http://pubads.g.doubleclick.net/gampad/clk?id=272487151=/4140
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] multiple concurrent tape jobs

2016-02-11 Thread compdoc
> Now what I'm thinking of is streaming multiple concurrent jobs to a single
drive

 

I do that. On my LAN, I need to back up config files, MySQL, etc. on a few
CentOS and Ubuntu servers, and also my Win7 Pro computer, which has large
SSDs and takes hours to back up to an LTO-4 drive on my 1G network. 

 

Trying to schedule all that is a nightmare, so I just do it all at once. The
servers are only a few gigs each, and without spooling they are interleaved
with the Windows backup. But that's fine because they are located at the
beginning of the tape, and so searching for server files doesn't take very
long. 

 

Without spooling, the tape drive runs continuously with only
occasional/brief stops or changes in speed. With spooling, the drive sits
idle until the cache fills, then runs until the cache is empty, then waits
again. It's a 2.5 hour backup without, and it's a 4+ hour backup with
spooling, if I recall. At least that's how it works for me. 
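The 2.5 h vs 4+ h difference is consistent with a simple model: when one job's spool fills completely and then drains, that job's spooling and despooling serialize. The sizes and rates below are assumptions for illustration, not compdoc's measurements.

```python
# Toy model: a single job whose spool fills before despooling starts,
# so the two phases run back to back instead of overlapping.
size_mb = 900_000              # ~900 GB to write (assumption)
net_mb_s = 110                 # effective 1GbE spool rate (assumption)
tape_mb_s = 120                # LTO-4 native despool rate

streaming_h = size_mb / net_mb_s / 3600
serialized_h = (size_mb / net_mb_s + size_mb / tape_mb_s) / 3600

print(f"streaming straight to tape: {streaming_h:.1f} h")
print(f"spool, then despool:        {serialized_h:.1f} h")
```

Note that per the manual excerpt earlier in the thread, Bacula does overlap one job's despooling with other jobs' spooling, so the penalty mainly hits a single large job like the Windows backup here.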

 

A feature request might be a threaded cache? Not sure that's the correct way
to describe it... deleting files as they are recorded and fetching new files
to fill the cache in the background. 

 

Anyway, I think it works great as is, without spooling. Best to test for
yourself. 

 

Good luck.

 



Re: [Bacula-users] multiple concurrent tape jobs

2016-02-09 Thread Ana Emília M . Arruda
If you have very slow network transfers from your file daemons to your disk
spool area, it is better not to make the data spooling area too large,
because despooling would be delayed unnecessarily while a job waits to fill
the spool space dedicated to it.

Best regards,
Ana

On Tue, Feb 9, 2016 at 9:44 AM, Ana Emília M. Arruda  wrote:

> Hello Heitor and Dan,
>
> When a job is despooling (disk->tape), its file daemon waits. It will begin
> spooling again only if necessary, i.e., if the spool space the job may use
> is smaller than the amount of data to be backed up for that client.
> Meanwhile, the other file daemons keep spooling to disk.
>
> So IMHO, the larger the spool area you can give each job, the less
> interleaving you will have.
>
> It also depends on whether your jobs have very different total backup
> sizes. To minimize interleaving, I would set Maximum Spool Size to the
> largest total backup size any job can have; if I did not have enough disk
> for that, I would fall back to the average total backup size per job.
>
> In any case, you will speed up your backups, since the network delay for
> data traveling from client to storage is greater than the disk-to-tape
> transfer speed (assuming your spool area data does not travel over the
> network).
>
> Best regards,
> Ana
>
> On Tue, Feb 9, 2016 at 12:12 AM, Heitor Faria 
> wrote:
>
>> I will backup to disk first, on another SD.  Later, I will copy the jobs
>> to the tape library on this new SD
>> which is on another server.  The copy jobs will be spooled to local SSD
>> before being written to tape.
>>
>> Sorry about this mess. If you are using disk spooling, you don't have to
>> worry about data interleaving, unless your job spool limit is too low.
>>
>> Regards.
>> --
>> ===
>> Heitor Medrado de Faria  - LPIC-III | ITIL-F |  Bacula Systems Certified
>> Administrator II
>> Próximas aulas telepresencial ao-vivo - 15 de fevereiro:
>> http://www.bacula.com.br/agenda/
>> Ministro treinamento e implementação in-company Bacula:
>> http://www.bacula.com.br/in-company/
>> Ou assista minhas videoaulas on-line:
>> http://www.bacula.com.br/treinamento-bacula-ed/
>> 61 8268-4220
>> Site: www.bacula.com.br | Facebook: heitor.faria
>> 
>> 
>>
>>
>


Re: [Bacula-users] multiple concurrent tape jobs

2016-02-09 Thread Ana Emília M . Arruda
Hello Heitor and Dan,

When a job is despooling (disk->tape), its file daemon waits. It will begin
spooling again only if necessary, i.e., if the spool space the job may use
is smaller than the amount of data to be backed up for that client.
Meanwhile, the other file daemons keep spooling to disk.

So IMHO, the larger the spool area you can give each job, the less
interleaving you will have.

It also depends on whether your jobs have very different total backup
sizes. To minimize interleaving, I would set Maximum Spool Size to the
largest total backup size any job can have; if I did not have enough disk
for that, I would fall back to the average total backup size per job.

In any case, you will speed up your backups, since the network delay for
data traveling from client to storage is greater than the disk-to-tape
transfer speed (assuming your spool area data does not travel over the
network).
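Ana's sizing rule can be stated concretely. The per-job backup sizes below are invented examples, not data from this thread:

```python
# Pick Maximum Spool Size from the total backup size per job (GB).
job_sizes_gb = [12, 30, 45, 700, 80]   # hypothetical job sizes

ideal_gb = max(job_sizes_gb)           # no job ever pauses to despool
fallback_gb = sum(job_sizes_gb) / len(job_sizes_gb)  # if disk is tight

print(f"ideal Maximum Spool Size:    {ideal_gb} GB")
print(f"fallback (average job size): {fallback_gb:.0f} GB")
```

With the fallback value, the one outsized job despools in several passes and interleaves more; every other job still fits in a single pass.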

Best regards,
Ana

On Tue, Feb 9, 2016 at 12:12 AM, Heitor Faria  wrote:

> I will backup to disk first, on another SD.  Later, I will copy the jobs
> to the tape library on this new SD
> which is on another server.  The copy jobs will be spooled to local SSD
> before being written to tape.
>
> Sorry about this mess. If you are using disk spooling, you don't have to
> worry about data interleaving, unless your job spool limit is too low.
>
> Regards.
> --
> ===
> Heitor Medrado de Faria  - LPIC-III | ITIL-F |  Bacula Systems Certified
> Administrator II
> Próximas aulas telepresencial ao-vivo - 15 de fevereiro:
> http://www.bacula.com.br/agenda/
> Ministro treinamento e implementação in-company Bacula:
> http://www.bacula.com.br/in-company/
> Ou assista minhas videoaulas on-line:
> http://www.bacula.com.br/treinamento-bacula-ed/
> 61 8268-4220
> Site: www.bacula.com.br | Facebook: heitor.faria
> 
> 
>
>


Re: [Bacula-users] multiple concurrent tape jobs

2016-02-09 Thread Josh Fisher


On 2/8/2016 5:42 PM, Dan Langille wrote:

Hello,

I am working with an LTO-4 tape library.  It has two drives but I plan 
to write to only one for backups.


I will backup to disk first, on another SD.  Later, I will copy the 
jobs to the tape library on this new SD
which is on another server.  The copy jobs will be spooled to local 
SSD before being written to tape.


   re 
http://bacula-users.narkive.com/QRkTVfEz/typical-tape-write-performance


Now what I'm thinking of is streaming multiple concurrent jobs to a 
single drive


Are the two servers on a 10G or better network? Unless the disk 
subsystem on the other SD is slow, it will likely stream close to the 1G 
max of 125 MB/s, since it will be essentially sequential reads. I'm not 
convinced that concurrency will gain anything.




Sure, downside on restore is interleaving of blocks

I don't see any downsides to going down this path.  I have yet to run 
any copy jobs to the new library,

but it may be ready this week.

Comments?

--
Dan Langille - BSDCan / PGCon
d...@langille.org 










Re: [Bacula-users] multiple concurrent tape jobs

2016-02-09 Thread Ana Emília M . Arruda
Hello Dan and Josh,

Sorry, I totally misunderstood the situation here.

It seems to me that data spooling is mainly useful to keep the tape library
from waiting on data from various slow clients.

I think the LTO-4 drive write speed (120 MB/s full height) and the 1G
network will be the bottleneck. If you could use both drives for writing,
they would sit waiting for data on a 1G network (125 MB/s) with data coming
from only one source (the SD holding the original disk volume backups).
With fast disks (or disk arrays) and NIC bonding or a 10G network on both
hosts, there could be a gain from concurrent copy jobs.

The local SSD for data spooling, it seems to me, will bring you no gain.
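Putting nominal numbers to the bottleneck argument (my assumptions, not measurements; real-world throughput will be lower on both sides):

```python
# Nominal rates in MB/s.
gige_mb_s = 125        # theoretical 1GbE maximum
lto4_mb_s = 120        # LTO-4 full-height native write speed

# A single sequential read over 1GbE can nearly saturate one drive...
print(f"one drive needs {lto4_mb_s / gige_mb_s:.0%} of the link")

# ...but two drives writing at once would each get only half the link,
# leaving both well below native speed.
print(f"per-drive feed with two drives: {gige_mb_s / 2:.1f} MB/s")
```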

Best regards,
Ana

On Tue, Feb 9, 2016 at 2:30 PM, Josh Fisher  wrote:

>
> On 2/8/2016 5:42 PM, Dan Langille wrote:
>
> Hello,
>
> I am working with an LTO-4 tape library.  It has two drives but I plan to
> write to only one for backups.
>
> I will backup to disk first, on another SD.  Later, I will copy the jobs
> to the tape library on this new SD
> which is on another server.  The copy jobs will be spooled to local SSD
> before being written to tape.
>
>re
> http://bacula-users.narkive.com/QRkTVfEz/typical-tape-write-performance
>
> Now what I'm thinking of is streaming multiple concurrent jobs to a single
> drive
>
>
> Are the two servers on a 10G or better network? Unless the disk subsystem
> on the other SD is slow, it will likely stream close to the 1G max of 125
> MB/s, since it will be essentially sequential reads. I'm not convinced that
> concurrency will gain anything.
>
>
> Sure, downside on restore is interleaving of blocks
>
> I don't see any downsides to going down this path.  I have yet to run any
> copy jobs to the new library,
> but it may be ready this week.
>
> Comments?
>
> --
> Dan Langille - BSDCan / PGCon
> d...@langille.org
>
>
>
>
>
>


Re: [Bacula-users] multiple concurrent tape jobs

2016-02-08 Thread Heitor Faria
> Hello,
Hello, Dan. 

> I am working with an LTO-4 tape library. It has two drives but I plan to write
> to only one for backups.

> I will backup to disk first, on another SD. Later, I will copy the jobs to the
> tape library on this new SD
> which is on another server. The copy jobs will be spooled to local SSD before
> being written to tape.

> re http://bacula-users.narkive.com/QRkTVfEz/typical-tape-write-performance

> Now what I'm thinking of is streaming multiple concurrent jobs to a single
> drive.

Did you test whether it is possible? (I can test it for you in a dummy 
environment if you want.) I don't remember whether it is, or whether the 
quote below applies only to copy jobs... 

"If the Migration control job finds a number of JobIds to migrate (e.g. it is 
asked to migrate one or more Volumes), it will start one new migration backup 
job for each JobId found on the specified Volumes. Please note that Migration 
doesn't scale too well since Migrations are done on a Job by Job basis. This if 
you select a very large volume or a number of volumes for migration, you may 
have a large number of Jobs that start. Because each job must read the same 
Volume, they will run consecutively (not simultaneously)." 
(http://www.bacula.org/5.2.x-manuals/en/main/main/Migration_Copy.html) 

> Sure, downside on restore is interleaving of blocks

I also remember reading somewhere that you don't need to worry about that 
when doing copy jobs, but I think it refers to the lack of multiplexing 
quoted above. 

> I don't see any downsides to going down this path. I have yet to run any copy
> jobs to the new library,
> but it may be ready this week.

Even without multiplexing, I think you will achieve high throughput, since 
you are copying sequential data. 
In that case, if you really want to multiplex the copy, I think you would 
have to create two different source and destination pools. 

> Comments?

> --
> Dan Langille - BSDCan / PGCon
> d...@langille.org

Regards, 
-- 
=== 
Heitor Medrado de Faria - LPIC-III | ITIL-F | Bacula Systems Certified 
Administrator II 
Próximas aulas telepresencial ao-vivo - 15 de fevereiro: 
http://www.bacula.com.br/agenda/ 
Ministro treinamento e implementação in-company Bacula: 
http://www.bacula.com.br/in-company/ 
Ou assista minhas videoaulas on-line: 
http://www.bacula.com.br/treinamento-bacula-ed/ 
61 8268-4220 
Site: www.bacula.com.br | Facebook: heitor.faria 
 


Re: [Bacula-users] multiple concurrent tape jobs

2016-02-08 Thread Heitor Faria
>> I am working with an LTO-4 tape library. It has two drives but I plan to 
>> write
>> to only one for backups.

>> I will backup to disk first, on another SD. Later, I will copy the jobs to 
>> the
>> tape library on this new SD
>> which is on another server. The copy jobs will be spooled to local SSD before
>> being written to tape.

>> re http://bacula-users.narkive.com/QRkTVfEz/typical-tape-write-performance

>> Now what I'm thinking of is streaming multiple concurrent jobs to a single
>> drive.

> Did you test whether it is possible? (I can test it for you in a dummy
> environment if you want.) I don't remember whether it is, or whether the
> quote below applies only to copy jobs...

> "If the Migration control job finds a number of JobIds to migrate (e.g. it is
> asked to migrate one or more Volumes), it will start one new migration backup
> job for each JobId found on the specified Volumes. Please note that Migration
> doesn't scale too well since Migrations are done on a Job by Job basis. This 
> if
> you select a very large volume or a number of volumes for migration, you may
> have a large number of Jobs that start. Because each job must read the same
> Volume, they will run consecutively (not simultaneously)."
> (http://www.bacula.org/5.2.x-manuals/en/main/main/Migration_Copy.html)

Nevermind, Dan. Since version 7.0, I think it is possible to multiplex copy 
jobs, and you will be just fine: 

"Migration/Copy/VirtualFull Performance Enhancements 
The Bacula Storage daemon now permits multiple jobs to simultaneously read the 
same disk Volume, which gives substantial performance enhancements when running 
Migration, Copy, or VirtualFull jobs that read disk Volumes. Our testing shows 
that when running multiple simultaneous jobs, the jobs can finish up to ten 
times faster with this version of Bacula. This is built-in to the Storage 
daemon, so it happens automatically and transparently." 

Regards, 
-- 
=== 
Heitor Medrado de Faria - LPIC-III | ITIL-F | Bacula Systems Certified 
Administrator II 
Próximas aulas telepresencial ao-vivo - 15 de fevereiro: 
http://www.bacula.com.br/agenda/ 
Ministro treinamento e implementação in-company Bacula: 
http://www.bacula.com.br/in-company/ 
Ou assista minhas videoaulas on-line: 
http://www.bacula.com.br/treinamento-bacula-ed/ 
61 8268-4220 
Site: www.bacula.com.br | Facebook: heitor.faria 
 


Re: [Bacula-users] multiple concurrent tape jobs

2016-02-08 Thread Heitor Faria
>>> I will backup to disk first, on another SD. Later, I will copy the jobs to 
>>> the
>>> tape library on this new SD
>>> which is on another server. The copy jobs will be spooled to local SSD 
>>> before
>>> being written to tape.

Sorry about this mess. If you are using disk spooling, you don't have to 
worry about data interleaving, unless your job spool limit is too low. 

Regards. 
-- 
=== 
Heitor Medrado de Faria - LPIC-III | ITIL-F | Bacula Systems Certified 
Administrator II 
Próximas aulas telepresencial ao-vivo - 15 de fevereiro: 
http://www.bacula.com.br/agenda/ 
Ministro treinamento e implementação in-company Bacula: 
http://www.bacula.com.br/in-company/ 
Ou assista minhas videoaulas on-line: 
http://www.bacula.com.br/treinamento-bacula-ed/ 
61 8268-4220 
Site: www.bacula.com.br | Facebook: heitor.faria 
 