Re: [Bacula-users] multiple concurrent tape jobs

2016-02-09 Thread Ana Emília M . Arruda
Hello Dan and Josh,

Sorry, I totally misunderstood the situation here.

It seems to me that data spooling is useful for keeping the tape drives from
waiting on data from various slow clients.

I think the LTO-4 drive write speed (120 MB/s for a full-height drive) and the
1 Gb network will be the bottleneck. If you used both drives for writing, they
would sit waiting for data, since a 1 Gb network tops out at about 125 MB/s and
the data comes from only one source (the SD holding the original disk volume
backups). With fast disks (or disk arrays) and both hosts on NIC bonding or a
10 Gb network, there could be a gain in performance from concurrent copy jobs.
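
For reference, if concurrent copy jobs to one drive were worth trying, the
knobs are the Maximum Concurrent Jobs directives on both sides; a minimal
sketch, with resource names made up and values chosen only for illustration:

# bacula-dir.conf: the Storage resource that points at the tape SD
Storage {
  Name = TapeLibrary
  ...
  Maximum Concurrent Jobs = 4   # allow several copy jobs to run to this storage
}

# bacula-sd.conf: the drive that receives them (directive supported on recent versions)
Device {
  Name = LTO4-Drive-0
  ...
  Maximum Concurrent Jobs = 4   # those jobs are then interleaved on the one drive
}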

The local SSD for data spooling, it seems to me, will bring you no gain in
this setup.

Best regards,
Ana

On Tue, Feb 9, 2016 at 2:30 PM, Josh Fisher  wrote:

>
> On 2/8/2016 5:42 PM, Dan Langille wrote:
>
> Hello,
>
> I am working with an LTO-4 tape library.  It has two drives but I plan to
> write to only one for backups.
>
> I will backup to disk first, on another SD.  Later, I will copy the jobs
> to the tape library on this new SD
> which is on another server.  The copy jobs will be spooled to local SSD
> before being written to tape.
>
> re
> http://bacula-users.narkive.com/QRkTVfEz/typical-tape-write-performance
>
> Now what I'm thinking of is streaming multiple concurrent jobs to a single
> drive
>
>
> Are the two servers on a 10G or better network? Unless the disk subsystem
> on the other SD is slow, it will likely stream close to the 1G max of 125
> MB/s, since it will be essentially sequential reads. I'm not convinced that
> concurrency will gain anything.
>
> .
>
> Sure, downside on restore is interleaving of blocks
>
> I don't see any downsides to going down this path.  I have yet to run any
> copy jobs to the new library,
> but it may be ready this week.
>
> Comments?
>
> --
> Dan Langille - BSDCan / PGCon
> d...@langille.org


[Bacula-users] Storage Devices not displaying in bconsole status

2016-02-09 Thread Mingus Dew
Dear All,
 I am having an issue where, when I run the status command in bconsole and
select "Storage", I am only presented with 3 of my defined storage resources.
I am trying to figure out why this is, but I am drawing a blank. Backups do
seem to be running at present, but there should be many more devices to
query.

mt-atl-back1-dir Version: 7.0.5 (28 July 2014) x86_64-unknown-linux-gnu
redhat
Daemon started 09-Feb-16 10:18. Jobs: run=2, running=0 mode=0,0
 Heap: heap=548,864 smbytes=592,095 max_bytes=833,177 bufs=1,954
max_bufs=3,683

*status storage
The defined Storage resources are:
 1: TL4000
 2: Mentora_Full_Files
 3: ITAnalytics_Full_Files
Select Storage resource (1-3):


I am not receiving any errors when restarting or reloading Bacula, and all
components (FD, SD, DIR) appear to start and work. Here are my bacula-dir,
bacula-sd, and bacula-dir_storage config files:

bacula-dir.conf:
Director {
  Name = mt-atl-back1-dir
  DIRport = 9101
  QueryFile = "/opt/bacula/etc/query.sql"
  WorkingDirectory = "/mnt/backup/bacula/var/lib/bacula"
  PidDirectory = "/var/run"
  Maximum Concurrent Jobs = 60
  Password = "###"
  Messages = Daemon
  FD Connect Timeout = 5 minutes
  SD Connect Timeout = 5 minutes
  Statistics Retention = 90 days
}
Catalog {
  Name = Mentora
  dbname = "bacula"
  dbuser = "bacula"
  dbpassword = "##"
  DB Socket = /var/lib/mysql/mysql.sock
}
Messages {
  Name = Standard
  mailcommand = "/opt/bacula/sbin/bsmtp -h localhost -f \"\(Bacula\)
\<%r\>\" -s \"Bacula: %t %e of %c %l\" %r"
  operatorcommand = "/opt/bacula/sbin/bsmtp -h localhost -f \"\(Bacula\)
\<%r\>\" -s \"Bacula: Intervention needed for %j\" %r"
  mail = ?? = all, !skipped
  operator = ?? = mount
  console = all, !skipped, !saved
  append = "/mnt/backup/bacula/var/lib/bacula/log" = all, !skipped
}
Messages {
  Name = Daemon
  mailcommand = "/opt/bacula/sbin/bsmtp -h localhost -f \"\(Bacula\)
\<%r\>\" -s \"Bacula daemon message\" %r"
  mail = ?= all, !skipped
  console = all, !skipped, !saved
  append = "/mnt/backup/bacula/var/lib/bacula/log" = all, !skipped
}
Console {
  Name = mt-atl-back1-mon
  Password = "##"
  CommandACL = status, .status
}
## Included configuration files for Storage, Pools, FileSets, Schedules,
## Clients, and Jobs
@|"sh -c 'for f in /opt/bacula/etc/conf.d/*.conf; do echo @${f}; done'"
@|"sh -c 'for f in /opt/bacula/etc/conf.d/customer.d/*/*.conf; do echo
@${f}; done'"

bacula-sd.conf:
Storage {
  Name = mt-atl-back1.storage
  WorkingDirectory = /mnt/backup/bacula/var/lib/bacula
  Pid Directory = /var/run
  SDAddresses = {
    ip = { addr = 10.0.50.164; port = 9103; }
    ip = { addr = 10.0.91.164; port = 9103; }
    ip = { addr = 10.0.50.166; port = 9103; }
    ip = { addr = 10.0.70.166; port = 9103; }
  }
  Maximum Concurrent Jobs = 60
  Heartbeat Interval = 5
}
Director {
  Name = mt-atl-back1-dir
  Password = "##"
}
Autochanger {
  Name = TL4000_Library
  Device = TL4000_DTE0, TL4000_DTE1, TL4000_DTE2, TL4000_DTE3
  Changer Command = " /opt/bacula/etc/mtx-changer %c %o %S %a %d"
  Changer Device = /dev/changer
}
Device {
  Name = TL4000_DTE0
  Drive Index = 0
  Media Type = LTO-6
  Changer Device = /dev/changer
  Archive Device = /dev/nst2
  LabelMedia = Yes
  AutomaticMount = Yes
  AlwaysOpen = Yes
  RemovableMedia = Yes
  RandomAccess = No
  AutoChanger = Yes
  Alert Command = "sh -c 'smartctl -H -l error -d scsi %c'"
  Maximum Changer Wait = 900
  Maximum Rewind Wait = 900
  Maximum Open Wait = 900
}
Device {
  Name = TL4000_DTE1
  Drive Index = 1
  Media Type = LTO-6
  Changer Device = /dev/changer
  Archive Device = /dev/nst1
  LabelMedia = Yes
  AutomaticMount = Yes
  AlwaysOpen = Yes
  RemovableMedia = Yes
  RandomAccess = No
  AutoChanger = Yes
  Alert Command = "sh -c 'smartctl -H -l error -d scsi %c'"
  Maximum Changer Wait = 900
  Maximum Rewind Wait = 900
  Maximum Open Wait = 900
}
Device {
  Name = TL4000_DTE2
  Drive Index = 2
  Media Type = LTO-6
  Changer Device = /dev/changer
  Archive Device = /dev/nst0
  LabelMedia = Yes
  AutomaticMount = Yes
  AlwaysOpen = Yes
  RemovableMedia = Yes
  RandomAccess = No
  AutoChanger = Yes
  Alert Command = "sh -c 'smartctl -H -l error -d scsi %c'"
  Maximum Changer Wait = 900
  Maximum Rewind Wait = 900
  Maximum Open Wait = 900
}
Device {
  Name = TL4000_DTE3
  Drive Index = 3
  Media Type = LTO-6
  Changer Device = /dev/changer
  Archive Device = /dev/nst3
  LabelMedia = Yes
  AutomaticMount = Yes
  AlwaysOpen = Yes
  RemovableMedia = Yes
  RandomAccess = No
  AutoChanger = Yes
  Alert Command = "sh -c 'smartctl -H -l error -d scsi %c'"
  Maximum Changer Wait = 900
  Maximum Rewind Wait = 900
  Maximum Open Wait = 900
}
Device {
  Name = Mentora_Full_Device-1
  Device Type = File
  Media Type = Mentora_File
  Archive Device = /mnt/backup/bacula/mentora/full
  LabelMedia = Yes
  Random Access = Yes
  AutomaticMount = yes
  RemovableM

Re: [Bacula-users] Including configuration files breaks backup

2016-02-09 Thread Heitor Faria



The default dbcheck in /usr/sbin/dbcheck does not work properly when including 
additional config files. 
# /sbin/dbcheck -B -c /opt/bacula/conf/bacula-dir.conf 

dbcheck: ERROR TERMINATION at parse_conf.c:1000 
Config error: End of conf file reached with unclosed resource. 
: line 111, col 1 of file /opt/bacula/conf/bacula-dir.conf 


Line 110 of /opt/bacula/conf/bacula-dir.conf: 
@/opt/bacula/conf.d/SCHEDULE.conf 

Whenever you use "@FILENAME" at the end of the director configuration file, the 
dbcheck is an error. Cannot use the default make_catalog_backup.pl 
How else can I backup the MySQL database? 



Hello, Raymond: dbcheck and the make_catalog scripts are totally different
pieces of software.
make_catalog_backup searches for the catalog name in order to fetch the
database connection information and dump the database. It is unusual to keep
this information in an included file, since by far the most common setup has
only one Catalog resource per Director configuration.
As for dbcheck, you can pass the working directory, database name, user and
password as command arguments, e.g.:
dbcheck -b /var/spool/bacula/ bacula root 123456
I tried to reproduce your error on Bacula 7.4 with a dummy include, but the
command works anyway. Perhaps you have a syntax error in your Bacula
configuration.
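
If the goal is simply to get a dump of the catalog without
make_catalog_backup.pl, a plain mysqldump also works; a sketch, with the
database name, user, password and output path assumed:

# dump the Bacula catalog directly (credentials and path are placeholders)
mysqldump -u bacula -p'dbpassword' bacula > /opt/bacula/working/bacula-catalog.sql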

Regards, 
-- 
=== 
Heitor Medrado de Faria - LPIC-III | ITIL-F | Bacula Systems Certified 
Administrator II 
Próximas aulas telepresencial ao-vivo - 15 de fevereiro: 
http://www.bacula.com.br/agenda/ 
Ministro treinamento e implementação in-company Bacula: 
http://www.bacula.com.br/in-company/ 
Ou assista minhas videoaulas on-line: 
http://www.bacula.com.br/treinamento-bacula-ed/ 
61 8268-4220 
Site: www.bacula.com.br | Facebook: heitor.faria 
 



[Bacula-users] Including configuration files breaks backup

2016-02-09 Thread Raymond Burns Jr.
The default dbcheck in /usr/sbin/dbcheck does not work properly when
including additional config files.

# /sbin/dbcheck -B -c /opt/bacula/conf/bacula-dir.conf

dbcheck: ERROR TERMINATION at parse_conf.c:1000
Config error: End of conf file reached with unclosed resource.
: line 111, col 1 of file /opt/bacula/conf/bacula-dir.conf


Line 110 of /opt/bacula/conf/bacula-dir.conf:
@/opt/bacula/conf.d/SCHEDULE.conf

Whenever you use "@FILENAME" at the end of the director configuration file,
the dbcheck is an error. Cannot use the default *make_catalog_backup.pl
*
How else can I backup the MySQL database?


Re: [Bacula-users] multiple concurrent tape jobs

2016-02-09 Thread Josh Fisher


On 2/8/2016 5:42 PM, Dan Langille wrote:

Hello,

I am working with an LTO-4 tape library.  It has two drives but I plan 
to write to only one for backups.


I will backup to disk first, on another SD.  Later, I will copy the 
jobs to the tape library on this new SD
which is on another server.  The copy jobs will be spooled to local 
SSD before being written to tape.


   re 
http://bacula-users.narkive.com/QRkTVfEz/typical-tape-write-performance


Now what I'm thinking of is streaming multiple concurrent jobs to a 
single drive


Are the two servers on a 10G or better network? Unless the disk 
subsystem on the other SD is slow, it will likely stream close to the 1G 
max of 125 MB/s, since it will be essentially sequential reads. I'm not 
convinced that concurrency will gain anything.
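
(A quick way to see what the link between the two SDs actually delivers is an
iperf3 run; this assumes iperf3 is installed on both hosts, and the hostname is
a placeholder:)

# on the receiving SD
iperf3 -s
# on the SD holding the disk volumes (replace tape-sd with the real hostname)
iperf3 -c tape-sd -t 30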



.

Sure, downside on restore is interleaving of blocks

I don't see any downsides to going down this path.  I have yet to run 
any copy jobs to the new library,

but it may be ready this week.

Comments?

--
Dan Langille - BSDCan / PGCon
d...@langille.org 








Re: [Bacula-users] Unable to connect to Director daemon on localhost:9101 Client Bacula

2016-02-09 Thread Ana Emília M . Arruda
Hello Hector,

You can run bacula-dir in debug mode (option -d) and see the configuration
errors it reports. For example, to start all daemons with a debug level of
100:

/pathtoyourbinariesifnecessary/bacula -d100 start
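
The Director can also just parse the configuration and report the offending
file and line; a sketch, assuming the default Ubuntu paths from that tutorial:

# test the configuration only, without starting the daemon
bacula-dir -t -c /etc/bacula/bacula-dir.conf
# or run the Director in the foreground with debug output
bacula-dir -f -d100 -c /etc/bacula/bacula-dir.conf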

Best regards,
Ana

On Fri, Feb 5, 2016 at 4:11 AM, Hector Javier Agudelo Corredor <
hej...@gmail.com> wrote:

> Hi,
>
> I am following this web page
> https://www.digitalocean.com/community/tutorials/how-to-back-up-an-ubuntu-14-04-server-with-bacula
> because I want to do remote backups.
>
> I configured the file that connects to the client as follows:
>
>
> File path on the Bacula server: /etc/bacula/conf.d/bacula5.conf
>
> Client {
>   Name = bacula5.prueba.net-fd
>   Address = bacula5.prueba.net
>   FDPort = 9102
>   Catalog = MyCatalog
>   Password = "YWUxYzJmY2MxOWI0N2IxNjczYTYzZjQ4Y" # password for
> Remote FileDaemon
>   File Retention = 30 days# 30 days
>   Job Retention = 6 months# six months
>   AutoPrune = yes # Prune expired Jobs/Files
> }
>
> Job {
>   Name = "Backupbacula5remoto"
>   JobDefs = "DefaultJob"
>   Client = bacula5.prueba.net-fd
>   Pool = RemoteFile
>   FileSet="Home and Etc"
> }
>
>
>
>
> FileSet {
>   Name = "Backupbacula5remoto"
>   Include {
> Options {
>   signature = MD5
>   compression = GZIP
> }
> File = /home
> File = /etc
>   }
>   Exclude {
> File = /home/bacula/
>   }
> }
>
> Pool {
>   Name = RemoteFile
>   Pool Type = Backup
>   Label Format = Remote-
>   Recycle = yes   # Bacula can automatically recycle
> Volumes
>   AutoPrune = yes # Prune expired volumes
>   Volume Retention = 365 days # one year
>   Maximum Volume Bytes = 50G  # Limit Volume size to something reasonable
>   Maximum Volumes = 100   # Limit number of Volumes in Pool
> }
> -
> At the end of the bacula-dir file on the Bacula server, I placed the
> following line to load bacula5.conf:
>
> @|"find /etc/bacula/conf.d -name '*.conf' -type f -exec echo @{} \;"
>
> If I comment out this line and restart Bacula, bconsole connects, so there
> must be an error in the file bacula5.conf.
>
> --
>
> This is the configuration of the Bacula client FD:
>
> Director {
>   Name = bacula7.prueba.net-dir
>   Password = "YWUxYzJmY2MxOWI0N2IxNjczYTYzZjQ4Y"
> }
>
>
> Director {
>   Name = bacula-mon
>   Password = "YWUxYzJmY2MxOWI0N2IxNjczYTY"
>   Monitor = yes
> }
>
> FileDaemon {  # this is me
>   Name = bacula5.prueba.net-fd
>   FDAddress = bacula5.prueba.net
>   FDport = 9102  # where we listen for the director
>   WorkingDirectory = /var/spool/bacula
>   Pid Directory = /var/run
>   Maximum Concurrent Jobs = 20
> }
>
> # Send all messages except skipped files back to Director
> Messages {
>   Name = Standard
>   director = bacula7.prueba.net-dir = all, !skipped, !restored
> }
> How can I solve this?
>
>
>
>
>
>
>
>
>
> --
> HECTOR
>
>


Re: [Bacula-users] multiple concurrent tape jobs

2016-02-09 Thread Ana Emília M . Arruda
If you have very slow network transfers from your file daemons to your disk
spool area, it would be better not to make the data spooling area too large,
because despooling would be unnecessarily delayed while a job waits to fill
the spool space dedicated to it.

Best regards,
Ana

On Tue, Feb 9, 2016 at 9:44 AM, Ana Emília M. Arruda  wrote:

> Hello Heitor and Dan,
>
> When a Job is despooling (disk->tape), its file daemon will wait. It will
> begin spooling again only if necessary, i.e., if the amount of spool space
> the job may use is less than the amount of data to be backed up for that
> client. Meanwhile, the other file daemons will keep spooling to disk.
>
> So IMHO the larger the spool area you can give a job, the less interleaving
> you will have.
>
> It also depends on whether your jobs have very different amounts of backup
> data (total backup size per job). To minimize data interleaving, I would set
> the Maximum Spool Size to the highest total backup size a job could have,
> or, if there were not enough disk space for that, to the average total
> backup size per job.
>
> Either way, you will speed up your backups, since the network delay for data
> traveling from the client to the storage is greater than the transfer speed
> from disk to tape (assuming your spool area data does not travel over the
> network).
>
> Best regards,
> Ana
>
> On Tue, Feb 9, 2016 at 12:12 AM, Heitor Faria 
> wrote:
>
>> I will backup to disk first, on another SD.  Later, I will copy the jobs
>> to the tape library on this new SD
>> which is on another server.  The copy jobs will be spooled to local SSD
>> before being written to tape.
>>
>> Sorry about this mess. If you are using disk spooling you don't have to
>> worry about data interleaving, unless your job spool limit is too low.
>>
>> Regards.
>> --
>> ===
>> Heitor Medrado de Faria  - LPIC-III | ITIL-F |  Bacula Systems Certified
>> Administrator II
>> Próximas aulas telepresencial ao-vivo - 15 de fevereiro:
>> http://www.bacula.com.br/agenda/
>> Ministro treinamento e implementação in-company Bacula:
>> http://www.bacula.com.br/in-company/
>> Ou assista minhas videoaulas on-line:
>> http://www.bacula.com.br/treinamento-bacula-ed/
>> 61 8268-4220
>> Site: www.bacula.com.br | Facebook: heitor.faria
>> 
>> 
>>
>>


Re: [Bacula-users] multiple concurrent tape jobs

2016-02-09 Thread Ana Emília M . Arruda
Hello Heitor and Dan,

When a Job is despooling (disk->tape), its file daemon will wait. It will
begin spooling again only if necessary, i.e., if the amount of spool space the
job may use is less than the amount of data to be backed up for that client.
Meanwhile, the other file daemons will keep spooling to disk.

So IMHO the larger the spool area you can give a job, the less interleaving
you will have.

It also depends on whether your jobs have very different amounts of backup
data (total backup size per job). To minimize data interleaving, I would set
the Maximum Spool Size to the highest total backup size a job could have, or,
if there were not enough disk space for that, to the average total backup size
per job.

Either way, you will speed up your backups, since the network delay for data
traveling from the client to the storage is greater than the transfer speed
from disk to tape (assuming your spool area data does not travel over the
network).
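
For reference, a minimal sketch of where these knobs live; the paths and sizes
are made up for illustration:

# bacula-sd.conf: Device resource for the tape drive
Device {
  Name = Drive-0
  ...
  Spool Directory = /mnt/spool/bacula   # assumed local spool path
  Maximum Spool Size = 500 GB           # total spool space for the device
  Maximum Job Spool Size = 100 GB       # the per-job limit discussed above
}

# bacula-dir.conf: enable spooling for the jobs that go to tape
Job {
  Name = BackupToTape
  ...
  Spool Data = yes
}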

Best regards,
Ana

On Tue, Feb 9, 2016 at 12:12 AM, Heitor Faria  wrote:

> I will backup to disk first, on another SD.  Later, I will copy the jobs
> to the tape library on this new SD
> which is on another server.  The copy jobs will be spooled to local SSD
> before being written to tape.
>
> Sorry about this mess. If you are using disk spooling you don't have to
> worry about data interleaving, unless your job spool limit is too low.
>
> Regards.
> --
> ===
> Heitor Medrado de Faria  - LPIC-III | ITIL-F |  Bacula Systems Certified
> Administrator II
> Próximas aulas telepresencial ao-vivo - 15 de fevereiro:
> http://www.bacula.com.br/agenda/
> Ministro treinamento e implementação in-company Bacula:
> http://www.bacula.com.br/in-company/
> Ou assista minhas videoaulas on-line:
> http://www.bacula.com.br/treinamento-bacula-ed/
> 61 8268-4220
> Site: www.bacula.com.br | Facebook: heitor.faria
> 
> 
>
>