Re: [Bacula-users] Catalog backup while job running?

2012-04-02 Thread Phil Stracchino
On 04/02/2012 06:06 PM, Stephen Thompson wrote:
> 
> 
> First off, thanks for the response, Phil.
> 
> 
> On 04/02/2012 01:11 PM, Phil Stracchino wrote:
>> On 04/02/2012 01:49 PM, Stephen Thompson wrote:
>>> Well, we've made the leap from MyISAM to InnoDB; it seems we win on
>>> transactions but lose on read speed.
>>
>> If you're finding InnoDB slower than MyISAM on reads, your InnoDB buffer
>> pool is probably too small.
> 
> This is probably true, but I have limited system resources and my File
> table is almost 300GB in size.

Ah, well, sometimes there's only so much you can allocate.

>> --skip-lock-tables is referred to in the mysqldump documentation, but
>> isn't actually a valid option.  This is actually an increasingly
>> horrible problem with mysqldump.  It has been very poorly maintained,
>> and has barely developed at all in ten or fifteen years.
>>
> 
> This has me confused.  I have jobs that can run, and insert records into 
> the File table, while I am dumping the Catalog.  It's only at the 
> tail-end that a few jobs get the error above.  Wouldn't a locked File 
> table cause all concurrent jobs to fail?

Hmm.  I stand corrected.  I've never seen it listed as an option in the
man page, despite there being one reference to it, but I see that
mysqldump --help does explain it even though the man page doesn't.

In that case, the only thing I can think of is that you have multiple
jobs trying to insert attributes at the same time and the last ones in
line are timing out.

(Locking the table for batch attribute insertion actually isn't
necessary; MySQL can be configured to interleave auto_increment inserts.
 However, that's the way Bacula does it.)
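For reference, this is the interleaved-insert setting in my.cnf (my addition, not from the thread; it assumes MySQL 5.1 or later, and value 2 is only safe with row-based binary logging):

   [mysqld]
   # 0 = traditional, 1 = consecutive (default), 2 = interleaved:
   # no table-level AUTO-INC lock is held for bulk inserts.
   innodb_autoinc_lock_mode = 2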

Don't know that I have any helpful suggestions there, then...  sorry.



-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
  Renaissance Man, Unix ronin, Perl hacker, SQL wrangler, Free Stater
 It's not the years, it's the mileage.



Re: [Bacula-users] automatic mount / unmount after Backup

2012-04-02 Thread Mike Ruskai
I don't think those commands ever worked.  I tried using them as well a 
while back, but had to use Run Before Job and Run After Job to get 
automatic mounting/unmounting to work.  I no longer do that, but that's 
what you'll want to do.
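A minimal sketch of that approach, using Oliver's job below (and assuming the Director runs on the same host as the storage daemon, since Run Before Job / Run After Job execute on the Director):

Job {
   Name = client-nudel-files
   Type = Backup
   JobDefs = defaults
   Client = nudel-fd
   FileSet = nudel-files
   Write Bootstrap = "/var/db/bacula/client-nudel-files.bsr"
   Schedule = "WeeklyCycle"
   # Mount the eSATA disk before the job and unmount it afterwards
   Run Before Job = "/sbin/mount /mnt/backup"
   Run After Job  = "/sbin/umount /mnt/backup"
}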

On 4/2/2012 4:00 PM, Oliver Lehmann wrote:
> Hi,
>
> I'm backing up with bacula to an eSATA hard disk, which I'd like to
> automatically mount before bacula accesses it and unmount
> after the access is done.
>
> I'm running FreeBSD 9/amd64.
>
> I have the following bacula-sd.conf part:
>
> Device {
> Name = FileStorage
> Media Type = File
> Archive Device = /mnt/backup
> #  Archive Device = /dev/ufs/backup
> LabelMedia = yes;   # lets Bacula label unlabeled media
> Random Access = Yes;
> AutomaticMount = yes;   # when device opened, read it
> RemovableMedia = yes;
> AlwaysOpen = no;
>
> Requires Mount  = yes
> Mount Point = /mnt/backup
> Mount Command   = "/sbin/mount /mnt/backup"
> Unmount Command = "/sbin/umount /dev/ufs/backup /mnt/backup"
> }
>
> While an example Job entry is:
>
> Job {
> Name = client-nudel-files
> Type = Backup
> JobDefs = defaults
> Client = nudel-fd
> FileSet = nudel-files
> Write Bootstrap = "/var/db/bacula/client-nudel-files.bsr"
> Schedule = "WeeklyCycle"
> }
>
>
> I've been using this configuration since around 2009 and the hard disk
> never mounts or unmounts automatically. It is mounted all the
> time, but I now want to finally get this working "the right way".
>
> I have no idea what I should configure differently - as far as I can
> tell from the handbook, everything is configured correctly.
>
> But - when I issue a restore command while having the FS unmounted
> (to check if mounting it works) I'm getting:
>
>
> 02-Apr 21:41 backup-sd JobId 409: Warning: acquire.c:239 Read open
>device "FileStorage" (/mnt/backup) Volume "Full-0002"
>failed: ERR=dev.c:568
>Could not open: /mnt/backup/Full-0002,
>ERR=No such file or directory
>
> 02-Apr 21:41 backup-sd JobId 409: Please mount Volume "Full-0002" for:
>   Job:  client-fiori-files-r.2012-04-02_21.41.29_46
>   Storage:  "FileStorage" (/mnt/backup)
>   Pool: Bacula-Pool
>   Media type:   File
>
>
> So.. what am I supposed to do to get this fixed?
>
>   Greetings
>
>




Re: [Bacula-users] Catalog backup while job running?

2012-04-02 Thread Stephen Thompson


First off, thanks for the response, Phil.


On 04/02/2012 01:11 PM, Phil Stracchino wrote:
> On 04/02/2012 01:49 PM, Stephen Thompson wrote:
>> Well, we've made the leap from MyISAM to InnoDB; it seems we win on
>> transactions but lose on read speed.
>
> If you're finding InnoDB slower than MyISAM on reads, your InnoDB buffer
> pool is probably too small.

This is probably true, but I have limited system resources and my File 
table is almost 300GB in size.

>
>> That aside, I'm seeing something unexpected.  I am now able to
>> successfully run jobs while I use mysqldump to dump the bacula Catalog,
>> except at the very end of the dump there is some sort of contention.  A
>> few of my jobs (3-4 out of 150) that are attempting to despool
>> attributes at the tail end of the dump yield this error:
>>
>> Fatal error: sql_create.c:860 Fill File table Query failed: INSERT INTO
>> File (FileIndex, JobId, PathId, FilenameId, LStat, MD5, DeltaSeq) SELECT
>> batch.FileIndex, batch.JobId, Path.PathId,
>> Filename.FilenameId,batch.LStat, batch.MD5, batch.DeltaSeq FROM batch
>> JOIN Path ON (batch.Path = Path.Path) JOIN Filename ON (batch.Name =
>> Filename.Name): ERR=Lock wait timeout exceeded; try restarting transaction
>>
>> I have successful jobs before and after this 'end of the dump' timeframe.
>>
>> It looks like I might be able to "fix" this by increasing my
>> innodb_lock_wait_timeout, but I'd like to understand WHY I need to
>> increase it.  Anyone know what's happening at the end of a dump like
>> this that would cause the above error?
>>
>> mysqldump -f --opt --skip-lock-tables --single-transaction bacula
>>   >>bacula.sql
>>
>> Is it the commit on this 'dump' transaction?
>
> --skip-lock-tables is referred to in the mysqldump documentation, but
> isn't actually a valid option.  This is actually an increasingly
> horrible problem with mysqldump.  It has been very poorly maintained,
> and has barely developed at all in ten or fifteen years.
>

This has me confused.  I have jobs that can run, and insert records into 
the File table, while I am dumping the Catalog.  It's only at the 
tail-end that a few jobs get the error above.  Wouldn't a locked File 
table cause all concurrent jobs to fail?


> Table locks are the default behavior of mysqldump, as part of the
> default --opt group.  To override it, you actually have to use
> --skip-opt, then add back in the rest of the options from the --opt
> group that you actually wanted.  There is *no way* to get mysqldump to
> Do The Right Thing for both transactional and non-transactional tables
> in the same run.  It is simply not possible.
>
> My suggestion would be to look at mydumper instead.  It has been written
> by a couple of former MySQL AB support engineers who started with a
> clean sheet of paper, and it is what mysqldump should have become ten
> years ago.  It dumps tables in parallel for better speed, doesn't
> require exclusion of schemas that shouldn't be dumped because it knows
> they shouldn't be dumped, doesn't require long strings of arguments to
> tell it how to correctly handle transactional and non-transactional
> tables because it understands both and just Does The Right Thing on a
> table-by-table basis, can dump binlogs as well as tables, separates the
> data from the schemas...
>
> Give it a try.
>

Thanks, I'll take a look at it.


> That said, I make my MySQL dump job a lower priority job and run it only
> after all other jobs have completed.  This makes sure I get the most
> current possible data in my catalog dump.  I just recently switched to a
> revised MySQL backup job that uses mydumper with the following simple
> shell script as a ClientRunBeforeJob on a separate host from the actual
> DB server.  (Thus, if the backup client goes down, I still have the live
> DB, and if the DB server goes down, I still have the DB backups on disk.)
>
>
> #!/bin/bash
>
> RETAIN=5
> USER=xx
> PASS=xx
> DUMPDIR=/dbdumps
> HOST=babylon4
> PORT=6446
> TIMEOUT=300
> FMT='%Y%m%d-%T'
> DEST=${DUMPDIR}/${HOST}-$(date +${FMT})
>
> for dir in $(ls -r ${DUMPDIR} | tail -n +$((RETAIN+1)))  # keep the ${RETAIN} newest
> do
> echo Deleting ${DUMPDIR}/${dir}
> rm -rf ${DUMPDIR}/${dir}
> done
>
> mydumper -Cce -h ${HOST} -p ${PORT} -u ${USER} --password=${PASS} -o
> ${DEST} -l ${TIMEOUT}
>
>
> Then my Bacula fileset for the DB-backup job just backs up the entire
> /dbdumps directory.
>
>


-- 
Stephen Thompson   Berkeley Seismological Laboratory
step...@seismo.berkeley.edu215 McCone Hall # 4760
404.538.7077 (phone)   University of California, Berkeley
510.643.5811 (fax) Berkeley, CA 94720-4760


[Bacula-users] automatic mount / unmount after Backup

2012-04-02 Thread Oliver Lehmann
Hi,

I'm backing up with bacula to an eSATA hard disk, which I'd like to
automatically mount before bacula accesses it and unmount
after the access is done.

I'm running FreeBSD 9/amd64.

I have the following bacula-sd.conf part:

Device {
   Name = FileStorage
   Media Type = File
   Archive Device = /mnt/backup
#  Archive Device = /dev/ufs/backup
   LabelMedia = yes;   # lets Bacula label unlabeled media
   Random Access = Yes;
   AutomaticMount = yes;   # when device opened, read it
   RemovableMedia = yes;
   AlwaysOpen = no;

   Requires Mount  = yes
   Mount Point = /mnt/backup
   Mount Command   = "/sbin/mount /mnt/backup"
   Unmount Command = "/sbin/umount /dev/ufs/backup /mnt/backup"
}

While an example Job entry is:

Job {
   Name = client-nudel-files
   Type = Backup
   JobDefs = defaults
   Client = nudel-fd
   FileSet = nudel-files
   Write Bootstrap = "/var/db/bacula/client-nudel-files.bsr"
   Schedule = "WeeklyCycle"
}


I've been using this configuration since around 2009 and the hard disk
never mounts or unmounts automatically. It is mounted all the
time, but I now want to finally get this working "the right way".

I have no idea what I should configure differently - as far as I can
tell from the handbook, everything is configured correctly.

But - when I issue a restore command while having the FS unmounted
(to check if mounting it works) I'm getting:


02-Apr 21:41 backup-sd JobId 409: Warning: acquire.c:239 Read open
  device "FileStorage" (/mnt/backup) Volume "Full-0002"
  failed: ERR=dev.c:568
  Could not open: /mnt/backup/Full-0002,
  ERR=No such file or directory

02-Apr 21:41 backup-sd JobId 409: Please mount Volume "Full-0002" for:
 Job:  client-fiori-files-r.2012-04-02_21.41.29_46
 Storage:  "FileStorage" (/mnt/backup)
 Pool: Bacula-Pool
 Media type:   File


So.. what am I supposed to do to get this fixed?

 Greetings




Re: [Bacula-users] errors when trying to backup to 2nd HDD

2012-04-02 Thread Josh Fisher

On 4/2/2012 3:08 PM, Murray Davis wrote:
> Thank you, Josh, for your response. I ended up doing two things...
>
> 1) I changed the permissions on /mnt/sdb1 using chmod 777 so 
> everyone has read/write/execute privilege. This seemed like overkill 
> since I thought that my problem was related to my labels and pools not 
> being defined properly. So, I did step 2.
>
> 2) I used the conf files for bacula-dir.conf and bacula-sd.conf as 
> described in Automated Disk Backup from the bacula manual to define my 
> pools.
>
> I ran a test full backup for backupclient1, my default job, and 
> it ran successfully. What I found interesting is the permissions on 
> the backup file. Take a look...
>
> root@cablemon sdb1/backups# ls -la
> total 16736
> drwxrwxrwx 2 root   bacula 4096 Apr  2 10:39 .
> drwxrwxrwx 5 root   bacula 4096 Apr  2 10:38 ..
> -rw-r----- 1 bacula tape   17127387 Apr  2 10:39 Full-0001
> root@cablemon sdb1/backups#
>
> I didn't even know that I had a group called tape. Here is that 
> group...note it has one member...bacula!
>
> root@cablemon sdb1/backups# cat /etc/group | grep tape
> tape:x:26:bacula
>
> Thinking that permissions 777 were too open, I changed them back to 
> 775 and re-ran my backup job. It failed again. I checked the groups 
> again and saw that there was also a "disk" group with no members. I 
> added bacula to that group and re-ran my job. It failed. I then set my 
> permissions back to 777 and ran my job. It succeeded.

It looks like the bacula-sd daemon is configured to run as bacula:tape, 
therefore it creates files owned by bacula:tape. Simply change the owner 
and permissions of sdb1/backups to allow the user bacula-sd runs as 
(bacula:tape) to write to it.

# chown -R bacula:tape /mnt/sdb1/backups
# chmod 750 /mnt/sdb1/backups




Re: [Bacula-users] Catalog backup while job running?

2012-04-02 Thread Phil Stracchino
On 04/02/2012 01:49 PM, Stephen Thompson wrote:
> Well, we've made the leap from MyISAM to InnoDB; it seems we win on
> transactions but lose on read speed.

If you're finding InnoDB slower than MyISAM on reads, your InnoDB buffer
pool is probably too small.
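As a rough illustration, the pool is sized in my.cnf (the value below is a placeholder, not a recommendation; on a dedicated DB host it is often the majority of RAM):

   [mysqld]
   # Large enough to hold the hot part of the working set
   innodb_buffer_pool_size = 8G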

> That aside, I'm seeing something unexpected.  I am now able to 
> successfully run jobs while I use mysqldump to dump the bacula Catalog, 
> except at the very end of the dump there is some sort of contention.  A 
> few of my jobs (3-4 out of 150) that are attempting to despool 
> attributes at the tail end of the dump yield this error:
> 
> Fatal error: sql_create.c:860 Fill File table Query failed: INSERT INTO 
> File (FileIndex, JobId, PathId, FilenameId, LStat, MD5, DeltaSeq) SELECT 
> batch.FileIndex, batch.JobId, Path.PathId, 
> Filename.FilenameId,batch.LStat, batch.MD5, batch.DeltaSeq FROM batch 
> JOIN Path ON (batch.Path = Path.Path) JOIN Filename ON (batch.Name = 
> Filename.Name): ERR=Lock wait timeout exceeded; try restarting transaction
> 
> I have successful jobs before and after this 'end of the dump' timeframe.
> 
> It looks like I might be able to "fix" this by increasing my 
> innodb_lock_wait_timeout, but I'd like to understand WHY I need to 
> increase it.  Anyone know what's happening at the end of a dump like 
> this that would cause the above error?
> 
> mysqldump -f --opt --skip-lock-tables --single-transaction bacula 
>  >>bacula.sql
> 
> Is it the commit on this 'dump' transaction?

--skip-lock-tables is referred to in the mysqldump documentation, but
isn't actually a valid option.  This is actually an increasingly
horrible problem with mysqldump.  It has been very poorly maintained,
and has barely developed at all in ten or fifteen years.

Table locks are the default behavior of mysqldump, as part of the
default --opt group.  To override it, you actually have to use
--skip-opt, then add back in the rest of the options from the --opt
group that you actually wanted.  There is *no way* to get mysqldump to
Do The Right Thing for both transactional and non-transactional tables
in the same run.  It is simply not possible.
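To illustrate the --skip-opt route for an all-InnoDB catalog, a sketch (the exact options to add back are up to you; check mysqldump --help on your version):

   # Consistent InnoDB dump without table locks: drop the --opt group,
   # then re-add the pieces of it you still want.
   mysqldump --skip-opt --single-transaction \
      --add-drop-table --create-options --quick \
      --extended-insert --set-charset \
      bacula > bacula.sql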

My suggestion would be to look at mydumper instead.  It has been written
by a couple of former MySQL AB support engineers who started with a
clean sheet of paper, and it is what mysqldump should have become ten
years ago.  It dumps tables in parallel for better speed, doesn't
require exclusion of schemas that shouldn't be dumped because it knows
they shouldn't be dumped, doesn't require long strings of arguments to
tell it how to correctly handle transactional and non-transactional
tables because it understands both and just Does The Right Thing on a
table-by-table basis, can dump binlogs as well as tables, separates the
data from the schemas...

Give it a try.

That said, I make my MySQL dump job a lower priority job and run it only
after all other jobs have completed.  This makes sure I get the most
current possible data in my catalog dump.  I just recently switched to a
revised MySQL backup job that uses mydumper with the following simple
shell script as a ClientRunBeforeJob on a separate host from the actual
DB server.  (Thus, if the backup client goes down, I still have the live
DB, and if the DB server goes down, I still have the DB backups on disk.)


#!/bin/bash

RETAIN=5
USER=xx
PASS=xx
DUMPDIR=/dbdumps
HOST=babylon4
PORT=6446
TIMEOUT=300
FMT='%Y%m%d-%T'
DEST=${DUMPDIR}/${HOST}-$(date +${FMT})

for dir in $(ls -r ${DUMPDIR} | tail -n +$((RETAIN+1)))  # keep the ${RETAIN} newest
do
   echo Deleting ${DUMPDIR}/${dir}
   rm -rf ${DUMPDIR}/${dir}
done

mydumper -Cce -h ${HOST} -p ${PORT} -u ${USER} --password=${PASS} -o
${DEST} -l ${TIMEOUT}


Then my Bacula fileset for the DB-backup job just backs up the entire
/dbdumps directory.


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
  Renaissance Man, Unix ronin, Perl hacker, SQL wrangler, Free Stater
 It's not the years, it's the mileage.



Re: [Bacula-users] errors when trying to backup to 2nd HDD

2012-04-02 Thread Murray Davis
Thank you, Josh, for your response. I ended up doing two things...

1) I changed the permissions on /mnt/sdb1 using chmod 777 so everyone has
read/write/execute privilege. This seemed like overkill since I thought
that my problem was related to my labels and pools not being defined
properly. So, I did step 2.

2) I used the conf files for bacula-dir.conf and bacula-sd.conf as
described in Automated Disk Backup from the bacula manual to define my
pools.

I ran a test full backup for backupclient1, my default job, and it ran
successfully. What I found interesting is the permissions on the backup
file. Take a look...

root@cablemon sdb1/backups# ls -la
total 16736
drwxrwxrwx 2 root   bacula 4096 Apr  2 10:39 .
drwxrwxrwx 5 root   bacula 4096 Apr  2 10:38 ..
-rw-r----- 1 bacula tape   17127387 Apr  2 10:39 Full-0001
root@cablemon sdb1/backups#

I didn't even know that I had a group called tape. Here is that
group...note it has one member...bacula!

root@cablemon sdb1/backups# cat /etc/group | grep tape
tape:x:26:bacula

Thinking that permissions 777 were too open, I changed them back to 775 and
re-ran my backup job. It failed again. I checked the groups again and saw
that there was also a "disk" group with no members. I added bacula to that
group and re-ran my job. It failed. I then set my permissions back to 777
and ran my job. It succeeded.

So, even though "bacula" is a member of both "tape" and "disk", the backup
fails unless "other" has write permissions on /mnt/sdb1.

Obviously either I don't understand how permissions work, or some other
account is writing to disk...so the "other" class must have write permission.

Well, at least the backups are working. The second backup was entitled
Inc-002.


[Bacula-users] Chained copy job

2012-04-02 Thread Dennis Hoppe
Hello,

Is it possible to use chained copy jobs? For example, I would like to
copy my full backups from local disk to a USB disk and after that to
NAS storage.

Job {
  Name = "backup-all"
  JobDefs = "DefaultBackup"
  Client = backup-fd
  FileSet = "backup-all"
  Storage = backup
  Full Backup Pool = backup-monthly
  Incremental Backup Pool = backup-daily
  Differential Backup Pool = backup-weekly
}

Job {
  Name = "backup-copy-monthly-usb"
  JobDefs = "DefaultCopy"
  Client = backup-fd
  FileSet = "backup-all"
  Schedule = "MonthlyCopy"
  Storage = backup
  Pool = backup-monthly
  Selection Pattern = "
SELECT max(jobid)
FROM job
WHERE name = 'backup-all'
AND type = 'B'
AND level = 'F'
AND jobstatus = 'T';"
}

Job {
  Name = "backup-copy-monthly-nas"
  JobDefs = "DefaultCopy"
  Client = backup-fd
  FileSet = "backup-all"
  Schedule = "MonthlyCopy2"
  Storage = backup
  Pool = backup-monthly
  Selection Pattern = "
SELECT max(jobid)
FROM job
WHERE name = 'backup-copy-monthly-usb'
AND type = 'c'
AND level = 'F'
AND jobstatus = 'T';"
}

Pool {
  Name = backup-monthly
  Pool Type = Backup
  Recycle = yes
  RecyclePool = scratch
  AutoPrune = yes
  ActionOnPurge = Truncate
  Volume Retention = 2 months
  Volume Use Duration = 23 hours
  LabelFormat =
"backup-full_${Year}-${Month:p/2/0/r}-${Day:p/2/0/r}-${Hour:p/2/0/r}:${Minute:p/2/0/r}"
  Next Pool = backup-monthly-usb
}

Pool {
  Name = backup-monthly-usb
  Storage = backup-usb
  Pool Type = Backup
  Recycle = yes
  RecyclePool = scratch
  AutoPrune = yes
  ActionOnPurge = Truncate
  Volume Retention = 2 months
  Volume Use Duration = 23 hours
  LabelFormat =
"backup-full_${Year}-${Month:p/2/0/r}-${Day:p/2/0/r}-${Hour:p/2/0/r}:${Minute:p/2/0/r}"
  Next Pool = backup-monthly-nas
}

Pool {
  Name = backup-daily-nas
  Storage = backup-nas
  Pool Type = Backup
  Recycle = yes
  RecyclePool = scratch
  AutoPrune = yes
  ActionOnPurge = Truncate
  Volume Retention = 7 days
  Volume Use Duration = 23 hours
  LabelFormat =
"backup-incr_${Year}-${Month:p/2/0/r}-${Day:p/2/0/r}-${Hour:p/2/0/r}:${Minute:p/2/0/r}"
}

If I run the SQL statement from "backup-copy-monthly-nas" by hand, the
correct jobid is selected, which should get the "read storage", "write
storage" and "next pool" from the job "backup-copy-monthly-usb".
Unfortunately, bacula ignores the SQL statement and takes the jobid
from "backup-all", which ends in a duplicate copy at the storage
"backup-usb". :(

Did I do something wrong, or is bacula not able to use two different
"next pools" / "storages"?

Regards, Dennis





Re: [Bacula-users] Catalog backup while job running?

2012-04-02 Thread Stephen Thompson
On 02/06/2012 02:45 PM, Phil Stracchino wrote:
> On 02/06/2012 05:02 PM, Stephen Thompson wrote:
>> So, my question is whether anyone had any ideas about the feasibility of
>> getting a backup of the Catalog while a single "long-running" job is
>> active?  This could be in-band (database dump) or out-of-band (copy of
>> database directory on filesystem or slave database server taken
>> offline).  We are using MySQL, but would not be opposed to switching to
>> PostGRES if it buys us anything in this regard.
>>
>> What I wonder specifically (in creating my own solution) is:
>> 1) If I backup the MySQL database directory, or sync to a slave server
>> and create a dump from that, am I simply putting the active
>> "long-running" job records at risk of being incoherent, or am I risking
>> the integrity of the whole Catalog in doing so?
>> 2) If I attempt a dump of the MySQL catalog and lock the tables while
>> doing so, what will the results be to the active "long-running" job?
>> Will it crap out or simply pause and wait for database access when it
>> needs to read/write to the database?  And if so, how long will it wait?
>
> Stephen,
> Three suggestions here.
>
> Route 1:
> Set up a replication slave and perform your backups from the slave.  If
> the slave falls behind the master while you're dumping the DB, you don't
> really care all that much.  It doesn't impact your production DB.
>
> Route 2:
> If you're not using InnoDB in MySQL, you should be by now.  So look into
> the --skip-opt and --single-transaction options to mysqldump to dump all
> of the transactional tables consistently without locking them.  Your
> grant tables will still need a read lock, but hey, you weren't planning
> on rewriting your grant tables every day, were you...?
>


Well, we've made the leap from MyISAM to InnoDB; it seems we win on 
transactions but lose on read speed.

That aside, I'm seeing something unexpected.  I am now able to 
successfully run jobs while I use mysqldump to dump the bacula Catalog, 
except at the very end of the dump there is some sort of contention.  A 
few of my jobs (3-4 out of 150) that are attempting to despool 
attributes at the tail end of the dump yield this error:

Fatal error: sql_create.c:860 Fill File table Query failed: INSERT INTO 
File (FileIndex, JobId, PathId, FilenameId, LStat, MD5, DeltaSeq) SELECT 
batch.FileIndex, batch.JobId, Path.PathId, 
Filename.FilenameId,batch.LStat, batch.MD5, batch.DeltaSeq FROM batch 
JOIN Path ON (batch.Path = Path.Path) JOIN Filename ON (batch.Name = 
Filename.Name): ERR=Lock wait timeout exceeded; try restarting transaction

I have successful jobs before and after this 'end of the dump' timeframe.

It looks like I might be able to "fix" this by increasing my 
innodb_lock_wait_timeout, but I'd like to understand WHY I need to 
increase it.  Anyone know what's happening at the end of a dump like 
this that would cause the above error?

mysqldump -f --opt --skip-lock-tables --single-transaction bacula 
 >>bacula.sql

Is it the commit on this 'dump' transaction?

thanks!
Stephen





> Route 3:
> Look into an alternate DB backup solution like mydumper or Percona
> XtraBackup.
>
> Route 4:
> Do you have the option of taking a snapshot of your MySQL datadir and
> backing up the snapshot?  This can be viable if you have a small DB and
> fast copy-on-write snapshots.  (It's the technique I'm using at the
> moment, though I'm considering a switch to mydumper.)
>
>


-- 
Stephen Thompson   Berkeley Seismological Laboratory
step...@seismo.berkeley.edu215 McCone Hall # 4760
404.538.7077 (phone)   University of California, Berkeley
510.643.5811 (fax) Berkeley, CA 94720-4760



Re: [Bacula-users] Añadir varios clientes a un Job

2012-04-02 Thread Phil Stracchino
On 04/02/2012 11:10 AM, Phil Stracchino wrote:
> On 04/02/2012 10:39 AM, Juan Pablo Botero wrote:
>>
>> Hi All.
>>
>> I'm sorry for the message in Spanish before.
>>
> How can I add more than one client to a job?
> 
> You don't.  You create a job per client.

Now, what you CAN do is create a JobDefs record that contains all of the
common details about the backup jobs for all of your clients, then
create Job records that basically contain just the job name, the client
name, and which JobDefs to use.
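A minimal sketch of that layout (resource names are placeholders):

JobDefs {
   Name = "DefaultBackup"
   Type = Backup
   Level = Incremental
   FileSet = "Full Set"
   Schedule = "WeeklyCycle"
   Storage = File
   Pool = Default
   Messages = Standard
}

Job {
   Name = "client1-backup"
   Client = client1-fd
   JobDefs = "DefaultBackup"
}

Job {
   Name = "client2-backup"
   Client = client2-fd
   JobDefs = "DefaultBackup"
}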


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
  Renaissance Man, Unix ronin, Perl hacker, SQL wrangler, Free Stater
 It's not the years, it's the mileage.



Re: [Bacula-users] Best method for managing mtx "Data Transfer Element" numbers to scsi tape

2012-04-02 Thread mark . bergman
In the message dated: Mon, 02 Apr 2012 11:12:35 EDT,
the pithy ruminations from "Clark, Patricia A." on
<[Bacula-users] Best method for managing mtx "Data Transfer Element"
numbers to scsi tape> were:
=> I am new to Bacula and I am in the process of installing and configuring
=> the software.  Something that is giving me some headaches is the mtx
=> numbering on the drives vs the /dev/st assignments.  The output of the mtx
=> status on the auto changer and the device assignment is below:
=> 
=> 
=> Data Transfer Element 0:Empty  <-- /dev/st5
=> 
=> Data Transfer Element 1:Empty  <-- /dev/st1
=> 
=> Data Transfer Element 2:Empty  <-- /dev/st0
=> 
=> Data Transfer Element 3:Empty  <-- /dev/st2
=> 
=> Data Transfer Element 4:Empty  <-- /dev/st3
=> 
=> Data Transfer Element 5:Empty  <-- /dev/st4

You almost certainly want to use the non-rewinding devices (/dev/nst*).

=> 
=> 
=> Ideally, I'd like the numbering to correspond, but I'm finding that would
=> take writing some udev rules.  I don't have any experience in doing that
=> and most examples reference writing for sd.  I know I assign the index
=> number to the device in bacula-sd.  So, should the naming convention work
=> best to match the transfer element or the scsi tape number?  I've found
=> some references to the use of environment variables for the tape devices.
=> I can see where this would be helpful in scripts.  What is the "rule of
=> thumb" for addressing this issue?

The environment variable ($TAPE) is not very useful: Bacula almost
certainly ignores it, it can only be set to a single value at a time, and
it would need to be redefined after any event that might change the
correspondence between the actual hardware devices and the device files in
/dev/[n]st*.

The number corresponding to each /dev/st* file is not fixed--it will
change on reboots or SCSI device rediscovery depending on the order in
which devices are found. The numbering will be different across multiple
servers on the same SAN.

The "rule of thumb" is to find some unique information about each hardware
device (fibre WWN, serial number, etc) and create a link between the
/dev/st* (or /dev/sg* for the mtx changer device) file and the symbolic
name you will use within Bacula.

So, bacula will be configured to use, for example, /dev/tape1, /dev/tape2,
/dev/changer1, etc., where those are each symbolic links to the /dev/st*
and /dev/sg* files.
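For illustration, a bacula-sd.conf Device entry would then point at the stable symlink instead of a raw /dev/nst* node (a sketch; Media Type and the autochanger wiring are placeholders):

Device {
   Name = Drive-0
   Drive Index = 0
   Media Type = LTO-4
   Archive Device = /dev/tape0-ml6000
   Autochanger = yes
   AutomaticMount = yes
   AlwaysOpen = yes
   RemovableMedia = yes
   Random Access = no
}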

Those symbolic links must be recreated on each reboot, and potentially due to
events like SCSI bus rescans or fibre LIP resets.

The easiest way under Linux to trigger checking and recreating those
links is via udev.


Let me provide a specific example.

We've got a fibre-attached tape library with two drives. The library has
dual fibre connections. With multipathing, this means that each server on the
SAN may see 2 changer devices and 4 drives (subject to SAN zoning).

Here are our /etc/udev/rules.d/55-bacula rules:

-
KERNEL=="sg*", SUBSYSTEM=="scsi_generic", SYSFS{type}=="8", PROGRAM="scsi_id -g -u -s %p", RESULT=="1ADIC_A0C0081018_LLA_", SYMLINK+="changer-ml6000"

KERNEL=="nst*", SUBSYSTEM=="scsi_tape", PROGRAM=="/usr/bin/sginfo -s %N", RESULT=="*F0A1BBC004*", SYMLINK+="tape1-ml6000"

KERNEL=="nst*", SUBSYSTEM=="scsi_tape", PROGRAM=="/usr/bin/sginfo -s %N", RESULT=="*F0A1BBC000*", SYMLINK+="tape0-ml6000"
-

These udev rules use the "scsi_id" and "sginfo" programs to query each
"sg*" device and each "nst*" device for unique identifiers (the model/serial
number for the changer, the fibre WWN for each tape device), and then set
symbolic links using our naming convention.

Regardless of the /dev/nst* device number, or which multipath to the hardware
is discovered first, /dev/tape0-ml6000 will always be a symbolic link to the
tape drive with the fibre WWN containing the string "F0A1BBC000". I manually
determined that this string will correspond to drive 0.

I hope this helps.

Mark


=> Patti Clark
=> Information International Associates, Inc.
=> Linux Administrator and subcontractor to:
=> Research and Development Systems Support Oak Ridge National Laboratory
=> 
=> 

Mark Bergman  voice: 215-662-7310
mark.berg...@uphs.upenn.edu fax: 215-614-0266
System Administrator Section of Biomedical Image Analysis
Department of RadiologyUniversity of Pennsylvania
  PGP Key: https://www.rad.upenn.edu/sbia/bergman 



Re: [Bacula-users] restore : it blocks on empty directory with Current Dir Node has no children

2012-04-02 Thread Jacky Carimalo
After modifying src/dird/ua_tree.c and recompiling Bacula, it works now.

The fix is to comment out the offending block in src/dird/ua_tree.c:

...
static int cdcmd(UAContext *ua, TREE_CTX *tree)
{
   TREE_NODE *node;
   char cwd[2000];

   if (ua->argc != 2) {
      ua->error_msg(_("Too few or too many arguments. Try using double quotes.\n"));
      return 1;
   }
   /*
    * Rem JCA, bug : blocks with Node has no children.
    *  if (!tree_node_has_child(tree->node)) {
    *     ua->send_msg(_("Node %s has no children.\n"), tree->node->fname);
    *     return 1;
    *  }
    */

   node = tree_cwd(ua->argk[1], tree->root, tree->node);
   if (!node) {
      /* Try once more if Win32 drive -- make absolute */
      if (ua->argk[1][1] == ':') {  /* win32 drive */
         bstrncpy(cwd, "/", sizeof(cwd));
         bstrncat(cwd, ua->argk[1], sizeof(cwd));
         node = tree_cwd(cwd, tree->root, tree->node);
      }
      if (!node) {
         ua->warning_msg(_("Invalid path given.\n"));
      } else {
         tree->node = node;
      }
   } else {
      tree->node = node;
   }
   return pwdcmd(ua, tree);
}
...

Jacky

On 22/03/2012 15:29, Jacky Carimalo wrote:
> Hi,
>
> I have a problem with restore, which gives:
>
> With bat, I choose :
> Client, List jobs of client and select a Job,
> Restore from time,
> I obtain the list of files and directories,
> But when I click on an empty directory,
> I get:
>
> Current Dir Node truc has no children
>
> and even if I click on another directory, this message stays and I can't
> see the contents of other directories anymore.
>
> I have the same problem when I restore with the bconsole.
>
> (For information, brestore works for small jobs, but I have a spinning
> circle for big jobs: that's another problem.)
>
> Configuration :
>  Host: x86_64-linux-gnu -- debian 6.0.4
>  Bacula version: Bacula 5.2.6 (21 February 2012)
>  Database : PostgreSQL 9.1
>  Bat : bat 5.2.6 (21 February 2012)
>
> Does anyone have the same problem and a solution?
>
> Thanks
> Jacky
>
>
>


-- 
Jacky CARIMALO
Université de Nantes
Direction des Systèmes d'Information
Tel: 02 53 48 49 22 (internal: 22 49 22)
Fax: 02 53 48 49 09




Re: [Bacula-users] Full backup forced if client changes

2012-04-02 Thread Steve Thompson
On Sat, 24 Mar 2012, James Harper wrote:

>> more than one client is available to backup the (shared) storage. If I change
>> the name of the client in the Job definition, a full backup always occurs the
>> next time a job is run. How do I avoid this?
>
> That's definitely going to confuse Bacula. As far as it is concerned you 
> are backing up a separate client with separate storage.

I still don't follow this. The client has changed, but everything else 
(pool, storage, catalog, etc) is the same. I don't see why a full backup 
is forced, or why bacula should be even slightly confused.

Steve



Re: [Bacula-users] Añadir varios clientes a un Job

2012-04-02 Thread Phil Stracchino
On 04/02/2012 10:39 AM, Juan Pablo Botero wrote:
> 
> Hi All.
> 
> I'm sorry for the message in Spanish before.
> 
> How can I add more than one client to a job?

You don't.  You create a job per client.


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
  Renaissance Man, Unix ronin, Perl hacker, SQL wrangler, Free Stater
 It's not the years, it's the mileage.



[Bacula-users] Best method for managing mtx "Data Transfer Element" numbers to scsi tape

2012-04-02 Thread Clark, Patricia A.
I am new to Bacula and I am in the process of installing and configuring the 
software.  Something that is giving me some headaches is the mtx numbering on 
the drives vs the /dev/st assignments.  The output of the mtx status on the 
auto changer and the device assignment is below:


Data Transfer Element 0:Empty  <-- /dev/st5

Data Transfer Element 1:Empty  <-- /dev/st1

Data Transfer Element 2:Empty  <-- /dev/st0

Data Transfer Element 3:Empty  <-- /dev/st2

Data Transfer Element 4:Empty  <-- /dev/st3

Data Transfer Element 5:Empty  <-- /dev/st4


Ideally, I'd like the numbering to correspond, but I'm finding that would take 
writing some udev rules.  I don't have any experience in doing that and most 
examples reference writing for sd.  I know I assign the index number to the 
device in bacula-sd.  So, should the naming convention work best to match the 
transfer element or the scsi tape number?  I've found some references to the use 
of environment variables for the tape devices.  I can see where this would be 
helpful in scripts.  What is the "rule of thumb" for addressing this issue?

Patti Clark
Information International Associates, Inc.
Linux Administrator and subcontractor to:
Research and Development Systems Support Oak Ridge National Laboratory




Re: [Bacula-users] Añadir varios clientes a un Job

2012-04-02 Thread Juan Pablo Botero
Hi All.

I'm sorry for the message in Spanish before.

How can I add more than one client to a job?

Thanks.






-- 
Regards:
Juan Pablo Botero
Systems Administrator
Fedora Ambassador for Colombia
http://www.jpilldev.net


Re: [Bacula-users] errors when trying to backup to 2nd HDD

2012-04-02 Thread Josh Fisher

On 3/30/2012 5:54 PM, Murray Davis wrote:
> ...
> Here are the permissions for my second hard drive...
>
> root@cablemon /mnt/sdb1# ls -la
> total 28
> drwxrwxr-x 4 root bacula  4096 Mar 30 10:14 .
> drwxrwxr-x 3 root bacula  4096 Mar 29 15:10 ..
> drwxrwxr-x 2 root bacula  4096 Mar 30 10:14 backups
> drwx------ 2 root root   16384 Mar 29 15:01 lost+found

Most likely, the storage daemon is running as uid=bacula gid=disk, and 
so does not have write permissions on /mnt/sdb1. The permissions must be 
such that the storage daemon, not the director, has read/write access.
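For example, assuming the storage daemon on this system really does run as bacula:disk (verify with ps), something like:

# chown -R bacula:disk /mnt/sdb1/backups
# chmod -R 770 /mnt/sdb1/backups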





Re: [Bacula-users] RunScript and Redirecting output.

2012-04-02 Thread Marco van Wieringen
Rob Becker  2co.com> writes:

> 
> I'd assume putting everything into a shell script would work, but that's
> exactly what I'm trying to get away from.  Putting a new script on every
> server would require change control, documentation, etc.  If I'm able to
> control everything from the Director I can sidestep some of those
> processes.
> 
Other than being an interesting way of getting around proper change
control, it isn't going to work without shell scripts or other trickery.

Each runscript in bacula that is not a console command is run
via a bpipe (not the plugin), i.e. the program forks and performs
an execvp in the child, and the output is captured in the parent
via a pipe. So any fancy shell escapes won't work, as execvp
simply executes the arguments you give it without a shell
being invoked. The only way around it is probably via
a sh -c "cmdline", and given that you are doing rather elementary
things you might get away with that. In that case you execvp a shell,
and then you can do the shell redirection etc.
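Applied to the RunScript from the original post, that would look something like this (an untested sketch; note the single quotes, and that % still has to be doubled in the Director config):

RunScript {
   RunsWhen = Before
   Runs On Client = Yes
   Command = "/bin/sh -c '/bin/hostname > /usr/local/bacula/working/restore_file'"
   Command = "/bin/sh -c '/bin/date +%%F >> /usr/local/bacula/working/restore_file'"
}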

Marco




Re: [Bacula-users] RunScript and Redirecting output.

2012-04-02 Thread Rob Becker
I'd assume putting everything into a shell script would work, but that's 
exactly what I'm trying to get away from.  Putting a new script on every server 
would require change control, documentation, etc.  If I'm able to control 
everything from the Director I can sidestep some of those processes.




-Original Message-
From: Graham Keeling [mailto:gra...@equiinet.com]
Sent: Monday, April 02, 2012 7:04 AM
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] RunScript and Redirecting output.

On Mon, Apr 02, 2012 at 10:56:43AM +, Rob Becker wrote:
> RunScript {
> RunsWhen  = Before
> Runs On Client = Yes
> Command = "/bin/echo `/bin/hostname`  >
> /usr/local/bacula/working/restore_file"
>Command = "/bin/date +%%F >> /usr/local/bacula/working/restore_file"
>   }
...
> It looks like Bacula just ignores everything after, and including, the
> greater than sign.

I think that the problem is probably that your greater than signs are shell 
features.
You could try putting your commands into a shell script and running the script 
instead.






Re: [Bacula-users] RunScript and Redirecting output.

2012-04-02 Thread Graham Keeling
On Mon, Apr 02, 2012 at 10:56:43AM +, Rob Becker wrote:
> RunScript {
> RunsWhen  = Before
> Runs On Client = Yes
> Command = "/bin/echo `/bin/hostname`  >
> /usr/local/bacula/working/restore_file"
>Command = "/bin/date +%%F >> /usr/local/bacula/working/restore_file"
>   }
...
> It looks like Bacula just ignores everything after, and including, the
> greater than sign.

I think that the problem is probably that your greater than signs are shell
features.
You could try putting your commands into a shell script and running the script
instead.
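For example (the script path is hypothetical, and the script has to be deployed on each client; note that inside a plain shell script the % no longer needs doubling):

#!/bin/sh
# /usr/local/bacula/scripts/make_restore_file.sh
/bin/hostname >  /usr/local/bacula/working/restore_file
/bin/date +%F >> /usr/local/bacula/working/restore_file

and in the Job:

RunScript {
   RunsWhen = Before
   Runs On Client = Yes
   Command = "/usr/local/bacula/scripts/make_restore_file.sh"
}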




[Bacula-users] RunScript and Redirecting output.

2012-04-02 Thread Rob Becker
Hello all,

I'm hoping someone will be able to help me solve a problem that's been causing 
me some frustration over the weekend.  I'm working on a process to automate 
restore jobs to confirm the validity of the backup jobs.  The restore job will 
restore a single file that gets created during the backup process.  I'm trying 
to use RunScript to create that file and I'm having a very difficult time 
getting RunScript to redirect the output of the command to a file.

Here is the example of what I'm trying to accomplish.

RunScript {
RunsWhen  = Before
Runs On Client = Yes
Command = "/bin/echo `/bin/hostname`  >
/usr/local/bacula/working/restore_file"
   Command = "/bin/date +%%F >> /usr/local/bacula/working/restore_file"
  }


I've tried many different forms of escaping the characters to no avail.  It 
looks like Bacula just ignores everything after, and including, the greater 
than sign.

Does anyone have any ideas on how I can get Bacula to redirect output from 
a command to a file?

Thanks.
Rob Becker



Re: [Bacula-users] Multi-cores compression

2012-04-02 Thread Alan Brown
On 30/03/12 09:39, Alex Crow wrote:

> We tried removing the compression on some jobs, and we got a great speed
> boost. However, the SSL compression was either absent or minimal, even
> though OpenSSL libs are compiled with zlib:

They probably use Z0 or Z1 for best speed.

If that's the case, the tape drive hardware compression will still have 
some effect.






[Bacula-users] Añadir varios clientes a un Job

2012-04-02 Thread Juan Pablo Botero
Greetings.

I would like to know how to add several clients to a JobDefs.

Thanks.



-- 
Regards:
Juan Pablo Botero
Systems Administrator
Fedora Ambassador for Colombia
http://www.jpilldev.net