[Bacula-users] How to get this autochanger work with bacula

2011-01-20 Thread harryl
Thanks.  But after I installed mtx on my Linux server and tried the label
command, it gave me a different error:

*label
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
Automatically selected Storage: LTO-2
Connecting to Storage daemon LTO-2 at cole:9103 ...
Enter new Volume name: testvol
Enter slot (0 or Enter for none): 
Defined Pools:
 1: Default
 2: File
 3: Scratch
Select the Pool (1-3): 1
Connecting to Storage daemon LTO-2 at cole:9103 ...
Sending label command for Volume "testvol" Slot 0 ...
Invalid slot=0 defined in catalog for Volume "" on "LTO-2-drive" (/dev/nst0). 
Manual load may be required.
3301 Issuing autochanger "loaded? drive 0" command.
3991 Bad autochanger "loaded? drive 0" command: ERR=Child exited with code 1.
Results=mtx: Request Sense: Long Report=yes
mtx: Request Sense: Valid Residual=no
mtx: Request Sense: Error Code=0 (Unknown?!)
mtx: Request Sense: Sense Key=No Sense
mtx: Request Sense: FileMark=no
mtx: Request Sense: EOM=no
mtx: Request Sense: ILI=no
mtx: Request Sense: Additional Sense Code = 00
mtx: Request Sense: Additional Sense Qualifier = 00
mtx: Request Sense: BPV=no
mtx: Request Sense: Error in CDB=no
mtx: Request Sense:3910 Unable to open device "LTO-2-drive" (/dev/nst0): 
ERR=dev.c:491 Unable to open device "LTO-2-drive" (/dev/nst0): ERR=No medium 
found

Label command failed for Volume testvol.
Do not forget to mount the drive!!!
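For comparison, a minimal bacula-sd.conf autochanger setup looks roughly like this (the changer device path and mtx-changer location are assumptions; adjust them to your system):

```
Autochanger {
  Name = "LTO-2-changer"
  Device = LTO-2-drive
  Changer Device = /dev/sg1            # SCSI pass-through device the changer answers on
  Changer Command = "/etc/bacula/mtx-changer %c %o %S %a %d"
}

Device {
  Name = LTO-2-drive
  Media Type = LTO-2
  Archive Device = /dev/nst0
  Autochanger = yes
  AutomaticMount = yes
  RemovableMedia = yes
  RandomAccess = no
}
```

The "ERR=Child exited with code 1" above means the Changer Command itself failed, so the changer/device pairing is the first thing to verify.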

+--
|This was sent by har...@zoomedia.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--



--
Protect Your Site and Customers from Malware Attacks
Learn about various malware tactics and how to avoid them. Understand 
malware threats, the impact they can have on your business, and how you 
can protect your company and customers by using code signing.
http://p.sf.net/sfu/oracle-sfdevnl
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] LTO-3 tape Not Compressing data and speed

2011-01-20 Thread Jose Antonio Rodriguez Martin
The files copied to tape are files that Bacula created, and those are
already compressed:

FileSet:

Include { Options { Signature = MD5; Compression = GZIP5 } ... }

What we have are two sets of backups:
1. Bacula's files, copied to hard disk
2. those same files, copied to tape.

To confirm what I said, I will copy uncompressed files to tape. That
should establish whether 417 GB really is the maximum that compression
allows.

As for the quality of the tapes, I have no way of knowing. They are
HP brand; the specific model is "HEWLETT-PACKARD C7973AN LTO", sold in
packs of 20 LTO-3 tapes.


2011/1/18 Phil Stracchino 

> On 01/18/11 02:24, Jose Antonio Rodriguez Martin wrote:
> > Nothing, it still does not copy more than 417 GB ...
>
> If you're getting 417GB onto a 400GB tape, you're getting compression.
> You're just not getting MUCH of it.
>
> What kind of data are you backing up?  Not all data compresses well.
> English text compresses about 4:1 on average.  Binaries, typically not
> much at all.  Digital images, digital video, zip archives, MP3 audio?
> Forget it.  Databases?  Sometimes you get lucky and get 10%-15%
> compression, sometimes they don't compress at all.
>
>
> --
>  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
>  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
> Renaissance Man, Unix ronin, Perl hacker, Free Stater
> It's not the years, it's the mileage.
>
>



-- 
Jose Antonio Rodriguez Martin
CGI, S.A.
Teléfono: 607372465
skype: jarm-cgi
e-mail: jarodrig...@cgi.es


Re: [Bacula-users] Software compression: None

2011-01-20 Thread Martin Simmons
> On Wed, 19 Jan 2011 09:47:32 -0500 (EST), Steve Thompson said:
> 
> On Tue, 18 Jan 2011, Dan Langille wrote:
> 
> > On 1/18/2011 4:16 PM, Steve Thompson wrote:
> >> Whether software compression happens or not seems to be random. Anyone
> >> know why this is happening?
> >
> > There was a discussion this week about this.  Add Signature to your options.
> 
> I will certainly try that, but I'm not sure that this is the whole story, 
> and in any event I do not want to have to introduce signatures long-term. 
> The issue is that software compression sometimes happens, sometimes not, 
> with no changes in any configuration. Looks like a bug to me.

It reports "None" if there were no files in the backup or if the compression
saved less than 0.5%, so it doesn't necessarily mean that it wasn't attempted.
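That cutoff can be illustrated with a tiny sketch (hypothetical numbers; this mimics the reporting rule described above, not Bacula's actual code):

```sh
# report_compression RAW_BYTES STORED_BYTES
# Prints "None" when the saving is under 0.5%, mirroring the rule above.
report_compression() {
  awk -v raw="$1" -v stored="$2" 'BEGIN {
    saved = 100 * (1 - stored / raw)
    if (saved < 0.5) print "None"
    else printf "%.1f%%\n", saved
  }'
}

report_compression 1000000 998000   # saved only 0.2% -> None
report_compression 1000000 400000   # saved 60% -> 60.0%
```

So a job whose stored bytes are within half a percent of the raw bytes reports "None" even if compression ran.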

__Martin



Re: [Bacula-users] Trouble compiling bacula 5.0.3 under Solaris 10

2011-01-20 Thread Martin Simmons
> On Wed, 19 Jan 2011 17:16:47 -0800, Kenneth Garges said:
> 
> Thanks for your tips. Unfortunately still no joy. 
> 
> Switched to the latest Sun Studio compiler instead of gcc. 
> 
> I made my config script close to yours changing only a couple paths. 
> #! /bin/sh
> # Script from Gary Schmidt for compiling bacula under Solaris 10
> # Entered Wednesday, January 19, 2011 Kenneth Garges 
> # Was
> #PATH=/opt/webstack/bin:/opt/webstack/mysql/bin:/bin:/usr/bin:/opt/SUNWspro/bin:/usr/ccs/bin:/sbin:/usr/sbin:/usr/local/bin:$HOME/src/bacula/depkgs-qt/qt-x11-opensource-src-4.3.4/bin
> # Modified for us
> PATH=/opt/mysql/mysql/bin:/bin:/usr/bin:/opt/SUNWexpo/bin:/usr/ccs/bin:/sbin:/usr/sbin:/usr/local/bin
> export PATH
> ./configure --build=sparc64-sun-solaris2.10 --host=sparc64-sun-solaris2.10 \
>  CC=cc CXX=CC \
>  CFLAGS="-g -O" \
>  LDFLAGS="-L/opt/mysql/lib/mysql -R/opt/mysql/lib/mysql -L/usr/sfw/lib 
> -R/usr/sfw/lib" \ 

The problem is that the LDFLAGS=... line has a trailing space, so the
newline isn't escaped.  The shell therefore drops all of the remaining
arguments, which also causes the build, host and target warnings.

You shouldn't need the --build and --host arguments when this is fixed.
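The failure mode is easy to reproduce outside of configure; a backslash followed by a space escapes the space, not the newline (a throwaway demonstration, nothing Bacula-specific):

```sh
d=$(mktemp -d)

# Good: the backslash is the very last character on the line,
# so it escapes the newline and the command continues.
printf 'echo one \\\ntwo\n' > "$d/good.sh"

# Bad: one trailing space after the backslash, as in the LDFLAGS line.
# The backslash escapes the space instead, the newline ends the command,
# and "two" then runs as a separate (nonexistent) command.
printf 'echo one \\ \ntwo\n' > "$d/bad.sh"

sh "$d/good.sh"                     # prints "one two"
sh "$d/bad.sh" 2>/dev/null || true  # "two" never reaches echo
```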

__Martin



Re: [Bacula-users] fstype=bind being ignored

2011-01-20 Thread Martin Simmons
> On Wed, 19 Jan 2011 15:48:13 +0100, Frank Altpeter said:
> 
> After a short discussion on the channel, I was advised to create a
> post here to see if there's help for that or if it's really a bug.
> 
> I've got a machine that is using bind mounts. Despite the fact that
> the fileset definition is using "onefs=no" and "fstype=ext3", the bind
> mounts' content is saved which results in multiple saves of the same
> content.
> 
> I know I could simply add the relevant mount points on the exclude
> list in the fileset, but I think it would make sense if this is
> considered like other fstype configurations as well, since I don't
> like to manually tweak the fileset on every possible change of the
> bind mounts, and to keep the default fileset simple and generic.
> 
> I've put some addional information on http://racoon.pastebin.com/r44RTxyP
> 
> The bacula-fd has version 2.4.4 on opensuse 11.1 and the server is
> running 5.0.3, on SLES 11.1
> 
> 
> Any hints appreciated.

Bacula implements onefs by looking for changes in the stat.st_dev (Device)
field and implements fstype by calling statfs.  Unfortunately, both of these
return the values associated with the target directory for bind mounts, so the
mount point looks like a normal directory to Bacula.

You can see this by comparing the output of the following commands for / and /backup:

stat / /backup
stat -f / /backup

I don't know if that counts as a bug or not.
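Until it is treated specially, the exclude-list workaround mentioned earlier in the thread looks roughly like this (a sketch; only /backup is taken from this thread, the rest of the FileSet is assumed):

```
FileSet {
  Name = "root-without-bind-mounts"
  Include {
    Options {
      signature = MD5
      onefs = no
      fstype = ext3
    }
    File = /
  }
  Exclude {
    # Each bind-mount target has to be listed by hand, because Bacula
    # cannot tell bind mounts apart from ordinary directories.
    File = /backup
  }
}
```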

__Martin



Re: [Bacula-users] SD Losing Track of Pool

2011-01-20 Thread Martin Simmons
> On Tue, 18 Jan 2011 08:48:56 -0700, Peter Zenge said:
> 
> A couple days ago somebody made a comment that using pool overrides in a
> schedule was deprecated.  I've been using them for years, but I've been
> seeing a strange problem recently that I'm thinking might be related.
> 
> I'm running 5.0.2 on Debian, separate Dir/Mysql and SD systems, using files
> on an array.  I'm backing up several TB a week, but over a slow 25Mbps link,
> so some of my full jobs run for a very long time.  Concurrency is key.  I
> normally run 4 jobs at a time on my SD, and I spool (yes, probably
> unnecessary, but because the data is coming in so slowly, I feel better
> about writing it to volumes in big chunks.)
> 
> Right now I have one job actively running, with 4 more waiting on the SD.
> As I mentioned before, usually 4 are running concurrently, but I frequently
> see less than 4 but have never really dug into it.  In the output below,
> note that the SD is running 4 (actually 5!) jobs, but only one is actually
> writing to the spool.  Two things jump out at me here: First, of the 5
> running jobs, two are correctly noted as being for LF-Full, and 3 for LF-Inc
> (pool for Full backups and pool for Incremental backups respectively).
> However, all 5 show the same volume (LF-F-0239, which is only in the LF-Full
> pool, and is currently being written to by the correctly-running job).
> Second, in the Device Status section at the bottom, the pool of LF-F-0239 is
> listed as "*unknown*"; similarly, under "Jobs waiting to reserve a drive",
> each job wants the correct pool, but the current pool is listed as "".

The reporting of pools in the SD might be a little wrong, because it doesn't
really have that information, but I think the fundamental problem is that you
only have one SD device.  That is limiting concurrency because an SD device
can only mount one volume at a time (even for file devices).
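One common way around that single-mount limit (a sketch; resource names and the path are assumptions based on the job log in this thread) is to define several file Devices on the same directory, so each concurrent job can mount its own volume:

```
Device {
  Name = FileDev1
  Media Type = File
  Archive Device = /mnt/backup1/data5
  Random Access = yes
  Automatic Mount = yes
  Label Media = yes
}

Device {
  Name = FileDev2
  Media Type = File
  Archive Device = /mnt/backup1/data5
  Random Access = yes
  Automatic Mount = yes
  Label Media = yes
}
```

Each Device then needs a corresponding Storage resource in the Director; the underlying point is simply that N concurrently writing jobs need N mountable devices.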

__Martin



Re: [Bacula-users] Software compression: None

2011-01-20 Thread Steve Thompson
On Thu, 20 Jan 2011, Martin Simmons wrote:

> It reports "None" if there were no files in the backup or if the compression
> saved less than 0.5%, so it doesn't necessarily mean that it wasn't attempted.

I understand that, but I have several file sets that, for a full backup 
level, sometimes give in the region of 60% compression and sometimes none, 
depending on wind direction. It seems to be about 50/50 whether 
compression is used or not.

Steve



Re: [Bacula-users] How to get this autochanger work with bacula

2011-01-20 Thread Dan Langille
On 1/20/2011 3:09 AM, harryl wrote:
> Thanks.  But after I installed mtx on my Linux server and tried the label
> command, it gave me a different error:

Getting the autochanger working can be a long task.  I documented my
steps.  Start with:

  http://www.freebsddiary.org/tape-library.php

And then perhaps:

  http://www.freebsddiary.org/tape-library-integration.php

Make sure each command works properly before moving on.
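The usual low-level checks look like this (the /dev/sg1 and /dev/nst0 names are assumptions; use whatever lsscsi or dmesg reports on your system):

```
mtx -f /dev/sg1 status        # does the changer list its slots and drives?
mtx -f /dev/sg1 load 1 0      # load the tape from slot 1 into drive 0
mt  -f /dev/nst0 status       # does the drive now report a tape online?
mtx -f /dev/sg1 unload 1 0    # put the tape back in its slot
```

Only once all of these work by hand is it worth pointing Bacula's mtx-changer script at the same devices.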

-- 
Dan Langille - http://langille.org/



Re: [Bacula-users] Software compression: None

2011-01-20 Thread Dan Langille
On 1/20/2011 7:24 AM, Steve Thompson wrote:
> On Thu, 20 Jan 2011, Martin Simmons wrote:
>
>> It reports "None" if there were no files in the backup or if the compression
>> saved less than 0.5%, so it doesn't necessarily mean that it wasn't 
>> attempted.
>
> I understand that, but I have several file sets that, for a full backup
> level, sometimes give in the region of 60% compression and sometimes none,
> depending on wind direction. It seems to be about 50/50 whether
> compression is used or not.

Time for new eyes.  Post the job emails.

-- 
Dan Langille - http://langille.org/



Re: [Bacula-users] Software compression: None

2011-01-20 Thread Steve Thompson
On Thu, 20 Jan 2011, Dan Langille wrote:

> On 1/20/2011 7:24 AM, Steve Thompson wrote:
>> On Thu, 20 Jan 2011, Martin Simmons wrote:
>> 
>>> It reports "None" if there were no files in the backup or if the 
>>> compression
>>> saved less than 0.5%, so it doesn't necessarily mean that it wasn't 
>>> attempted.
>> 
>> I understand that, but I have several file sets that, for a full backup
>> level, sometimes give in the region of 60% compression and sometimes none,
>> depending on wind direction. It seems to be about 50/50 whether
>> compression is used or not.
>
> Time for new eyes.  Post the job emails.

I'm re-running all of the jobs with a signature added. Will post in a 
couple of days when it's done.

Steve



Re: [Bacula-users] Software compression: None

2011-01-20 Thread Steve Thompson
On Thu, 20 Jan 2011, Dan Langille wrote:

> Time for new eyes.  Post the job emails.

One full backup completed. Here are the relevant definitions:

Job {
   Name = "bear_data15"
   JobDefs = "defjob"
   Pool = Pool_bear_data15
   Write Bootstrap = "/var/lib/bacula/bear_data15.bsr"
   Client = bear-fd
   FileSet = "bear_data15"
   Schedule = "Saturday1"
}

FileSet {
   Name = "bear_data15"
   Include {
 Options {
   compression = GZIP
   signature = MD5
   sparse = yes
   noatime = yes
 }
 Options {
   exclude = yes
   wilddir = "*/.NetBin"
   wilddir = "*/.Trash"
   wilddir = "*/.nbs"
   wilddir = "*/.maildir/.spam"
 }
 File = /mnt/bear/data15
   }
}

Storage {
   Name = Storage_bear_data15
   Address = 
   SD Port = 9103
   Password = ""
   Device = Data_bear_data15
   Media Type = Media_bear_data15
   Maximum Concurrent Jobs = 1
   TLS Enable = yes
   TLS Require = Yes
   TLS CA Certificate File = 
   TLS Certificate = 
   TLS Key = 
}

Pool {
   Name = Pool_bear_data15
   Storage = Storage_bear_data15
   Pool Type = Backup
   Recycle = yes
   Recycle Oldest Volume = yes
   Auto Prune = yes
   Volume Retention = 6 weeks
   Maximum Volumes = 300
   Maximum Volume Bytes = 4g
   Label Format = "bear_data15-"
}

and the latest job e-mail:

19-Jan 22:00 cbe-dir JobId 8749: No prior Full backup Job record found.
19-Jan 22:00 cbe-dir JobId 8749: No prior or suitable Full backup found in 
catalog. Doing FULL backup.
19-Jan 22:30 cbe-dir JobId 8749: Start Backup JobId 8749, 
Job=bear_data15.2011-01-19_22.00.01_22
19-Jan 22:31 cbe-dir JobId 8749: Created new Volume "bear_data15-21646" in 
catalog.
19-Jan 22:31 cbe-dir JobId 8749: Using Device "Data_bear_data15"
19-Jan 22:31 backup1-sd JobId 8749: Labeled new Volume "bear_data15-21646" on 
device "Data_bear_data15" (/mnt/backup1/data5).
...
20-Jan 05:52 backup1-sd JobId 8749: Job write elapsed time = 07:20:18, Transfer 
rate = 14.68 M Bytes/second
20-Jan 05:52 cbe-dir JobId 8749: Bacula cbe-dir 5.0.2 (28Apr10): 20-Jan-2011 
05:52:43
   Build OS:   x86_64-redhat-linux-gnu redhat
   JobId:  8749
   Job:bear_data15.2011-01-19_22.00.01_22
   Backup Level:   Full (upgraded from Incremental)
   Client: "bear-fd" 5.0.2 (28Apr10) 
x86_64-redhat-linux-gnu,redhat,
   FileSet:"bear_data15" 2011-01-19 22:00:01
   Pool:   "Pool_bear_data15" (From Job resource)
   Catalog:"BackupCatalog" (From Client resource)
   Storage:"Storage_bear_data15" (From Pool resource)
   Scheduled time: 19-Jan-2011 22:00:01
   Start time: 19-Jan-2011 22:31:00
   End time:   20-Jan-2011 05:52:43
   Elapsed time:   7 hours 21 mins 43 secs
   Priority:   10
   FD Files Written:   171,826
   SD Files Written:   171,826
   FD Bytes Written:   387,918,677,223 (387.9 GB)
   SD Bytes Written:   387,949,527,809 (387.9 GB)
   Rate:   14636.8 KB/s
   Software Compression:   None
   VSS:no
   Encryption: no
   Accurate:   no
   Volume name(s): 
   Volume Session Id:  124
   Volume Session Time:1295305183
   Last Volume Bytes:  1,695,092,917 (1.695 GB)
   Non-fatal FD errors:0
   SD Errors:  0
   FD termination status:  OK
   SD termination status:  OK
   Termination:Backup OK

20-Jan 05:52 cbe-dir JobId 8749: Begin pruning Jobs older than 1 month 5 days .
20-Jan 05:52 cbe-dir JobId 8749: No Jobs found to prune.
20-Jan 05:52 cbe-dir JobId 8749: Begin pruning Jobs.
20-Jan 05:52 cbe-dir JobId 8749: No Files found to prune.
20-Jan 05:52 cbe-dir JobId 8749: End auto prune.

The Bacula installation was created from RPMs that I built myself from
the source RPM (bacula-5.0.2-1.src.rpm), and zlib is included:

#  ldd /usr/sbin/bacula-fd| grep libz
 libz.so.1 => /usr/lib64/libz.so.1 (0x003731c0)

-Steve



Re: [Bacula-users] Problem with duplicate files in catalog

2011-01-20 Thread Mark Round
Hi all,

I have now tried this on a network client running the same version of Bacula
(5.0.2), and I still get the problem. However, with a local job (e.g. the
Bacula server backing itself up) it doesn't happen: files only appear once
in the catalog.

I thought it might be a time-related issue, but I can confirm that all systems 
involved are synched to the same NTP servers.

Does anyone else have any pointers or ideas as to what might be going wrong?

Many thanks,

-Mark


On 18 Jan 2011, at 09:40, Mark Round wrote:


Hi all,
 
I have a strange issue with one of my Bacula servers, as it seems to be backing 
up files on my clients twice during a session. I first noticed this on Bacula 
2.4.4 (Debian Lenny), but have just upgraded the server to 5.0.2 (Debian 
Squeeze), and the problem persists. I first noticed it because the disk volumes 
(I'm using HD backup, not to tape) for full backups were twice the size of the 
actual systems being backed up - e.g. a system using 4GB ended up with an 8GB 
backup volume.
 
When I examine the backups with BAT, I can see multiple copies of the same file 
for each backup job. For example, I have attached a screenshot showing a file 
(/var/log/messages) on one system - you can see different file Ids, but it's 
included twice. I thought it might be a catalog issue, so as an extreme 
measure, I dropped the catalog DB (MySQL), recreated empty tables and started 
everything from scratch again. The problem persisted, and I still have files 
being backed up twice.
 
Just wondering if anyone can shed some light on this issue ? Apart from files 
being included twice, everything else seems to work OK... I will of course post 
a follow up with any solutions!
 
Regards,  
 
-Mark
 




Re: [Bacula-users] Problem with duplicate files in catalog

2011-01-20 Thread Mark Round
Fixed it!

Turns out that there was a duplicate Include directive inside one of the 
FileSets, causing things to be backed up twice. I removed this, and now things 
are looking much better!
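For anyone hitting the same symptom: the broken configuration was of this general shape (a reconstruction for illustration, not the actual FileSet):

```
FileSet {
  Name = "example"
  Include {
    Options { Signature = MD5 }
    File = /home
  }
  # A second Include covering the same path makes Bacula walk it again,
  # doubling both the volume size and the catalog entries:
  Include {
    Options { Signature = MD5 }
    File = /home
  }
}
```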

-Mark


On 18 Jan 2011, at 09:40, Mark Round wrote:


[quoted message trimmed; it appears in full earlier in this thread]




Re: [Bacula-users] How to get this autochanger work with bacula

2011-01-20 Thread John Drescher
On Thu, Jan 20, 2011 at 3:09 AM, harryl  wrote:
> Thanks.  But after I installed mtx on my Linux server and tried the label
> command, it gave me a different error:
>
> *label
> Automatically selected Catalog: MyCatalog
> Using Catalog "MyCatalog"
> Automatically selected Storage: LTO-2
> Connecting to Storage daemon LTO-2 at cole:9103 ...
> Enter new Volume name: testvol
> Enter slot (0 or Enter for none):
> Defined Pools:
>     1: Default
>     2: File
>     3: Scratch
> Select the Pool (1-3): 1
> Connecting to Storage daemon LTO-2 at cole:9103 ...
> Sending label command for Volume "testvol" Slot 0 ...
> Invalid slot=0 defined in catalog for Volume "" on "LTO-2-drive" (/dev/nst0). 
> Manual load may be required.
> 3301 Issuing autochanger "loaded? drive 0" command.
> 3991 Bad autochanger "loaded? drive 0" command: ERR=Child exited with code 1.
> Results=mtx: Request Sense: Long Report=yes
> mtx: Request Sense: Valid Residual=no
> mtx: Request Sense: Error Code=0 (Unknown?!)
> mtx: Request Sense: Sense Key=No Sense
> mtx: Request Sense: FileMark=no
> mtx: Request Sense: EOM=no
> mtx: Request Sense: ILI=no
> mtx: Request Sense: Additional Sense Code = 00
> mtx: Request Sense: Additional Sense Qualifier = 00
> mtx: Request Sense: BPV=no
> mtx: Request Sense: Error in CDB=no
> mtx: Request Sense:3910 Unable to open device "LTO-2-drive" (/dev/nst0): 
> ERR=dev.c:491 Unable to open device "LTO-2-drive" (/dev/nst0): ERR=No medium 
> found
>

Two things:

1. It appears that the autochanger did not load the tape. You need to
track this down before you can continue. I would start at Dan's blog.
There is also an autochanger testing section in the manual, and a
testing tool for that as well.

2. If you have an autochanger with a barcode reader, you do not run
label. Instead run "label barcodes" and, if you want, put all the tapes
in the Scratch pool.

John



Re: [Bacula-users] LTO-3 tape Not Compressing data and speed

2011-01-20 Thread John Drescher
> The files are copied to tape files created by the bacula. Which are
> compressed:
>

So then what you are seeing is the expected result. Remember you
generally do not get good compression ratios on a second (or
subsequent) compression of a data set. After the first pass most of
the redundancy has been removed.

John



[Bacula-users] Very low performance with compression and encryption !

2011-01-20 Thread Hugo Letemplier
Hi,

I am running Bacula 5.0.3 on CentOS 5.6.

When I run a simple job I get rates between 20 and 40 MB/s over a
Gigabit network, but when I run the same job with client-side encryption
and compression everything slows down to below 5 MB/s, and sometimes
under 500 KB/s.
Generally I test with a full backup job and then an incremental;
both are very slow.
What should I check?  Maybe it's my FileSet?
I have used both Mac OS X and CentOS clients, with equivalent slowness.
Has anybody else seen this problem?
Could it come from the zlib or OpenSSL that I build against?
I am practically sure it is coming from the File Daemon.
I use compression level 3 of 10.

Thank you very much

Hugo
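For reference, the GZIP level mentioned above is selected per FileSet in the client's Options clause; a sketch (the FileSet name matches the truncated one below, the path is an assumption):

```
FileSet {
  Name = "MacFull"
  Include {
    Options {
      Compression = GZIP3   # lower levels cost far less client CPU than GZIP9
      Signature = MD5
    }
    File = /Users
  }
}
```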

I attach the config I used to generate the RPM:
#!/bin/sh
cat << __EOC__
$ ./configure  '--host=i686-redhat-linux-gnu'
'--build=i686-redhat-linux-gnu' '--target=i386-redhat-linux'
'--program-prefix=' '--exec-prefix=/usr' '--bindir=/usr/bin'
'--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include'
'--libdir=/usr/lib' '--libexecdir=/usr/libexec' '--localstatedir=/var'
'--sharedstatedir=/usr/com' '--infodir=/usr/share/info'
'--prefix=/usr' '--sbindir=/usr/sbin' '--sysconfdir=/etc/bacula'
'--mandir=/usr/share/man' '--with-scriptdir=/usr/lib/bacula'
'--with-working-dir=/var/lib/bacula'
'--with-plugindir=/usr/lib/bacula' '--with-pid-dir=/var/run'
'--with-subsys-dir=/var/lock/subsys' '--enable-smartalloc'
'--disable-gnome' '--disable-bwx-console' '--disable-tray-monitor'
'--disable-conio' '--enable-readline' '--with-postgresql'
'--disable-bat' '--with-dir-user=bacula' '--with-dir-group=bacula'
'--with-sd-user=bacula' '--with-sd-group=disk' '--with-fd-user=root'
'--with-fd-group=bacula'
'--with-dir-password=XXX_REPLACE_WITH_DIRECTOR_PASSWORD_XXX'
'--with-fd-password=XXX_REPLACE_WITH_CLIENT_PASSWORD_XXX'
'--with-sd-password=XXX_REPLACE_WITH_STORAGE_PASSWORD_XXX'
'--with-mon-dir-password=XXX_REPLACE_WITH_DIRECTOR_MONITOR_PASSWORD_XXX'
'--with-mon-fd-password=XXX_REPLACE_WITH_CLIENT_MONITOR_PASSWORD_XXX'
'--with-mon-sd-password=XXX_REPLACE_WITH_STORAGE_MONITOR_PASSWORD_XXX'
'--with-openssl' 'build_alias=i686-redhat-linux-gnu'
'host_alias=i686-redhat-linux-gnu' 'target_alias=i386-redhat-linux'
'CFLAGS=-O2 -g -m32 -march=i386 -mtune=generic
-fasynchronous-unwind-tables' 'CXXFLAGS=-O2 -g -m32 -march=i386
-mtune=generic -fasynchronous-unwind-tables'

Configuration on Thu Oct 14 13:33:48 CEST 2010:

   Host: i686-redhat-linux-gnu -- redhat
   Bacula version: Bacula 5.0.3 (30 August 2010)
   Source code location: .
   Install binaries: /usr/sbin
   Install libraries: /usr/lib
   Install config files: /etc/bacula
   Scripts directory: /usr/lib/bacula
   Archive directory: /tmp
   Working directory: /var/lib/bacula
   PID directory: /var/run
   Subsys directory: /var/lock/subsys
   Man directory: /usr/share/man
   Data directory: /usr/share
   Plugin directory: /usr/lib/bacula
   C Compiler: gcc 4.1.2
   C++ Compiler: /usr/bin/g++ 4.1.2
   Compiler flags: -O2 -g -m32 -march=i386 -mtune=generic -fasynchronous-unwind-tables -fno-strict-aliasing -fno-exceptions -fno-rtti
   Linker flags:
   Libraries: -lpthread -ldl
   Statically Linked Tools: no
   Statically Linked FD: no
   Statically Linked SD: no
   Statically Linked DIR: no
   Statically Linked CONS: no
   Database type: PostgreSQL
   Database port:
   Database lib: -L/usr/lib -lpq -lcrypt
   Database name: bacula
   Database user: bacula

   Job Output Email: root@localhost
   Traceback Email: root@localhost
   SMTP Host Address: localhost

   Director Port: 9101
   File daemon Port: 9102
   Storage daemon Port: 9103

   Director User: bacula
   Director Group: bacula
   Storage Daemon User: bacula
   Storage Daemon Group: disk
   File Daemon User: root
   File Daemon Group: bacula

   SQL binaries Directory: /usr/bin

   Large file support: yes
   Bacula conio support: no -lreadline -lncurses
   readline support: yes
   TCP Wrappers support: no
   TLS support: yes
   Encryption support: yes
   ZLIB support: yes
   enable-smartalloc: yes
   enable-lockmgr: no
   bat support: no
   enable-gnome: no
   enable-bwx-console: no
   enable-tray-monitor: no
   client-only: no
   build-dird: yes
   build-stored: yes
   Plugin support: yes
   AFS support: no
   ACL support: yes
   XATTR support: yes
   Python support: no
   Batch insert enabled: yes


__EOC__


Here is my FileSet:
FileSet {
Name = "MacFull"
Include {
Options {
HFSPlus Support = yes
Signatu

Re: [Bacula-users] Software compression: None

2011-01-20 Thread Martin Simmons
> On Thu, 20 Jan 2011 07:53:54 -0500 (EST), Steve Thompson said:
> 
> FileSet {
>Name = "bear_data15"
>Include {
>  Options {
>compression = GZIP
>signature = MD5
>sparse = yes
>noatime = yes
>  }
>  Options {
>exclude = yes
>wilddir = "*/.NetBin"
>wilddir = "*/.Trash"
>wilddir = "*/.nbs"
>wilddir = "*/.maildir/.spam"
>  }
>  File = /mnt/bear/data15
>}
> }

This will never compress -- the "default" Options clause needs to be the last
one, but you have it as the first one.
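For reference, a corrected version of the FileSet from above, with the compressing "default" Options resource moved to the end (a sketch based on Steve's posted config, not a tested drop-in):

```
FileSet {
  Name = "bear_data15"
  Include {
    # Exclusion clause first: directories matching these wilddirs are skipped.
    Options {
      exclude = yes
      wilddir = "*/.NetBin"
      wilddir = "*/.Trash"
      wilddir = "*/.nbs"
      wilddir = "*/.maildir/.spam"
    }
    # "Default" clause last: applied to every file that matched nothing above.
    Options {
      compression = GZIP
      signature = MD5
      sparse = yes
      noatime = yes
    }
    File = /mnt/bear/data15
  }
}
```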

__Martin

--
Protect Your Site and Customers from Malware Attacks
Learn about various malware tactics and how to avoid them. Understand 
malware threats, the impact they can have on your business, and how you 
can protect your company and customers by using code signing.
http://p.sf.net/sfu/oracle-sfdevnl
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Very low performance with compression and encryption !

2011-01-20 Thread John Drescher
> I am running bacula 5.0.3 on CentOS 5.6.
>
> When I run a simple Job I get rates between 20 and 40 MB/s over a
> Gigabit network, but when I run the same job with client encryption
> and compression everything becomes slow: below 5 MB/s and sometimes
> under 500 KB/s.
> Generally I test with a Full backup job of a whole system and then do an
> Incremental. Both are very slow.
> What should I check? Maybe it's my FileSet?
> I used both Mac OS X and CentOS clients, with equivalent slowness.
> Has anybody else had this problem?
> Maybe it comes from the zlib or openssl that I use?
> I am practically sure it's coming from the File Daemon.
> I use compression level 3/10.
>

This is normal. If you want fast compression, do not use software
compression; use a tape drive with HW compression, like LTO drives.

John



Re: [Bacula-users] SD Losing Track of Pool

2011-01-20 Thread Peter Zenge
> From: Martin Simmons [mailto:mar...@lispworks.com]
> Sent: Thursday, January 20, 2011 4:28 AM
> To: bacula-users@lists.sourceforge.net
> Subject: Re: [Bacula-users] SD Losing Track of Pool
> 
> > On Tue, 18 Jan 2011 08:48:56 -0700, Peter Zenge said:
> >
> > A couple days ago somebody made a comment that using pool overrides
> in a
> > schedule was deprecated.  I've been using them for years, but I've
> been
> > seeing a strange problem recently that I'm thinking might be related.
> >
> > I'm running 5.0.2 on Debian, separate Dir/Mysql and SD systems, using
> files
> > on an array.  I'm backing up several TB a week, but over a slow
> 25Mbps link,
> > so some of my full jobs run for a very long time.  Concurrency is
> key.  I
> > normally run 4 jobs at a time on my SD, and I spool (yes, probably
> > unnecessary, but because the data is coming in so slowly, I feel
> better
> > about writing it to volumes in big chunks.)
> >
> > Right now I have one job actively running, with 4 more waiting on the
> SD.
> > As I mentioned before, usually 4 are running concurrently, but I
> frequently
> > see less than 4 but have never really dug into it.  In the output
> below,
> > note that the SD is running 4 (actually 5!) jobs, but only one is
> actually
> > writing to the spool.  Two things jump out at me here: First, of the
> 5
> > running jobs, two are correctly noted as being for LF-Full, and 3 for
> LF-Inc
> > (pool for Full backups and pool for Incremental backups
> respectively).
> > However, all 5 show the same volume (LF-F-0239, which is only in the
> LF-Full
> > pool, and is currently being written to by the correctly-running
> job).
> > Second, in the Device Status section at the bottom, the pool of LF-F-
> 0239 is
> > listed as "*unknown*"; similarly, under "Jobs waiting to reserve a
> drive",
> > each job wants the correct pool, but the current pool is listed as
> "".
> 
> The reporting of pools in the SD might be a little wrong, because it
> doesn't
> really have that information, but I think the fundamental problem is
> that you
> only have one SD device.  That is limiting concurrency because an SD
> device
> can only mount one volume at a time (even for file devices).
> 
> __Martin
> 

Admittedly I confused the issue by posting an example with two Pools involved.
Even in that example, though, there were jobs using the same pool as the mounted
volume, and they wouldn't run until the 2 current jobs were done (which
presumably allowed the SD to re-mount the same volume, set the currently mounted
pool correctly, and then 4 jobs were able to write to that volume concurrently,
as designed).

I saw this issue two other times that day; each time the SD changed the mounted 
pool from "LF-Inc" to "*unknown*" and that brought concurrency to a screeching 
halt.

Certainly I could bypass this issue by having a dedicated volume and device for 
each backup client, but I have over 50 clients right now and it seems like that 
should be unnecessary.  Is that what other people who write to disk volumes do?



Re: [Bacula-users] Software compression: None

2011-01-20 Thread Steve Thompson
On Thu, 20 Jan 2011, Martin Simmons wrote:

> This will never compress -- the "default" Options clause needs to be the last
> one, but you have it as the first one.

Yes, of course you are correct; thank you. And I've even read that in the 
documentation. And moving the default Options clause to the end of the 
Include does result in compression always being used. So all is well now.

I guess I had better not wonder why I've always had it this way, and _was_
getting compression 50% of the time. Moving on...

Steve



[Bacula-users] Fwd: Very low performance with compression and encryption !

2011-01-20 Thread John Drescher
-- Forwarded message --
From: Sean Clark 
Date: Thu, Jan 20, 2011 at 10:41 AM
Subject: Re: [Bacula-users] Very low performance with compression and
encryption !
To: John Drescher 


On 01/20/2011 09:00 AM, John Drescher wrote:
> This is normal. If you want fast compression do not use software
> compression and use a tape drive with HW compression like LTO drives.
>
> John
Not really an option for file/disk devices though.

I've been tempted to experiment with BTRFS using LZO or standard zlib
compression for storing the volumes and see how the performance compares
to having bacula-fd do the compression before sending - I have a
suspicion the former might be better.

For the original question though - not only is this normal, this is
normal for programs other than bacula.  I see the same thing happen when
using compression with scp/ssh (and even "tar -cf - | gzip | nc" ) as
well, for example.  The latency introduced by pausing to compress the
data into each packet unavoidably slows things down, and compression is
really only useful over very slow links (or in bacula's case, only if
you're more worried about disk space than backup speed.)

LZO is a faster, apparently much less cpu-intensive algorithm (though it
does not compress as well as zlib) - it might be that if bacula-fd had
an LZO compression option better throughput might be possible while
still getting some space savings.  Even so, looking into some method of
storage-side compression is likely to give you better performance in my
experience.
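The speed/ratio trade-off described above is easy to demonstrate with zlib alone; here is a minimal Python sketch using synthetic log-like data, with level 1 standing in for a "fast" codec like LZO and level 9 for a "slow" one (absolute timings will vary by machine):

```python
import time
import zlib

# Synthetic, moderately compressible data: repeated log-like lines.
data = b"".join(("line %d: some log-like payload\n" % i).encode()
                for i in range(100_000))

for level in (1, 6, 9):
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    ratio = len(data) / len(compressed)
    print("level %d: %5.2fx smaller in %.3fs" % (level, ratio, elapsed))
```

Higher levels cost disproportionately more CPU for a modest ratio gain, which is why a fast algorithm (or storage-side compression) often wins on throughput.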



-- 
John M. Drescher



Re: [Bacula-users] Very low performance with compression and encryption !

2011-01-20 Thread John Drescher
>> This is normal. If you want fast compression do not use software
>> compression and use a tape drive with HW compression like LTO drives.
>>
>> John
> Not really an option for file/disk devices though.
>
> I've been tempted to experiment with BTRFS using LZO or standard zlib
> compression for storing the volumes and see how the performance compares
> to having bacula-fd do the compression before sending - I have a
> suspicion the former might be better..
>

Doing the compression at the filesystem level is an idea I have wanted
to try for several years. Hopefully one of the filesystems that
support this becomes stable soon.

John



Re: [Bacula-users] Very low performance with compression and encryption !

2011-01-20 Thread Sean Clark
On 01/20/2011 10:01 AM, John Drescher wrote:
>> I've been tempted to experiment with BTRFS using LZO or standard zlib
>> compression for storing the volumes and see how the performance compares
>> to having bacula-fd do the compression before sending - I have a
>> suspicion the former might be better..
>>
> Doing the compression at the filesystem level is an idea I have wanted
> to try for several years. Hopefully one of the filesystems that
> support this becomes stable soon.
>
> John
(Oops, thanks - my reply was SUPPOSED to go to the list, not just to you
personally...)

To follow up, I think I WILL try out BTRFS with compression (with
client-side compression switched off) for some experimental backups and
see how it does.  Due to the way our backup system is set up
(continuously growing, with volumes stored on external drives supplied
by the offices who want to be on the backup system), the speed that the
backups can be done is becoming an issue, but we don't have enough space
to shut off compression and still have backups go back far enough. 

I have been (bravely|foolhardily) using BTRFS as my primary filesystem
on my netbook, and on several other of my personal drives with no
problems so far, and I'm confident it's at least stable enough to do
serious experimentation with.  Once I've got it running I'll report back
on how it works.



Re: [Bacula-users] Very low performance with compression and encryption !

2011-01-20 Thread Paul Mather
On Jan 20, 2011, at 11:01 AM, John Drescher wrote:

>>> This is normal. If you want fast compression do not use software
>>> compression and use a tape drive with HW compression like LTO drives.
>>> 
>>> John
>> Not really an option for file/disk devices though.
>> 
>> I've been tempted to experiment with BTRFS using LZO or standard zlib
>> compression for storing the volumes and see how the performance compares
>> to having bacula-fd do the compression before sending - I have a
>> suspicion the former might be better..
>> 
> 
> Doing the compression at the filesystem level is an idea I have wanted
> to try for several years. Hopefully one of the filesystems that
> support this becomes stable soon.

I've been using ZFS with a compression-enabled fileset for a while now under 
FreeBSD.  It is transparent and reliable.  Looking just now, I'm not getting 
great compression ratios for my backup data: 1.09x.  I am using the 
speed-oriented compression algorithm on this fileset, though, because the 
hardware is relatively puny.  (It is a Bacula test bed.)  Probably I'd get 
better compression if I enabled one of the GZIP levels.
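For anyone wanting to reproduce that setup, the ZFS side is only a couple of commands (dataset name is hypothetical; lzjb was the speed-oriented algorithm of that era, and gzip-N trades CPU for ratio):

```
# Create a dataset for Bacula file volumes and enable compression on it.
zfs create storage/bacula-volumes
zfs set compression=lzjb storage/bacula-volumes    # or compression=gzip-6
# Later, see how well the volumes compress:
zfs get compressratio storage/bacula-volumes
```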

Cheers,

Paul.





Re: [Bacula-users] Very low performance with compression and encryption !

2011-01-20 Thread Silver Salonen
On Thursday 20 January 2011 19:02:33 Paul Mather wrote:
> On Jan 20, 2011, at 11:01 AM, John Drescher wrote:
> 
> >>> This is normal. If you want fast compression do not use software
> >>> compression and use a tape drive with HW compression like LTO drives.
> >>> 
> >>> John
> >> Not really an option for file/disk devices though.
> >> 
> >> I've been tempted to experiment with BTRFS using LZO or standard zlib
> >> compression for storing the volumes and see how the performance compares
> >> to having bacula-fd do the compression before sending - I have a
> >> suspicion the former might be better..
> >> 
> > 
> > Doing the compression at the filesystem level is an idea I have wanted
> > to try for several years. Hopefully one of the filesystems that
> > support this becomes stable soon.
> 
> I've been using ZFS with a compression-enabled fileset for a while now under 
> FreeBSD.  It is transparent and reliable.  Looking just now, I'm not getting 
> great compression ratios for my backup data: 1.09x.  I am using the 
> speed-oriented compression algorithm on this fileset, though, because the 
> hardware is relatively puny.  (It is a Bacula test bed.)  Probably I'd get 
> better compression if I enabled one of the GZIP levels.

Isn't the low compression ratio because of the Bacula volume format, which
"messes up" the data from the filesystem's point of view? It's the same thing
that makes it a problem to implement (or use FS-based) deduplication in Bacula.

-- 
Silver



Re: [Bacula-users] Very low performance with compression and encryption !

2011-01-20 Thread Hugo Letemplier
2011/1/20 Paul Mather :
> On Jan 20, 2011, at 11:01 AM, John Drescher wrote:
>
 This is normal. If you want fast compression do not use software
 compression and use a tape drive with HW compression like LTO drives.

 John
>>> Not really an option for file/disk devices though.
>>>
>>> I've been tempted to experiment with BTRFS using LZO or standard zlib
>>> compression for storing the volumes and see how the performance compares
>>> to having bacula-fd do the compression before sending - I have a
>>> suspicion the former might be better..
>>>
>>
>> Doing the compression at the filesystem level is an idea I have wanted
>> to try for several years. Hopefully one of the filesystems that
>> support this becomes stable soon.
>
> I've been using ZFS with a compression-enabled fileset for a while now under 
> FreeBSD.  It is transparent and reliable.  Looking just now, I'm not getting 
> great compression ratios for my backup data: 1.09x.  I am using the 
> speed-oriented compression algorithm on this fileset, though, because the 
> hardware is relatively puny.  (It is a Bacula test bed.)  Probably I'd get 
> better compression if I enabled one of the GZIP levels.
>
> Cheers,
>
> Paul.

I will try some things with scheduling over the next few days.
I will come back next week with the results.
Cheers,
Hugo



Re: [Bacula-users] Software compression: None

2011-01-20 Thread Dan Langille

On Thu, January 20, 2011 10:35 am, Steve Thompson wrote:
> On Thu, 20 Jan 2011, Martin Simmons wrote:
>
>> This will never compress -- the "default" Options clause needs to be the
>> last one, but you have it as the first one.
>
> Yes, of course you are correct; thank you. And I've even read that in the
> documentation. And moving the default Options clause to the end of the
> Include does result in compression always being used. So all is well now.
>
> I guess I had better not wonder why I've always had it this way, and _was_
> getting compression 50% of the time. Moving on...

I think you'll find it depends on what was backed up.  Reading this page:

http://www.bacula.org/5.0.x-manuals/en/main/main/Configuring_Director.html#SECTION00187

"However, one additional point is that in the case that no match was
found, Bacula will use the options found in the last Options resource. As
a consequence, if you want a particular set of "default" options, you
should put them in an Options resource after any other Options. "

Thus, sometimes your backup matched a previous Options clause, and
sometimes it did not.
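In other words, with the compressing clause first, only files matching it were compressed and everything else fell through to the last clause. The shape the manual recommends is the reverse -- specific matches first, the "default" last (patterns here are purely illustrative):

```
Include {
  # Matched files use this clause: e.g. skip recompressing compressed data.
  Options {
    wildfile = "*.gz"
    wildfile = "*.zip"
    signature = MD5
  }
  # "Default" clause, deliberately last: everything unmatched gets compressed.
  Options {
    compression = GZIP
    signature = MD5
  }
  File = /mnt/bear/data15
}
```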

-- 
Dan Langille -- http://langille.org/




Re: [Bacula-users] Very low performance with compression and encryption !

2011-01-20 Thread Dan Langille

On Thu, January 20, 2011 12:28 pm, Silver Salonen wrote:
> On Thursday 20 January 2011 19:02:33 Paul Mather wrote:
>> On Jan 20, 2011, at 11:01 AM, John Drescher wrote:
>>
>> >>> This is normal. If you want fast compression do not use software
>> >>> compression and use a tape drive with HW compression like LTO
>> drives.
>> >>>
>> >>> John
>> >> Not really an option for file/disk devices though.
>> >>
>> >> I've been tempted to experiment with BTRFS using LZO or standard zlib
>> >> compression for storing the volumes and see how the performance
>> compares
>> >> to having bacula-fd do the compression before sending - I have a
>> >> suspicion the former might be better..
>> >>
>> >
>> > Doing the compression at the filesystem level is an idea I have wanted
>> > to try for several years. Hopefully one of the filesystems that
>> > support this becomes stable soon.
>>
>> I've been using ZFS with a compression-enabled fileset for a while now
>> under FreeBSD.  It is transparent and reliable.  Looking just now, I'm
>> not getting great compression ratios for my backup data: 1.09x.  I am
>> using the speed-oriented compression algorithm on this fileset, though,
>> because the hardware is relatively puny.  (It is a Bacula test bed.)
>> Probably I'd get better compression if I enabled one of the GZIP levels.
>
> Isn't the low compression ratio because of bacula volume format that
> "messes up" data in FS point of view? The same thing that is a problem in
> implementing (or using an FS-based) deduplication in Bacula.

I also use ZFS on FreeBSD.  Perhaps the above is a typo.  I get nearly 2.0
compression ratio.

$ zfs get compressratio
NAME  PROPERTY   VALUE  SOURCE
storage   compressratio  1.89x  -
storage/compressedcompressratio  1.90x  -
storage/compressed/bacula compressratio  1.90x  -
storage/compressed/bacula@2010.10.19  compressratio  1.91x  -
storage/compressed/bacula@2010.10.20  compressratio  1.91x  -
storage/compressed/bacula@2010.10.20a compressratio  1.91x  -
storage/compressed/bacula@2010.10.20b compressratio  1.91x  -
storage/compressed/bacula@pre.pool.merge  compressratio  1.94x  -
storage/compressed/home   compressratio  1.00x  -
storage/pgsql compressratio  1.00x  -

-- 
Dan Langille -- http://langille.org/




Re: [Bacula-users] SD Losing Track of Pool

2011-01-20 Thread Martin Simmons
> On Thu, 20 Jan 2011 08:18:35 -0700, Peter Zenge said:
> 
> Admittedly I confused the issue by posting an example with two Pools
> involved.  Even in that example though, there were jobs using the same pool
> as the mounted volume, and they wouldn't run until the 2 current jobs were
> done (which presumably allowed the SD to re-mount the same volume, set the
> current mounted pool correctly, and then 4 jobs were able to write to that
> volume concurrently, as designed).
> 
> I saw this issue two other times that day; each time the SD changed the
> mounted pool from "LF-Inc" to "*unknown*" and that brought concurrency to a
> screeching halt.

Sorry, I see what you mean now -- 18040 should be running.  Did it run
eventually, without intervention?

I can't see why the pool name has been set to unknown.

__Martin



Re: [Bacula-users] SD Losing Track of Pool

2011-01-20 Thread Steve Ellis
On 1/20/2011 7:18 AM, Peter Zenge wrote:
>>
>>> Second, in the Device Status section at the bottom, the pool of LF-F-
>> 0239 is
>>> listed as "*unknown*"; similarly, under "Jobs waiting to reserve a
>> drive",
>>> each job wants the correct pool, but the current pool is listed as
>> "".
>>
> Admittedly I confused the issue by posting an example with two Pools 
> involved.  Even in that example though, there were jobs using the same pool 
> as the mounted volume, and they wouldn't run until the 2 current jobs were 
> done (which presumably allowed the SD to re-mount the same volume, set the 
> current mounted pool correctly, and then 4 jobs were able to write to that 
> volume concurrently, as designed).
>
> I saw this issue two other times that day; each time the SD changed the 
> mounted pool from "LF-Inc" to "*unknown*" and that brought concurrency to a 
> screeching halt.
>
> Certainly I could bypass this issue by having a dedicated volume and device 
> for each backup client, but I have over 50 clients right now and it seems 
> like that should be unnecessary.  Is that what other people who write to disk 
> volumes do?
I've been seeing this issue myself--it only seems to show up for me if a 
volume change happens during a running backup.  Once that happens, 
parallelism using that device is lost.  For me this doesn't happen too 
often, as I don't have that many parallel jobs, and most of my backups 
are to LTO3, so volume changes don't happen all that often either.  
However, it is annoying.

I thought I had seen something that suggested to me that this issue 
might be fixed in 5.0.3, I've recently switched to 5.0.3, but haven't 
seen any pro or con results yet.

On a somewhat related note, it seems that during despooling, all other
spooling jobs stop spooling--this might be intentional, I suppose, but I
think my disk subsystem would be fast enough to keep up with one despool
to LTO3 while other jobs continue to spool--I could certainly understand
if no other job using the same device were allowed to start despooling
during a despool, but that isn't what I observe.

If my observations are correct, it would be nice if this was a 
configurable choice (with faster tape drives, few disk subsystems would 
be able to handle a despool and spooling at the same time)--some of my 
jobs stall long enough when this happens to allow some of my desktop 
backup clients to go to standby--which means those jobs will fail (my 
backup strategy uses Wake-on-LAN to wake them up in the first place).  I 
certainly could spread my jobs out more in time, if necessary, to 
prevent this, but I like for the backups to happen at night when no one 
is likely to be using the systems for anything else.  I guess another 
option would be to launch a keepalive WoL script when a job starts, and 
arrange that the keepalive program be killed when the job completes.
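That keepalive idea can be hooked into the Job's run scripts; a rough, untested sketch assuming a wakeonlan binary on the Director (script name, paths, and MAC address are all hypothetical):

```
# /usr/local/bin/wol-keepalive -- resend a WoL packet every minute while the
# job runs. Wire it up in the Job resource, e.g.:
#   Run Before Job = "/usr/local/bin/wol-keepalive start 00:11:22:33:44:55"
#   Run After Job  = "/usr/local/bin/wol-keepalive stop"
case "$1" in
  start)
    ( while true; do wakeonlan "$2"; sleep 60; done ) &
    echo $! > /var/run/wol-keepalive.pid
    ;;
  stop)
    kill "$(cat /var/run/wol-keepalive.pid)" 2>/dev/null
    rm -f /var/run/wol-keepalive.pid
    ;;
esac
```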

-se



Re: [Bacula-users] Software compression: None

2011-01-20 Thread Ryan Novosielski
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 01/20/2011 12:35 PM, Dan Langille wrote:
> 
> On Thu, January 20, 2011 10:35 am, Steve Thompson wrote:
>> On Thu, 20 Jan 2011, Martin Simmons wrote:
>>
>>> This will never compress -- the "default" Options clause needs to be the
>>> last one, but you have it as the first one.
>>
>> Yes, of course you are correct; thank you. And I've even read that in the
>> documentation. And moving the default Options clause to the end of the
>> Include does result in compression always being used. So all is well now.
>>
>> I guess I had better not wonder why I've always had it this way, and _was_
>> getting compression 50% of the time. Moving on...
> 
> I think you'll find it depends on what was backed up.  Reading this page:
> 
> http://www.bacula.org/5.0.x-manuals/en/main/main/Configuring_Director.html#SECTION00187
> 
> "However, one additional point is that in the case that no match was
> found, Bacula will use the options found in the last Options resource. As
> a consequence, if you want a particular set of "default" options, you
> should put them in an Options resource after any other Options. "
> 
> Thus, sometimes your backup matched a previous Options clause, and
> sometimes it did not.

E-mails like this remind me why having an open source mailing list for
software support beats a vendor any day of the week. To get an answer
like that, I'd have been through several escalations at any company I
currently have to deal with. Bravo!

- -- 
-  _  _ _  _ ___  _  _  _
|Y#| |  | |\/| |  \ |\ |  | |Ryan Novosielski - Sr. Systems Programmer
|$&| |__| |  | |__/ | \| _| |novos...@umdnj.edu - 973/972.0922 (2-0922)
\__/ Univ. of Med. and Dent.|IST/CST-Academic Svcs. - ADMC 450, Newark
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.10 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAk04c6oACgkQmb+gadEcsb7O6wCghvpY06k3GPbUYnaLsUEf/AO9
/r8AmgOpMLdvjbYPBDNvmcQy0lcXGG9k
=6TT/
-END PGP SIGNATURE-


Re: [Bacula-users] SD Losing Track of Pool

2011-01-20 Thread Peter Zenge
> -Original Message-
> From: Steve Ellis [mailto:el...@brouhaha.com]
> Sent: Thursday, January 20, 2011 10:39 AM
> To: bacula-users@lists.sourceforge.net
> Subject: Re: [Bacula-users] SD Losing Track of Pool
> 
> On 1/20/2011 7:18 AM, Peter Zenge wrote:
> >>
> >>> Second, in the Device Status section at the bottom, the pool of LF-
> F-
> >> 0239 is
> >>> listed as "*unknown*"; similarly, under "Jobs waiting to reserve a
> >> drive",
> >>> each job wants the correct pool, but the current pool is listed as
> >> "".
> >>
> > Admittedly I confused the issue by posting an example with two Pools
> involved.  Even in that example though, there were jobs using the same
> pool as the mounted volume, and they wouldn't run until the 2 current
> jobs were done (which presumably allowed the SD to re-mount the same
> volume, set the current mounted pool correctly, and then 4 jobs were
> able to write to that volume concurrently, as designed).
> >
> > I saw this issue two other times that day; each time the SD changed
> the mounted pool from "LF-Inc" to "*unknown*" and that brought
> concurrency to a screeching halt.
> >
> > Certainly I could bypass this issue by having a dedicated volume and
> device for each backup client, but I have over 50 clients right now and
> it seems like that should be unnecessary.  Is that what other people
> who write to disk volumes do?
> I've been seeing this issue myself--it only seems to show up for me if
> a
> volume change happens during a running backup.  Once that happens,
> parallelism using that device is lost.  For me this doesn't happen too
> often, as I don't have that many parallel jobs, and most of my backups
> are to LTO3, so volume changes don't happen all that often either.
> However, it is annoying.
> 
> I thought I had seen something suggesting that this issue might be
> fixed in 5.0.3; I've recently switched to 5.0.3 but haven't seen
> results either way yet.
> 
> On a somewhat related note, it seemed to me that during despooling, all
> other spooling jobs stop spooling--this might be intentional, I suppose,
> but I think my disk subsystem would be fast enough to keep up with one
> despool to LTO3 while other jobs continue to spool--I could certainly
> understand if no other job using the same device were allowed to start
> despooling during a despool, but that isn't what I observe.
> 
> If my observations are correct, it would be nice if this were a
> configurable choice (with faster tape drives, few disk subsystems would
> be able to handle a despool and spooling at the same time)--some of my
> jobs stall long enough when this happens to allow some of my desktop
> backup clients to go to standby, which means those jobs will fail (my
> backup strategy uses Wake-on-LAN to wake them up in the first place).  I
> could certainly spread my jobs out more in time, if necessary, to
> prevent this, but I like the backups to happen at night when no one is
> likely to be using the systems for anything else.  I guess another
> option would be to launch a keepalive WoL script when a job starts and
> arrange for the keepalive program to be killed when the job completes.
> 
> -se
> 


Agree about the volume change.  In fact I'm running a backup right now that 
should force a volume change in a couple of hours, and I'm watching the SD 
status to see if the mounted pool becomes unknown around that time.  I have 
certainly noticed that long-running jobs seem to cause this issue, and it 
occurred to me that long-running jobs also have a higher chance of spanning 
volumes.

If that's what I see, then I will upgrade to 5.0.3.  I can do that pretty 
quickly, and will report back...
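On the standby problem Steve mentions above, his keepalive-WoL idea can be sketched in a few lines. This is only a sketch, not something Bacula ships: the MAC address, broadcast address, and interval below are placeholders to adjust for your network.

```python
# Sketch of the keepalive idea from the thread: rebroadcast a Wake-on-LAN
# "magic packet" every few minutes so a desktop client doesn't go to
# standby while its backup runs.  MAC, broadcast address, and interval
# are placeholders -- adjust for your site.
import socket
import time

def magic_packet(mac: str) -> bytes:
    """Build a WoL magic packet: 6 x 0xFF, then the MAC repeated 16 times."""
    mac_bytes = bytes(int(octet, 16) for octet in mac.split(":"))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-octet MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def keepalive(mac: str, interval: float = 300.0) -> None:
    """Broadcast the packet every `interval` seconds until the process is killed."""
    pkt = magic_packet(mac)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        while True:
            sock.sendto(pkt, ("255.255.255.255", 9))  # UDP port 9 (discard)
            time.sleep(interval)

# Example: keepalive("00:11:22:33:44:55")  # runs until killed
```

Start it when the job starts (a Job RunScript can do this) and kill the process when the job completes, as Steve describes.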



--
Protect Your Site and Customers from Malware Attacks
Learn about various malware tactics and how to avoid them. Understand 
malware threats, the impact they can have on your business, and how you 
can protect your company and customers by using code signing.
http://p.sf.net/sfu/oracle-sfdevnl
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] SD Losing Track of Pool

2011-01-20 Thread Peter Zenge
> -Original Message-
> From: Martin Simmons [mailto:mar...@lispworks.com]
> Sent: Thursday, January 20, 2011 10:47 AM
> To: bacula-users@lists.sourceforge.net
> Subject: Re: [Bacula-users] SD Losing Track of Pool
> 
> > On Thu, 20 Jan 2011 08:18:35 -0700, Peter Zenge said:
> >
> > Admittedly I confused the issue by posting an example with two Pools
> > involved.  Even in that example though, there were jobs using the
> same pool
> > as the mounted volume, and they wouldn't run until the 2 current jobs
> were
> > done (which presumably allowed the SD to re-mount the same volume,
> set the
> > current mounted pool correctly, and then 4 jobs were able to write to
> that
> > volume concurrently, as designed.
> >
> > I saw this issue two other times that day; each time the SD changed
> the
> > mounted pool from "LF-Inc" to "*unknown*" and that brought
> concurrency to a
> > screeching halt.
> 
> Sorry, I see what you mean now -- 18040 should be running.  Did it run
> eventually, without intervention?
> 
> I can't see why the pool name has been set to unknown.
> 
> __Martin
> 


It did run eventually and without intervention.  And while running the SD did 
show the correct pool.  My problem is that without concurrency I don't get 
efficient use of my available bandwidth, and my backup window (already measured 
in days) is longer than it otherwise needs to be even though the same amount of 
data is backed up.




Re: [Bacula-users] comparison network traffic amanada - bacula @restore

2011-01-20 Thread Chris Hoogendyk

On 1/19/11 8:31 PM, Mike Ruskai wrote:
> On 1/19/2011 7:47 PM, Juergen Zahrer wrote:
>> Hi list,
>>
>> amanda passes the whole archive to the client even if only a few files
>> are restored. that takes a very very long time for
>> one small file in a big dump over 100 Mb.
>> what about bacula? does bacula "unpack and extract" the requested file
>> on a local _fast_ disk and transfer that file over
>> network?
>>
>> any explanation would be appreciated:)
> I never used Amanda, on account of the fact that when I looked into it a
> while back, it was incapable of either spanning backups across multiple
> tapes, or storing more than one backup job on any given tape (i.e.
> inflexible to the point of being useless).

I try to stay quiet on this list with regard to Amanda. After all, this is the 
Bacula list. However, 
sometimes it is appropriate to speak up.

Amanda uses native tools such as gnu tar (the typical choice on Linux)
and/or ufsdump or zfs send/receive (on Solaris). How a file is extracted
depends on what the individual tool is capable of.

However, you *can* do the extraction on the Amanda backup server and
transfer only the recovered file(s) across the network to the client.

The comment, "when I looked into it a while back", would have to date
from several years ago, since tape spanning was implemented at least by
version 2.5 (2006). There has been an enormous amount of development
activity on Amanda in the intervening years. Anyone who hasn't looked at
it since before 2.5 should at least be aware that specific comments are
probably outdated. The latest stable release is 3.2.1, and there have
been many major and minor releases in between (see "select your version"
here: http://www.zmanda.com/download-amanda.php).

The question of appending to tapes is a philosophical one, addressed here:
http://wiki.zmanda.com/index.php/FAQ:Why_does_Amanda_not_append_to_a_tape%3F
(It's fair to differ from that point of view.)

Many people have good reason to like Bacula, and there has also been a
huge amount of development activity on Bacula over that same span of
time. It's good for the community that we have both. I just think it's
important to have a balanced and informed perspective.


-- 
---

Chris Hoogendyk

-
O__   Systems Administrator
   c/ /'_ --- Biology & Geology Departments
  (*) \(*) -- 140 Morrill Science Center
~~ - University of Massachusetts, Amherst



---

Erdös 4





Re: [Bacula-users] Very low performance with compression and encryption !

2011-01-20 Thread Paul Mather
On Jan 20, 2011, at 12:44 PM, Dan Langille wrote:

> 
> On Thu, January 20, 2011 12:28 pm, Silver Salonen wrote:
>> On Thursday 20 January 2011 19:02:33 Paul Mather wrote:
>>> On Jan 20, 2011, at 11:01 AM, John Drescher wrote:
>>> 
>> This is normal. If you want fast compression do not use software
>> compression and use a tape drive with HW compression like LTO
>>> drives.
>> 
>> John
> Not really an option for file/disk devices though.
> 
> I've been tempted to experiment with BTRFS using LZO or standard zlib
> compression for storing the volumes and see how the performance
>>> compares
> to having bacula-fd do the compression before sending - I have a
> suspicion the former might be better..
> 
 
 Doing the compression at the filesystem level is an idea I have wanted
 to try for several years. Hopefully one of the filesystems that
 support this becomes stable soon.
>>> 
>>> I've been using ZFS with a compression-enabled fileset for a while now
>>> under FreeBSD.  It is transparent and reliable.  Looking just now, I'm
>>> not getting great compression ratios for my backup data: 1.09x.  I am
>>> using the speed-oriented compression algorithm on this fileset, though,
>>> because the hardware is relatively puny.  (It is a Bacula test bed.)
>>> Probably I'd get better compression if I enabled one of the GZIP levels.
>> 
>> Isn't the low compression ratio because of bacula volume format that
>> "messes up" data in FS point of view? The same thing that is a problem in
>> implementing (or using an FS-based) deduplication in Bacula.
> 
> I also use ZFS on FreeBSD.  Perhaps the above is a typo.  I get nearly 2.0
> compression ratio.
> 
> $ zfs get compressratio
> NAME  PROPERTY   VALUE  SOURCE
> storage   compressratio  1.89x  -
> storage/compressedcompressratio  1.90x  -
> storage/compressed/bacula compressratio  1.90x  -
> storage/compressed/bacula@2010.10.19  compressratio  1.91x  -
> storage/compressed/bacula@2010.10.20  compressratio  1.91x  -
> storage/compressed/bacula@2010.10.20a compressratio  1.91x  -
> storage/compressed/bacula@2010.10.20b compressratio  1.91x  -
> storage/compressed/bac...@pre.pool.merge  compressratio  1.94x  -
> storage/compressed/home   compressratio  1.00x  -
> storage/pgsql compressratio  1.00x  -


Nope, not a typo:

backup# zfs get compressratio
NAME  PROPERTY   VALUE  SOURCE
backups   compressratio  1.07x  -
backups/baculacompressratio  1.09x  -
backups/hosts compressratio  1.46x  -
backups/san   compressratio  1.06x  -
backups/san@filedrop  compressratio  1.06x  -


The backups/bacula fileset is where my Bacula volumes are stored.  As I 
surmised, I get better compression ratios under GZIP-9 compression:

backup# zfs get compression
NAME  PROPERTY VALUE SOURCE
backups   compression  off   default
backups/baculacompression  onlocal
backups/hosts compression  gzip-9local
backups/san   compression  onlocal
backups/san@filedrop  compression  - -

(Compression="on" equates to "lzjb," which is the most lightweight method, CPU 
resources wise, but not the best in terms of compression ratio achieved.)

I will probably switch the other filesets to GZIP compression, as ZFS 
performance has improved significantly under RELENG_8...

Cheers,

Paul.
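For anyone wanting to make the same switch: it is a one-line property change per fileset, and note that it only affects blocks written after the change, so existing volumes keep their old ratio. Fileset names here are from the session above:

```
backup# zfs set compression=gzip-9 backups/bacula
backup# zfs get compression backups/bacula
```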






[Bacula-users] Having problems with HP LTO-4 blocksize

2011-01-20 Thread Brian Debelius
Hi,

I have an HP 1760 SAS LTO-4 drive with an LSI 9212 HBA under Ubuntu 
10.04, Bacula 5.0.3.

The drive is set for variable blocks.  I cannot get any block size above 
64K to work; 64K works just fine.  Any ideas?

Btape says (using 256K block):

btape: btape.c:1148 Wrote 1 blocks of 262044 bytes.
btape: btape.c:608 Wrote 1 EOF to "Tape" (/dev/nst0)
btape: btape.c:1164 Wrote 1 blocks of 262044 bytes.
btape: btape.c:608 Wrote 1 EOF to "Tape" (/dev/nst0)
btape: btape.c:1206 Rewind OK.
Got EOF on tape.
btape: btape.c:1224 Read block 3815 failed! ERR=Success


bacula-sd.conf:

Device {
   Name = Tape
   Drive Index = 0
   Device Type = Tape
   Archive Device = /dev/nst0
   Automatic Mount = yes
   Removable Media = yes
   Random Access = no
   Media Type = Tape
   Autochanger = no
   Auto Select = yes
   Always Open = yes
   Maximum Block Size = 256K
   Spool Directory = /spool/bacula/
}


Tapeinfo -f /dev/nst0:

Product Type: Tape Drive
Vendor ID: 'HP  '
Product ID: 'Ultrium 4-SCSI  '
Revision: 'U52D'
Attached Changer API: No
SerialNumber: 'HU1028BAW1'
MinBlock: 1
MaxBlock: 16777215
SCSI ID: 0
SCSI LUN: 0
Ready: yes
BufferedMode: yes
Medium Type: Not Loaded
Density Code: 0x44
BlockSize: 0
DataCompEnabled: yes
DataCompCapable: yes
DataDeCompEnabled: yes
CompType: 0x1
DeCompType: 0x1
BOP: yes
Block Position: 0
Partition 0 Remaining Kbytes: 399690
Partition 0 Size in Kbytes: 399690
ActivePartition: 0
EarlyWarningSize: 0
NumPartitions: 0
MaxPartitions: 0

mt -f /dev/nst0 status:

SCSI 2 tape drive:
File number=0, block number=0, partition=0.
Tape block size 0 bytes. Density code 0x44 (LTO-3).
Soft error count since last status=0
General status bits on (4101):
  BOT ONLINE IM_REP_EN





[Bacula-users] How to get this autochanger work with bacula

2011-01-20 Thread harryl
John and Dan, I LOVE you guys!!

Dan, your instructions are killers.

Finally got my bacula working with the autoloader.

Now its time to learn how to manage tapes and jobs.

Any sites or information I can get from you guys?




Harry

+--
|This was sent by har...@zoomedia.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--





Re: [Bacula-users] Trouble compiling bacula 5.0.3 under Solaris 10

2011-01-20 Thread Kenneth Garges
Bingo! Thanks for your sharp eyes on this one. After this fix it configured and 
made (with only a few compiler warnings).

I'm celebrating by eating a chocolate chip cookie. I'd offer one to both of you 
if you were nearby. Instead I'll just encourage you to do something nice for 
yourself.

Thanks again.



On 20Jan 2011, at 2:53 AM, Martin Simmons wrote:

>> On Wed, 19 Jan 2011 17:16:47 -0800, Kenneth Garges said:
>> 
>> Thanks for your tips. Unfortunately still no joy. 
>> 
>> Switched to the latest Sun Studio compiler instead of gcc. 
>> 
>> I made my config script close to yours changing only a couple paths. 
>> #! /bin/sh
>> # Script from Gary Schmidt for compiling bacula under Solaris 10
>> # Entered Wednesday, January 19, 2011 Kenneth Garges 
>> # Was
>> #PATH=/opt/webstack/bin:/opt/webstack/mysql/bin:/bin:/usr/bin:/opt/SUNWspro/bin:/usr/ccs/bin:/sbin:/usr/sbin:/usr/local/bin:$HOME/src/bacula/depkgs-qt/qt-x11-opensource-src-4.3.4/bin
>> # Modified for us
>> PATH=/opt/mysql/mysql/bin:/bin:/usr/bin:/opt/SUNWexpo/bin:/usr/ccs/bin:/sbin:/usr/sbin:/usr/local/bin
>> export PATH
>> ./configure --build=sparc64-sun-solaris2.10 --host=sparc64-sun-solaris2.10 \
>> CC=cc CXX=CC \
>> CFLAGS="-g -O" \
>> LDFLAGS="-L/opt/mysql/lib/mysql -R/opt/mysql/lib/mysql -L/usr/sfw/lib 
>> -R/usr/sfw/lib" \ 
> 
> The problem is that LDFLAGS=... line has a space at the end, so the newline
> isn't escaped.  It therefore loses all of the remaining arguments and also
> causes the build, host and target warnings.
> 
> You shouldn't need the --build and --host arguments when this is fixed.
> 
> __Martin
> 
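To make the failure mode concrete: in sh, a backslash continues a line only when it is the very last character; with a space after it, the backslash escapes the space instead of the newline, and the command ends on that line. A quick way to see it (just sh driven from Python's subprocess, nothing Bacula-specific):

```python
# Demonstrate sh line continuation vs. a trailing space after the backslash.
import subprocess

def run_sh(script: str) -> str:
    """Feed a script to sh and return its stdout, stripped."""
    result = subprocess.run(["sh"], input=script, capture_output=True, text=True)
    return result.stdout.strip()

# Backslash is the last character on the line: the two lines form ONE command.
joined = run_sh("echo joined \\\ntrue\n")
print(joined)  # -> "joined true"

# Backslash followed by a space: the first line is a complete command,
# and "true" on the next line runs as a separate (silent) command.
split = run_sh("echo joined \\ \ntrue\n")
print(split)   # -> "joined"
```

That trailing space is why the remaining configure arguments were silently dropped.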




[Bacula-users] Autorename of tapes

2011-01-20 Thread Arunav Mandal

I have 30 tapes to rename in the order of LTO5-001 to LTO5-050. Is there any 
way to rename them automatically? The Barcode label is different so I can't use 
label barcodes.
 
 
Arunav.


Re: [Bacula-users] Autorename of tapes

2011-01-20 Thread Dan Langille

On Thu, January 20, 2011 4:12 pm, Arunav Mandal wrote:
>
> I have 30 tapes to rename in the order of LTO5-001 to LTO5-050. Is there
> any way to rename them automatically? The Barcode label is different so I
> can't use label barcodes.

The best way is to write a script.

This might have something for you:

  http://www.freebsddiary.org/digital-tl891.php

Hope that gets you started.


-- 
Dan Langille -- http://langille.org/
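In the spirit of that suggestion, here is one hedged sketch of such a script (not taken from the page linked above): it just prints one bconsole `label` command per tape so the list can be reviewed before piping it in. The slot-per-volume mapping and the pool and storage names are assumptions to adjust.

```python
# Hypothetical helper: emit one bconsole "label" command per tape.
# Assumptions to adjust: tape N sits in changer slot N, pool "Default",
# storage resource "LTO5".  Review the output, then pipe it to bconsole:
#     python3 gen_labels.py | bconsole
def label_commands(first, last, pool="Default", storage="LTO5"):
    """Yield label commands for volumes named LTO5-001 .. LTO5-NNN."""
    for slot in range(first, last + 1):
        yield (f"label volume=LTO5-{slot:03d} slot={slot} "
               f"storage={storage} pool={pool}")

if __name__ == "__main__":
    for cmd in label_commands(1, 30):
        print(cmd)
```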




Re: [Bacula-users] Autorename of tapes

2011-01-20 Thread Dan Langille

On Thu, January 20, 2011 4:12 pm, Arunav Mandal wrote:
>
> I have 30 tapes to rename in the order of LTO5-001 to LTO5-050. Is there
> any way to rename them automatically? The Barcode label is different so I
> can't use label barcodes.

My previous post contained the wrong URL.  Try this one instead:

  http://www.freebsddiary.org/tape-testing.php


-- 
Dan Langille -- http://langille.org/




[Bacula-users] help w/ tape errors

2011-01-20 Thread Jeremiah D. Jester
I periodically get tape error messages via email and from the bacula log files. 
 Most of them look something like this...

19-Jan 08:59 bacula02-sd JobId 913: Error: block.c:1016 Read error on fd=8 at 
file:blk 0:0 on device "Drive1" (/dev/nst0). ERR=Input/output error.
19-Jan 09:04 bacula02-sd JobId 913: Error: block.c:1016 Read error on fd=8 at 
file:blk 0:0 on device "Drive1" (/dev/nst0). ERR=Input/output error.
19-Jan 09:09 bacula02-sd JobId 913: Error: block.c:1016 Read error on fd=8 at 
file:blk 0:0 on device "Drive1" (/dev/nst0). ERR=Input/output error.

Note: that there is no volume name in the logged line, same as with email.

My question is... How do I find the volume/s that triggered these messages and 
second, how can I test to verify the tape is faulty or not?

Thanks,
JJ


Jeremiah Jester
Informatics Specialist
Microbiology - Katze Lab
206-732-6185

--
Special Offer-- Download ArcSight Logger for FREE (a $49 USD value)!
Finally, a world-class log management solution at an even better price-free!
Download using promo code Free_Logger_4_Dev2Dev. Offer expires 
February 28th, so secure your free ArcSight Logger TODAY! 
http://p.sf.net/sfu/arcsight-sfd2d
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] help w/ tape errors

2011-01-20 Thread John Drescher
2011/1/20 Jeremiah D. Jester :
> I periodically get tape error messages via email and from the bacula log
> files.  Most of them look something like this…
>
> 19-Jan 08:59 bacula02-sd JobId 913: Error: block.c:1016 Read error on fd=8
> at file:blk 0:0 on device "Drive1" (/dev/nst0). ERR=Input/output error.
>
> 19-Jan 09:04 bacula02-sd JobId 913: Error: block.c:1016 Read error on fd=8
> at file:blk 0:0 on device "Drive1" (/dev/nst0). ERR=Input/output error.
>
> 19-Jan 09:09 bacula02-sd JobId 913: Error: block.c:1016 Read error on fd=8
> at file:blk 0:0 on device "Drive1" (/dev/nst0). ERR=Input/output error.
>
>
>
> Note: that there is no volume name in the logged line, same as with email.
>
>
>
> My question is… How do I find the volume/s that triggered these messages and
> second, how can I test to verify the tape is faulty or not?
>

Have you inserted new tapes lately? You will get this error for every
never-written tape. The reason is that the tape drive prevents you from
reading past the point where it has written, and on a never-written tape
there is nothing to read. Since Bacula first reads a tape to identify
it, you will get this error each time you insert a never-written tape
and Bacula looks at it.

John



Re: [Bacula-users] help w/ tape errors

2011-01-20 Thread Jeremiah D. Jester
This is the 2nd time these tapes have been written to since being new. So, 
there is a possibility one or two of them may have slipped through the cracks 
and didn't get written to.  

How do I find out which tape/s were affected?

Jeremiah Jester
Informatics Specialist
Microbiology - Katze Lab
206-732-6185


-Original Message-
From: John Drescher [mailto:dresche...@gmail.com] 
Sent: Thursday, January 20, 2011 3:11 PM
To: Jeremiah D. Jester
Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] help w/ tape errors

2011/1/20 Jeremiah D. Jester :
> I periodically get tape error messages via email and from the bacula 
> log files.  Most of them look something like this...
>
> 19-Jan 08:59 bacula02-sd JobId 913: Error: block.c:1016 Read error on 
> fd=8 at file:blk 0:0 on device "Drive1" (/dev/nst0). ERR=Input/output error.
>
> 19-Jan 09:04 bacula02-sd JobId 913: Error: block.c:1016 Read error on 
> fd=8 at file:blk 0:0 on device "Drive1" (/dev/nst0). ERR=Input/output error.
>
> 19-Jan 09:09 bacula02-sd JobId 913: Error: block.c:1016 Read error on 
> fd=8 at file:blk 0:0 on device "Drive1" (/dev/nst0). ERR=Input/output error.
>
>
>
> Note: that there is no volume name in the logged line, same as with email.
>
>
>
> My question is... How do I find the volume/s that triggered these 
> messages and second, how can I test to verify the tape is faulty or not?
>

Have you inserted new tapes lately? You will get this error for every 
never-written tape. The reason is that the tape drive prevents you from 
reading past the point where it has written, and on a never-written tape 
there is nothing to read. Since Bacula first reads a tape to identify it, 
you will get this error each time you insert a never-written tape and 
Bacula looks at it.

John
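On the first question above (mapping those log lines back to volumes): the JobId is in the log line (913 here), and the catalog records which volumes each job used, so the console can answer it directly. The command is shown as on a 5.x bconsole; verify against your version:

```
*list jobmedia jobid=913
```

For the second question, btape's `test` command (run against the suspect tape, with the SD stopped) is one way to exercise a tape you suspect is faulty.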



Re: [Bacula-users] Autorename of tapes

2011-01-20 Thread Dan Langille
On 1/20/2011 4:28 PM, Dan Langille wrote:
>
> On Thu, January 20, 2011 4:12 pm, Arunav Mandal wrote:
>>
>> I have 30 tapes to rename in the order of LTO5-001 to LTO5-050. Is there
>> any way to rename them automatically? The Barcode label is different so I
>> can't use label barcodes.
>
> My previous post contained the wrong URL.  Try this one instead:
>
>http://www.freebsddiary.org/tape-testing.php

You contacted me offlist.

Look at the script under 'Testing tapes'.  Does that give you any
ideas?  Use it to write an EOF to each tape.  Then, run 'label barcodes'.

-- 
Dan Langille - http://langille.org/
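A sketch of that EOF-writing step as a command generator (easier to eyeball before running anything): the device paths and slot range are placeholders, and it assumes drive 0 of the changer.

```python
# Hypothetical helper for the suggestion above: for each changer slot,
# load the tape, rewind, write one filemark (EOF), and unload -- after
# which "label barcodes" can be run.  Paths and slots are placeholders;
# review the printed commands before piping them to a shell.
def prep_commands(slots, changer="/dev/sg1", drive="/dev/nst0"):
    """Yield mtx/mt commands that write an EOF to the tape in each slot."""
    for slot in slots:
        yield f"mtx -f {changer} load {slot} 0"
        yield f"mt -f {drive} rewind"
        yield f"mt -f {drive} weof"
        yield f"mtx -f {changer} unload {slot} 0"

if __name__ == "__main__":
    for cmd in prep_commands(range(1, 31)):
        print(cmd)
```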



Re: [Bacula-users] Choice of DB for catalog

2011-01-20 Thread Dan Langille
On 1/17/2011 4:27 PM, Devin Reade wrote:
> --On Monday, January 17, 2011 10:50:39 AM -0500 Dan Langille
>   wrote:
>
> (In another thread)
>
>> Even though you are doing migrate, this might help, because Migrate and
>> Copy are so similar.
>>
>> http://www.freebsddiary.org/bacula-disk-to-tape.php
>
> I started to scan that document, and I saw this snippet:
>
>  At present, the Catalog can be SQLite (not recommended),
>  MySQL (also, not recommended, but if you have to use it,
>  it's better than SQLite), [...]
>
> Eh?  Have I missed some important point about using MySQL for Bacula's
> catalog, or is this just the author's prejudices speaking?

As the author of the above, and of the PostgreSQL module for Bacula, I 
admit that I am completely and utterly biased.  I have used many 
databases over the past 25 years and, by far, I think PostgreSQL is the 
best.  ;)

 > Sure,
> 
> describes things about which one should be aware, but I don't recall
> having seen anywhere a recommendation of Postgres over MySQL (or vice
> versa) or a statement about significant benefits of one vs the other.

It is clearly a religious issue.   Regardless of the database of choice, 
use it and enjoy it. :)


-- 
Dan Langille - http://langille.org/



Re: [Bacula-users] Choice of DB for catalog

2011-01-20 Thread Devin Reade
--On Thursday, January 20, 2011 09:27:37 PM -0500 Dan Langille
 wrote:

> On 1/17/2011 4:27 PM, Devin Reade wrote:
>> 
>> Eh?  Have I missed some important point about using MySQL for Bacula's
>> catalog, or is this just the author's prejudices speaking?
> 
> As the author of the above, and of the PostgreSQL module for Bacula, I
> admit that I am completely and utterly biased.

I should say that when I wrote that, there was no offense intended;
it was strictly meant as a technical question as the statement was 
made without justification.  I realized after posting it that the
emotional index of "prejudice" is probably a bit high.  "Bias" is a
much nicer word, but didn't come to mind at the time :)

> It is clearly a religious issue.

Ok; I thought it might be, although I read others' responses with
interest as well.

I would, however, recommend that if there is indeed an advantage
to Bacula of using one DB over another, that it might be worth 
mentioning in the installation manual.  It sounds like they're close
enough that the deciding factors would be legacy infrastructure and
operators' experience, however those two being equal it would be
good to know (especially for new users) any other relative advantages.
Maybe this just means linking to the page Rory mentioned, or the
comment (assuming that it's valid) that primary development is
on Postgres.  I'm not going to submit a documentation patch, though,
because having used only one DB type for bacula I don't consider 
myself qualified to write a comparison.

Of course, if there is no significant advantage, you can just ignore
that whole last paragraph :)

I do appreciate all the work that you (and others) have put into bacula,
including the informative and timely answers on this list.

Devin




Re: [Bacula-users] Choice of DB for catalog

2011-01-20 Thread Ryan Novosielski
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 01/20/2011 11:25 PM, Devin Reade wrote:
> --On Thursday, January 20, 2011 09:27:37 PM -0500 Dan Langille
>  wrote:
> 
>> On 1/17/2011 4:27 PM, Devin Reade wrote:
>>>
>>> Eh?  Have I missed some important point about using MySQL for Bacula's
>>> catalog, or is this just the author's prejudices speaking?
>>
>> As the author of the above, and of the PostgreSQL module for Bacula, I
>> admit that I am completely and utterly biased.
> 
> I should say that when I wrote that, there was no offense intended;
> it was strictly meant as a technical question as the statement was 
> made without justification.  I realized after posting it that the
> emotional index of "prejudice" is probably a bit high.  "Bias" is a
> much nicer word, but didn't come to mind at the time :)
> 
>> It is clearly a religious issue.
> 
> Ok; I thought it might be, although I read others' responses with
> interest as well.
> 
> I would, however, recommend that if there is indeed an advantage
> to Bacula of using one DB over another, that it might be worth 
> mentioning in the installation manual.  It sounds like they're close
> enough that the deciding factors would be legacy infrastructure and
> operators' experience, however those two being equal it would be
> good to know (especially for new users) any other relative advantages.
> Maybe this just means linking to the page Rory mentioned, or the
> comment (assuming that it's valid) that primary development is
> on Postgres.  I'm not going to submit a documentation patch, though,
> because having used only one DB type for bacula I don't consider 
> myself qualified to write a comparison.
> 
> Of course, if there is no significant advantage, you can just ignore
> that whole last paragraph :)
> 
> I do appreciate all the work that you (and others) have put into bacula,
> including the informative and timely answers on this list.

I use MySQL. It works fine, though I did need to add indexes. Of course,
my catalog is only about 400MB, but I've only ever used MySQL for
anything so it was not worth learning anything else.

- -- 
-  _  _ _  _ ___  _  _  _
|Y#| |  | |\/| |  \ |\ |  | |Ryan Novosielski - Sr. Systems Programmer
|$&| |__| |  | |__/ | \| _| |novos...@umdnj.edu - 973/972.0922 (2-0922)
\__/ Univ. of Med. and Dent.|IST/CST-Academic Svcs. - ADMC 450, Newark
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.10 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAk05GUIACgkQmb+gadEcsb6mngCfZJe1eGgJsDEl7CKUN+rhw6TM
8TgAoLF6kENBthOwp7C6JwGZbtXEb4w1
=eoUp
-END PGP SIGNATURE-
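"Did need to add indexes" is worth expanding for new readers: the usual suspect is the File table, which dominates the catalog. An index along these lines is what is commonly suggested for MySQL catalogs (check the current Bacula catalog-maintenance notes before applying; the index name here is illustrative):

```
CREATE INDEX file_jpf_idx ON File (JobId, PathId, FilenameId);
```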