[Bacula-users] Volume management for offsite backups?

2013-08-21 Thread James Youngman
I'm using Bacula  5.2.6 (packaged by Debian; 5.2.6+dfsg-9) with
vchanger 0.8.6 (compiled myself, not packaged) on a set of removable
SATA drives, and a 4-slot SATA dock.   I think it's stable now (the
latest hurdle was figuring out how to enable the port multiplier
feature on the eSATA port on the storage server).  I haven't been
using it for long enough to be sure that there won't be problems as
things get to the steady-state (e.g. that volumes will actually begin
to expire before I run out of space on the removable drives).   The
drives vary in size a bit (some are 1TB and others are 3TB) and
perhaps that will cause pain later.

Anyway, I now need to get a few more things organised.

1. Manual backups
The use case is very large data which changes so rarely that a manual
backup is sufficient.  My assumption is that I should define a job for
this which backs the data up to a separate pool.  That way I can
manually label the volumes on a vchanger disk and associate them with
that pool, none of the scheduled backups will need that disk, and it
won't matter that I temporarily need all the slots in the SATA dock
(displacing the disks used for scheduled backups) while performing the
manual backup.  Any pitfalls here (apart from low space utilisation of
the disks in that pool)?
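
Roughly, what I have in mind is something like this in bacula-dir.conf
(pool, job, client and fileset names below are all placeholders, not my
real configuration):

```
# Sketch only -- all names are placeholders.
Pool {
  Name = ManualPool
  Pool Type = Backup
  Storage = vchanger-1      # the vchanger autochanger Storage resource
  AutoPrune = no            # keep manually labelled volumes until I purge them
  Recycle = no
}

Job {
  Name = "BigDataManual"
  Type = Backup
  Level = Full
  Client = bigdata-fd       # placeholder
  FileSet = "BigDataSet"    # placeholder
  Pool = ManualPool
  Messages = Standard
  # No Schedule -- run only by hand from bconsole: run job=BigDataManual
}
```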

2. Offsite backups
I'd like to make sure that my backups (especially full backups) are
geographically diverse.   As a minimum, I would like to maintain the
invariant that every machine has at least two full backups not more
than N days old, in separate locations.   Obviously I will have to
move the disks physically (i.e. carry some of them to a remote
location myself), but in terms of configuration, how do I get Bacula
to establish and maintain this kind of invariant?   I'd prefer
for this not to require the backups for the "offsite" versions to be
manual, because I'd like to keep the process as fast for the operator
(myself) as possible.   Waiting around for a manual full backup just
so that I can pull the disk and take it out of the building is a
definite no-no (since the backups take long enough for this to be
annoying).  Can I get Bacula to tell me when to remove a specific
removable disk and take it offsite?
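
One mechanism I have read about but not tried is Bacula's Copy jobs:
give the normal full-backup pool a Next Pool pointing at an offsite
pool, and schedule a Copy job with Selection Type = PoolUncopiedJobs,
so the copies happen automatically and the operator only swaps and
carries disks.  A sketch (untested; all names are placeholders):

```
# Sketch only -- untested; names are placeholders.
Pool {
  Name = FullPool            # the normal full-backup pool
  Pool Type = Backup
  Next Pool = OffsitePool    # destination for Copy jobs
}

Pool {
  Name = OffsitePool
  Pool Type = Backup
  Storage = vchanger-1       # volumes on the disks that travel offsite
}

Job {
  Name = "CopyFullsOffsite"
  Type = Copy
  Level = Full
  Pool = FullPool                      # copy jobs found in this pool
  Selection Type = PoolUncopiedJobs    # every full not yet copied
  Client = bigdata-fd                  # required by the parser, not used for selection
  FileSet = "BigDataSet"               # likewise a placeholder
  Messages = Standard
  Schedule = "WeeklyCycle"             # placeholder schedule
}
```

Would something along these lines give me the "two copies, two
locations" invariant, or is there a better-trodden path?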

3. Physical Disk labelling
I understand how volumes get labelled when they're tapes.   I
understand something about tape barcodes.  But, presumably at some
point Bacula will want a particular volume to be available for a
backup.   What's the most useful way to associate volume names with
physical media so that I can locate the correct physical disk to
insert?   Manually specifying a volume name prefix for all the
vchanger volumes on a disk, and physically labelling the disk with
that prefix?   What do you do yourself for labelling removable disks?
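
To make the question concrete, what I had imagined was something like
this in bconsole (storage, pool and volume names invented), with the
same prefix written on the disk in marker pen:

```
* label storage=vchanger-1 volume=OffsiteA_0001 slot=1 pool=ManualPool
* label storage=vchanger-1 volume=OffsiteA_0002 slot=2 pool=ManualPool
```

Then when Bacula asks for volume OffsiteA_0002 I know which physical
disk to fetch.  Is that how others do it?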

Just in case it helps, I attach the relevant Bacula configurations.


bacula-dir.conf
Description: Binary data


bacula-sd.conf
Description: Binary data


vchanger-1.conf
Description: Binary data
--
Introducing Performance Central, a new site from SourceForge and 
AppDynamics. Performance Central is your source for news, insights, 
analysis and resources for efficient Application Performance Management. 
Visit us today!
http://pubads.g.doubleclick.net/gampad/clk?id=48897511&iu=/4140/ostg.clktrk
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] btape vs. dd: Strange behavior on LTO-5

2013-08-21 Thread Andreas Koch

Hi all,

I am stumped by the behavior of an HP LTO-5 drive running on Scientific
Linux 6.4 (Kernel 2.6.32-358.14.1.el6.x86_64) and Bacula 5.2.12.
Specifically, using dd, I can read and write block sizes of 2 MB, but btape
cannot reliably handle anything larger than 128 KB (fails on reading, see
below).

However, when I use dd to read back the tape on which btape has
supposedly written two files of 10000 blocks each (btape itself fails
after reading 3616 of them), I read each btape-written 10000-block
``file'' as _three_ actual tape files (of 3616+3616+2768=10000
blocks).  Note that this test was performed with a block size of 512KB
(see the Device definition from bacula-sd.conf, below).

I would be grateful for any ideas on how to resolve this. With the smaller
block sizes, the backup is noticeably slower for compressible data (e.g.,
database dumps), so I really would like to move back up to larger block sizes.

Many thanks in advance,
  Andreas Koch

gundabad ~ # btape -c /etc/bacula/bacula-sd.conf /dev/nst0
Tape block granularity is 1024 bytes.
btape: butil.c:290 Using device: "/dev/nst0" for writing.
btape: btape.c:477 open device "LTO-4" (/dev/nst0): OK
*test

=== Write, rewind, and re-read test ===

I'm going to write 10000 records and an EOF
then write 10000 records and an EOF, then rewind,
and re-read the data to verify that it is correct.

This is an *essential* feature ...

btape: btape.c:1157 Wrote 10000 blocks of 524188 bytes.
btape: btape.c:609 Wrote 1 EOF to "LTO-4" (/dev/nst0)
btape: btape.c:1173 Wrote 10000 blocks of 524188 bytes.
btape: btape.c:609 Wrote 1 EOF to "LTO-4" (/dev/nst0)
btape: btape.c:1215 Rewind OK.
Got EOF on tape.
btape: btape.c:1233 Read block 3617 failed! ERR=Success
*q
btape: smartall.c:404 Orphaned buffer: btape 280 bytes at 15e55e8 from jcr.c:362
gundabad ~ # mt -f /dev/nst0 rewind
gundabad ~ # dd if=/dev/nst0 of=/dev/null bs=512k
3616+0 records in
3616+0 records out
1895825408 bytes (1.9 GB) copied, 3.7062 s, 512 MB/s
gundabad ~ # dd if=/dev/nst0 of=/dev/null bs=512k
3616+0 records in
3616+0 records out
1895825408 bytes (1.9 GB) copied, 3.7542 s, 505 MB/s
gundabad ~ # dd if=/dev/nst0 of=/dev/null bs=512k
2768+0 records in
2768+0 records out
1451229184 bytes (1.5 GB) copied, 2.88829 s, 502 MB/s
gundabad ~ # dd if=/dev/nst0 of=/dev/null bs=512k
3616+0 records in
3616+0 records out
1895825408 bytes (1.9 GB) copied, 3.75554 s, 505 MB/s
gundabad ~ # dd if=/dev/nst0 of=/dev/null bs=512k
3616+0 records in
3616+0 records out
1895825408 bytes (1.9 GB) copied, 3.75338 s, 505 MB/s
gundabad ~ # dd if=/dev/nst0 of=/dev/null bs=512k
2768+0 records in
2768+0 records out
1451229184 bytes (1.5 GB) copied, 2.88846 s, 502 MB/s
gundabad ~ # dd if=/dev/nst0 of=/dev/null bs=512k
0+0 records in
0+0 records out
0 bytes (0 B) copied, 0.247548 s, 0.0 kB/s

Device {
  Name = LTO-5
  Media Type = LTO-5
  Archive Device = /dev/nst0
  AutomaticMount = yes;   # when device opened, read it
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  Maximum File Size = 8g;
  Minimum block size = 524288
  Maximum block size = 524288
  Changer Device = /dev/changer
  AutoChanger = yes
  # AHK we want to interrogate the drive, not the changer
  Alert Command = "sh -c 'smartctl -H -l error /dev/sg11'"
  Maximum Spool Size = 3000g
  Spool Directory = /etc/bacula/spooldisk/BaculaSpool
  Maximum Network Buffer Size = 65536
}
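
As a sanity check on the dd transcript above (plain arithmetic, no
tape required): each full pass is 3616 records of 512 KiB, the short
pass is 2768 records, and the three passes per btape-written file sum
to 10000 blocks:

```shell
#!/bin/sh
# Cross-check the byte and block counts quoted in the dd transcript above.
bs=524288                          # dd bs=512k, in bytes
echo "$((3616 * bs))"              # prints 1895825408, matching the 1.9 GB passes
echo "$((2768 * bs))"              # prints 1451229184, matching the 1.5 GB passes
echo "$((3616 + 3616 + 2768))"     # prints 10000 blocks per btape-written file
```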





Re: [Bacula-users] How to prevent large files from being backed up?

2013-08-21 Thread Martin Simmons
> On Tue, 20 Aug 2013 16:03:51 -0400, Phil Stracchino said:
> 
> Now, the above is a bit of a brute-force solution.  I have not
> personally tried this refinement, but I see no reason it should not
> ALSO be possible to create a static fileset with a dynamically
> generated exclude list, something like this.
> 
> FileSet {
>   Name = "Dynamic Exclude Set"
>   Include {
>  Options {
> signature = SHA1
> File  = "|sh -c 'find /home -size +10G'"
> Exclude   = yes
>  }
>  File = /
>  File = /home
>  File = /var
>   }
> }
> 
> This example should result in automatically excluding any file 10GB or
> larger located anywhere under /home.

Unfortunately you can't put File inside the Options clause, so that can't be
used to generate a dynamic exclude list.

You can however add it to an exclude clause like this:

  Exclude {
File = "|sh -c 'find /home -size +10G'"
  }

That will work as long as none of the wild or regex patterns in the options
clauses match the excluded files (unless they are also using Exclude=yes).
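
To preview exactly what Bacula will receive from such a pipe, the
command can be run by hand.  A small demonstration (the temporary
directory and the 1M threshold are invented for the demo; the real
clause would use /home and +10G):

```shell
#!/bin/sh
# Show the one-path-per-line list a "|sh -c 'find ...'" File line feeds Bacula.
dir=$(mktemp -d)                                               # invented demo dir
dd if=/dev/zero of="$dir/big.dat" bs=1M count=2 2>/dev/null    # 2 MB: matches +1M
dd if=/dev/zero of="$dir/small.dat" bs=1k count=4 2>/dev/null  # 4 KB: does not
sh -c "find $dir -size +1M"                                    # prints only .../big.dat
rm -rf -- "$dir"
```

Bacula treats each line of that output as one File entry of the
Exclude clause, so newlines in file names would be a problem; for
/home paths that is usually acceptable.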

__Martin



Re: [Bacula-users] LIP reset occurred (0), device removed.

2013-08-21 Thread Iban Cabrillo
Hi,
  It seems that changing the fibre did the trick.  More than 30GB have
now been copied to the LTO5 tape without any trouble.

Regards, I


2013/8/20 Clark, Patricia A. 

> Possibly the fiber cable?  Can you swap it out and try again?
> I had 20 fiber connections and one of them needed replacing.  It made the
> drive look bad from the server, but the library did not report any error
> conditions with the drive.
>
> Patti Clark
> Linux System Administrator
> Research and Development Systems Support Oak Ridge National Laboratory
>
> From: Iban Cabrillo <cabri...@ifca.unican.es>
> Date: Tuesday, August 20, 2013 12:45 PM
> To: Radosław Korzeniewski <rados...@korzeniewski.net>
> Cc: Bacula Users <Bacula-users@lists.sourceforge.net>
> Subject: Re: [Bacula-users] LIP reset occurred (0), device removed.
>
> Hi,
>   this is the qlogic driver version:
>
>   [1.348682] qla2xxx [:07:00.0]-00fa:1: QLogic Fibre Channed HBA
> Driver: 8.03.07.12-k.
> [1.348686] qla2xxx [:07:00.0]-00fb:1: QLogic QLE2562 - QLogic 8Gb
> FC Dual-port HBA for System x.
> [1.348699] qla2xxx [:07:00.0]-00fc:1: ISP2532: PCIe (2.5GT/s x8) @
> :07:00.0 hdma+ host#=1 fw=5.06.05 (90d5).
> [1.348768] qla2xxx [:07:00.1]-001d: : Found an ISP2532 irq 19
> iobase 0xc9c68000.
> [1.348949] qla2xxx :07:00.1: irq 44 for MSI/MSI-X
> [1.348956] qla2xxx :07:00.1: irq 45 for MSI/MSI-X
> [1.349023] qla2xxx [:07:00.1]-0040:2: Configuring PCI space...
> [1.349028] qla2xxx :07:00.1: setting latency timer to 64
> [1.361424] qla2xxx [:07:00.1]-0061:2: Configure NVRAM parameters...
> [1.369088] qla2xxx [:07:00.1]-0078:2: Verifying loaded RISC code...
> [1.369131] qla2xxx [:07:00.1]-0092:2: Loading via request-firmware.
> [1.400994] qla2xxx [:07:00.1]-00c0:2: Allocate (64 KB) for FCE...
> [1.401064] qla2xxx [:07:00.1]-00c3:2: Allocated (64 KB) EFT ...
> [1.401150] qla2xxx [:07:00.1]-00c5:2: Allocated (1350 KB) for
> firmware dump.
> [1.405503] scsi2 : qla2xxx
> [1.405763] qla2xxx [:07:00.1]-00fa:2: QLogic Fibre Channed HBA
> Driver: 8.03.07.12-k.
> [1.405767] qla2xxx [:07:00.1]-00fb:2: QLogic QLE2562 - QLogic 8Gb
> FC Dual-port HBA for System x.
> [1.405779] qla2xxx [:07:00.1]-00fc:2: ISP2532: PCIe (2.5GT/s x8) @
> :07:00.1 hdma+ host#=2 fw=5.06.05 (90d5).
> ..
> [2.928979] scsi 1:0:0:0: Sequential-Access IBM  ULT3580-TD3
>  93GP PQ: 0 ANSI: 3
> [2.935530] st: Version 20101219, fixed bufsize 32768, s/g segs 256
> [2.935861] st 1:0:0:0: Attached scsi tape st0
> [2.935864] st 1:0:0:0: st0: try direct i/o: yes (alignment 4 B)
> [3.939166] qla2xxx [:07:00.1]-500a:2: LOOP UP detected (8 Gbps).
> [3.997944] scsi 2:0:0:0: Sequential-Access IBM  ULT3580-TD5
>  C7RC PQ: 0 ANSI: 6
> [4.000531] scsi 2:0:0:1: Medium ChangerIBM  03584L32
> B570 PQ: 0 ANSI: 6
> [4.012583] st 2:0:0:0: Attached scsi tape st1
> [4.012586] st 2:0:0:0: st1: try direct i/o: yes (alignment 4 B)
> [4.013947] osst :I: Tape driver with OnStream support version 0.99.4
> [4.013948] osst :I: $Id: osst.c,v 1.73 2005/01/01 21:13:34 wriede Exp $
> [4.014691] SCSI Media Changer driver v0.25
> [4.016286] ch0: type #1 (mt): 0x1+2 [medium transport]
> [4.016289] ch0: type #2 (st): 0x401+1601 [storage]
> [4.016291] ch0: type #3 (ie): 0x301+10 [import/export]
> [4.016293] ch0: type #4 (dt): 0x101+2 [data transfer]
>
>
> [1:0:0:0]tapeIBM  ULT3580-TD3  93GP  /dev/st0
> [2:0:0:0]tapeIBM  ULT3580-TD5  C7RC  /dev/st1
> [2:0:0:1]mediumx IBM  03584L32 B570  /dev/sch0
>
> Sometimes the reset happens when only a few MB have been written,
> other times after a couple of hundred MB, but only when we use the
> LTO5 ([2:0:0:0] tape IBM ULT3580-TD5 C7RC /dev/st1); the LTO3 always
> works great.
>
> the linux kernel running is :
> Linux bacula 3.2.0-4-amd64 #1 SMP Debian 3.2.46-1 x86_64 GNU/Linux
>
> and the bacula-sd configuration for LT05 tape:
> Device {
>   # The TSM3100's second tape drive
>   Name = ULT3580-TD5
>   Archive Device = /dev/nst1
>   Device Type = Tape
>   Media Type = LTO-5
>   Autochanger = Yes
>   # Changer Device = 
>   Alert Command = "sh -c '/usr/sbin/tapeinfo -f /dev/sg1 | /bin/sed -n
> /TapeAlert/p'"
>   Drive Index = 0
>   RemovableMedia = yes
>   Random Access = no
>   Maximum Block Size = 262144
>   Maximum Network Buffer Size = 262144
>   Maximum Spool Size = 20gb
>   Maximum Job Spool Size = 10gb
>   Spool Directory = /backup/spool
>   AutomaticMount = Yes;
> }
>
> root@bacula:~# /usr/sbin/tapeinfo -f /dev/sg1
> Product Type: Tape Drive
> Vendor ID: 'IBM '
> Product ID: 'ULT3580-TD5 '
> Revision: 'C7RC'
> Attached Changer API: No
> SerialNumber: '00078ABB4C'
> MinBlock: 1
> MaxBlock: 8388608
> SCSI ID: 0
> SCSI LUN: 0
> Ready: yes
> BufferedMode: yes
> Medium Type: 0x58
> Density Code: 0x58
> BlockSize: 0
> DataCompEnable