[Bacula-users] Best backup strategy with bacula and autochanger

2007-11-08 Thread S. Kremer
Hi,

I have to back up a NAS system with a total size of 6.5 TB. At the moment the NAS
system holds only ~2 TB of data that has to be backed up, but over the next two
years the data will grow to as much as 6 TB.

I have installed Debian Etch on the NAS system, with Samba and Bacula as the backup
application. I use a Sony StorStation LIB-81 autochanger with 7 AIT-4 tapes
(200 GB native / 520 GB compressed). Slot 8 holds a cleaning tape.

What is the best backup strategy for this scenario, and how should the tapes
be handled with Bacula and the autochanger?

I would like to back up incrementally/differentially each day from Monday to
Thursday. On every Friday except the last Friday of the month I would like to do a
full backup, and likewise on the last Friday of the month.

Is this a good concept for backing up the data, or does anyone here have better
suggestions?


Stefan



Re: [Bacula-users] If there's a problem, it just hangs

2007-11-08 Thread Chris Howells
[EMAIL PROTECTED] wrote:
 Actually, looks like bacula did something else. It changed the configuration 
 of the tape drive as well. I had it set as one large drive and bacula 
 reconfigured it to segments of 64 drives. 

I'd be surprised if that is due to bacula.

 Must be something funky between bacula and the storage system no? Can bacula 
 talk with a storage device at the configuration level?

No, Bacula has no knowledge of such things. All it does is write data to
the tape device and talk to the autochanger via mtx-changer and mtx.
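
To make that split concrete, here is a rough sketch of how a bacula-sd.conf
separates the two paths; the names, devices and paths below are illustrative
only, not taken from the original poster's setup:

Autochanger {
  Name = "Changer-1"
  Device = Drive-1
  Changer Device = /dev/sg0      # control device, only touched by mtx-changer/mtx
  Changer Command = "/etc/bacula/mtx-changer %c %o %S %a %d"
}

Device {
  Name = Drive-1
  Media Type = LTO-2             # assumed media type
  Archive Device = /dev/nst0     # the only thing Bacula itself writes to
  Autochanger = yes
  AutomaticMount = yes
  AlwaysOpen = yes
}

The Changer Device/Changer Command pair is the only place the changer is
touched, which is why problems at that level usually point at mtx or the
changer itself rather than at Bacula.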



Re: [Bacula-users] [Bacula-devel] Bacula using one drive in a Vchanger

2007-11-08 Thread Arno Lehmann
Hi Kern,

vchanger is a helper script from Josh Fisher. You can find it by 
looking for the mail Removable Disk HOWTO by Josh in bacula-users. 
He's done a really nice howto, I think...

I could forward that mail, if you prefer.

Regards,

Arno

-- 
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de



Re: [Bacula-users] scsi problems

2007-11-08 Thread Michael Galloway
Progress with this issue: I submitted a bug report to Adaptec and they
finally provided some suggestions to help resolve it. Here is what
they recommended:

   Go to Configure/View Host Adapter Settings.

   If the SCSI Controller does not have the system boot device attached,
   disable the BIOS. On SCSI Controllers with 2 channels, the BIOS of the
   channel that does not have the boot device can be disabled.

   To do this, go to Advanced Configuration and set SCSI Controller Int
   13 Support to Disabled. If you boot from a SCSI device attached with
   the SCSI controller, leave the SCSI Controller Int 13 Support at
   Enabled.

   Under Advanced Configuration set Domain Validation to Disabled.

   Press Esc to exit.

   Go to SCSI Device Configuration.

   For the SCSI ID of the tape drive or tape library, set Initiate Wide
   Negotiation to No. This will automatically change the Sync Transfer
   Rate to 40MB/s, Packetized to No, QAS to No, and BIOS
   Multiple LUN Support to No. BIOS Multiple LUN Support can be changed
   back to Yes if needed.

   For the SCSI ID of the tape drive or tape library, set Enable
   Disconnection to No.

   For the SCSI ID of the tape drive or tape library, set Send Start Unit
   Command to No.

   Press Esc twice to exit, save the changes.

   Press Esc again, exit the utility and reboot the system.

With these changes implemented, the btape test passes (with a couple of
modifications to bacula-sd.conf) and the autochanger test passes.

Out of curiosity, what are others with LTO-4 using for SCSI adapters?

-- michael

On Mon, Oct 29, 2007 at 09:43:09PM -0400, Michael Galloway wrote:
 seem to be having some scsi problems with btape test. this test is with
 a spectra T50/LTO-4 attached via an LSI LSIU320 controller. i ran 100GB
 of data onto the drive with tar with no issue. but when i run this:
 
 ./btape -c bacula-sd.conf /dev/nst0
 
 test
 
 i get:
 
 *test
 
 === Write, rewind, and re-read test ===
 
 I'm going to write 1000 records and an EOF
 then write 1000 records and an EOF, then rewind,
 and re-read the data to verify that it is correct.
 
 This is an *essential* feature ...
 
 btape: btape.c:827 Wrote 1000 blocks of 64412 bytes.
 btape: btape.c:501 Wrote 1 EOF to LTO4 (/dev/nst0)
 btape: btape.c:843 Wrote 1000 blocks of 64412 bytes.
 btape: btape.c:501 Wrote 1 EOF to LTO4 (/dev/nst0)
 btape: btape.c:852 Rewind OK.
 1000 blocks re-read correctly.
 29-Oct 21:27 btape JobId 0: Error: block.c:995 Read error on fd=3 at file:blk 
 0:1000 on device LTO4 (/dev/nst0). ERR=No such device or address.
 btape: btape.c:864 Read block 1001 failed! ERR=No such device or address
 
 and in the kernel ring buffer log:
 
 st0: Block limits 1 - 16777215 bytes.
 mptbase: ioc0: LogInfo(0x11010f00): F/W: bug! MID not found
 mptbase: ioc0: LogInfo(0x11010f00): F/W: bug! MID not found
 mptbase: ioc0: IOCStatus(0x004b): SCSI IOC Terminated
 st0: Error 8 (sugg. bt 0x0, driver bt 0x0, host bt 0x8).
 mptscsih: ioc0: attempting task abort! (sc=81011bf35240)
 st 5:0:15:0: 
 command: Read(6): 08 00 00 fc 00 00
 mptbase: Initiating ioc0 recovery
 mptscsih: ioc0: task abort: SUCCESS (sc=81011bf35240)
 mptbase: ioc0: IOCStatus(0x0043): SCSI Device Not There
 mptscsih: ioc0: attempting target reset! (sc=81011bf35240)
 st 5:0:15:0: 
 command: Read(6): 08 00 00 fc 00 00
 mptscsih: ioc0: target reset: SUCCESS (sc=81011bf35240)
 mptbase: ioc0: IOCStatus(0x0043): SCSI Device Not There
 mptscsih: ioc0: attempting bus reset! (sc=81011bf35240)
 st 5:0:15:0: 
 command: Read(6): 08 00 00 fc 00 00
 mptscsih: ioc0: bus reset: SUCCESS (sc=81011bf35240)
 mptbase: ioc0: IOCStatus(0x0047): SCSI Protocol Error
 mptscsih: ioc0: Attempting host reset! (sc=81011bf35240)
 mptbase: Initiating ioc0 recovery
 mptbase: ioc0: IOCStatus(0x0047): SCSI Protocol Error
 st 5:0:15:0: scsi: Device offlined - not ready after error recovery
 st0: Error 8 (sugg. bt 0x0, driver bt 0x0, host bt 0x8).
  target5:0:15: Beginning Domain Validation
  target5:0:15: Domain Validation skipping write tests
  target5:0:15: Ending Domain Validation
  target5:0:15: FAST-80 WIDE SCSI 160.0 MB/s DT (12.5 ns, offset 126)
  target5:0:15: Beginning Domain Validation
  target5:0:15: Domain Validation skipping write tests
  target5:0:15: Ending Domain Validation
  target5:0:15: FAST-80 WIDE SCSI 160.0 MB/s DT (12.5 ns, offset 126)
  target5:0:15: Beginning Domain Validation
  target5:0:15: Domain Validation skipping write tests
  target5:0:15: Ending Domain Validation
  target5:0:15: FAST-80 WIDE SCSI 160.0 MB/s DT (12.5 ns, offset 126)
  target5:0:15: Beginning Domain Validation
  target5:0:15: Domain Validation skipping write tests
  target5:0:15: Ending Domain Validation
  target5:0:15: FAST-80 WIDE SCSI 160.0 MB/s DT (12.5 ns, offset 126)
 
 i've reseated my cables and terminator. reseated the scsi card. any idea
 where the problem is? this is centOS 5, kernel is:
 
 2.6.18-8.1.14.el5 #1 

Re: [Bacula-users] scsi problems

2007-11-08 Thread Michael Lewinger
Hi Michael.

Firstly, I'm glad you solved the problem and shared it with the list.

I'm also having problems with the SCSI tape drive I'm trying to use, so maybe I'll
profit from your experience as well. Have you succeeded in pinpointing the
relevant change?

Michael




Re: [Bacula-users] Best backup strategy with bacula and autochanger

2007-11-08 Thread Arno Lehmann
Hello,

08.11.2007 11:32, S. Kremer wrote:
 Hi,
 
 I have to back up a NAS system with a total size of 6.5 TB. At the
 moment the NAS system holds only ~2 TB of data that has to be backed
 up, but over the next two years the data will grow to as much as 6 TB.
 
 I have installed Debian Etch on the NAS system, with Samba and
 Bacula as the backup application. I use a Sony StorStation LIB-81
 autochanger with 7 AIT-4 tapes (200 GB native / 520 GB compressed).
 Slot 8 holds a cleaning tape.

7 slots à 200 GB is 1.4 TB native. Even at the rated 2.6:1 compression
(7 × 520 GB ≈ 3.6 TB, which is typically much too optimistic) you might
realistically end up at a bit above 2 TB.

That's obviously not enough space to handle a full backup of 6 TB.

 What is the best backup strategy for this scenario and how to
 handle the tapes with bacula and the autochanger.

You can run full backups only with intervention in between, which 
pretty much rules out unattended backups during the weekend.

 I would like to back up incrementally/differentially each day from Monday to Thursday.

That shouldn't be a problem with your setup.

You should set up the necessary jobs and schedules yourself - or get 
commercial support if you don't want to. Examples are freely available 
here, advice too, but you should offer us a starting point, i.e. a 
configuration and any problems you encounter.

 On every Friday except the last Friday of the month I would like to do a
 full backup, and likewise on the last Friday of the month.

This will get difficult.

 Is this a good concept for backing up the data, or does anyone here have
 better suggestions?

Basically, it's usable. Whatever strategy you implement simply has to
fit your needs.

If you want monthly full backups for archival, that can be done.

Storing the full 6 TB during one weekend will more or less be 
impossible with your current equipment.
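
For what it's worth, the weekly cycle described above can be expressed in a
Schedule resource along these lines. The name, pools and times are made-up
examples, and because the schedule syntax of this Bacula generation may not
have a direct way to say "last Friday", the monthly full is approximated here
by running it on the first Friday instead:

Schedule {
  Name = "NAS-Cycle"
  Run = Level=Full Pool=Monthly 1st fri at 21:00
  Run = Level=Full Pool=Weekly 2nd-5th fri at 21:00
  Run = Level=Incremental Pool=Daily mon-thu at 21:00
}

Whether those weekly fulls actually fit onto the available tapes is the
separate problem discussed above.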

 Stefan

Arno

-- 
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de



Re: [Bacula-users] scsi problems

2007-11-08 Thread Michael Galloway
Not yet - I just got this late yesterday and ran some preliminary testing just
to be sure it's working. I will start undoing the changes today and see if
I can find the relevant factor.

-- michael

On Thu, Nov 08, 2007 at 02:28:55PM +0200, Michael Lewinger wrote:
 Hi Michael.
 
 Firstly, I'm glad you solved the problem and shared it with the list.
 
 I'm also having problems with the SCSI tape drive I'm trying to use, so maybe I'll
 profit from your experience as well. Have you succeeded in pinpointing the
 relevant change?
 
 Michael
 
 
 



Re: [Bacula-users] Automatic tape eject after daily job - LTO-2

2007-11-08 Thread Erik P. Olsen
Matias Schwalm wrote:
 Hi folks,
 
 I have a Dell server with a built-in LTO-2 tape drive. My goal is that after
 each daily job the tape gets ejected. I tried it with mt rewind and mt
 eject in a RunAfterJob script, but it doesn't work; the tape doesn't seem to
 be unmounted after the job. Here is my bacula-sd.conf. Any suggestions? And, by the
 way, how do I teach Bacula that it has to mount the tape itself?
 
 # BackupLTO2
 #---
 Device {
 Name = "LTO-2"
 Media Type = "LTO-2"
 Archive Device = /dev/nst0
 Removable Media = yes
 RandomAccess = no
 Autochanger = no
 Automatic Mount = yes
 Always Open = No
 Label Media = Yes
 Alert Command = "sh -c 'tapeinfo -f %c |grep TapeAlert|cat'"
 }

Use: RunAfterJob = "your-eject-tape-script"
and create your-eject-tape-script containing:

#!/bin/sh
#
# This script ejects a tape
#
echo "unmount tape-device" | bacula-path/bconsole -c bacula-path/bconsole.conf

(The echo statement goes on one line.)
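
A slightly more complete version of that idea, as a sketch only (the storage
name, device and paths are assumptions that have to match your own
bacula-dir.conf and system):

#!/bin/sh
# eject-tape.sh - release the volume in Bacula, then physically eject it
BCONSOLE=/usr/sbin/bconsole
BCONF=/etc/bacula/bconsole.conf

echo "unmount storage=LTO-2" | $BCONSOLE -c $BCONF
mt -f /dev/nst0 offline

It would then be referenced from the Job resource with
RunAfterJob = "/path/to/eject-tape.sh".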

-- 
Erik.




Re: [Bacula-users] Restore from file really really slow

2007-11-08 Thread Julian Perry
  I read somewhere that this problem is solved in newer versions of
  bacula (I think it was 2.2.*).

  The thought of upgrading the whole environment to
  a 2.+ release is more than I could cope with right
  now!

 I only have about 10 clients, but I did the whole thing in about 35 mins
 the other day. I've been using the version provided by BlastWave, which
 provides an easy update mechanism. They also provide the SMF information
 used in Solaris 10 instead of an init script. For Linux, I've just been
 using the RPMs that are already available. 1.38.11 is a little bigger of
 an upgrade than 2.0.3 to 2.2.5 (which is what I did), but the upgrade
 from 1.38.11 to 2.0.3 (that is when I switched to the CSW packages) was
 pretty painless too.

We use Solaris, Linux, HP-UX, FreeBSD and Windows - so upgrades
are painful.  However - I have now upgraded the server to 2.2.5
and it talks to old FD's on all the clients (except HP-UX) fine.
I will of course get around to upgrading all the FD's in due
course.

However ... while the 2.2.5 restore code is clearly using seek()
to get to the right place in the volume (file) - which takes
about 1 day off the restore time - it still reads really really
slowly.  I can only assume this is caused by the fact the files
are NFS mounted - I just can't understand why it would be so
slow when writing the volumes is very fast!

Your thoughts welcome - thanks.
--
Cheers
Jules.





Re: [Bacula-users] BAT

2007-11-08 Thread Alan Brown
On Wed, 7 Nov 2007, Dep, Khushil (GE Money) wrote:

 Now I know the QT4 libs are installed but it seems that pkg-config
 doesn't know about them. I know this is OT, but I thought I'd ask if
 anyone knew where I could get help or any further reading about
 this. I'm forced to use RHEL4u4 instead of my own flavour of choice -
 Debian, so any help or pointers would be much appreciated.

Please do: rpm -qa | grep qt

I'd like to see if you have qt (3.3) and qt4 (4.x) installed as separate
packages, as I've just flagged this up as a problem in RHEL5 with the src
rpm packager.




Re: [Bacula-users] If there's a problem, it just hangs

2007-11-08 Thread Chris Howells
[EMAIL PROTECTED] wrote:
 For example;
 
 # ./mtx-changer /dev/sg0 unload 1 /dev/nst0 0
 
 It took the tape out of the drive, then hung with;
 Storage Element 1 is Already Full
 
 That's it, just hangs. Now I have to restart the tape device. Should bacula 
 not handle this or is there another problem?

mtx-changer is just a script which calls a command such as 'mtx'. Check 
whether mtx can handle your autochanger (you should be able to find this 
out from google or this list's archives). If not, your life is more 
difficult. If it can, you need to figure out what mtx-changer is doing 
that your autochanger doesn't like. The script is well commented and has 
a debugging option. Have a play around.
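
A few commands that tend to be useful for that, reusing the same /dev/sg0 and
/dev/nst0 from the example above (mtx-changer's arguments are: control device,
command, slot, archive device, drive index):

# what mtx itself thinks of the changer
mtx -f /dev/sg0 status

# exercise the wrapper the way Bacula calls it
./mtx-changer /dev/sg0 slots  0 /dev/nst0 0
./mtx-changer /dev/sg0 list   0 /dev/nst0 0
./mtx-changer /dev/sg0 loaded 0 /dev/nst0 0
./mtx-changer /dev/sg0 load   1 /dev/nst0 0
./mtx-changer /dev/sg0 unload 1 /dev/nst0 0

If mtx status already misbehaves, the problem sits below Bacula; if only the
unload step hangs, comparing it with a plain "mtx -f /dev/sg0 unload 1 0"
narrows things down further.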



[Bacula-users] Automatic tape eject after daily job - LTO-2

2007-11-08 Thread Matias Schwalm
Hi folks,

I have a Dell server with a built-in LTO-2 tape drive. My goal is that after
each daily job the tape gets ejected. I tried it with mt rewind and mt
eject in a RunAfterJob script, but it doesn't work; the tape doesn't seem to
be unmounted after the job. Here is my bacula-sd.conf. Any suggestions? And, by the way,
how do I teach Bacula that it has to mount the tape itself?

# BackupLTO2
#---
Device {
Name = "LTO-2"
Media Type = "LTO-2"
Archive Device = /dev/nst0
Removable Media = yes
RandomAccess = no
Autochanger = no
Automatic Mount = yes
Always Open = No
Label Media = Yes
Alert Command = "sh -c 'tapeinfo -f %c |grep TapeAlert|cat'"
}

Regards,

Matthias





Re: [Bacula-users] Tapes(Volumes) put into ERROR status

2007-11-08 Thread Arno Lehmann
Hi,

08.11.2007 15:17, Win Htin wrote:
 Hi folks,
 
 Just as I thought I have fixed my problems, I am again stumped with 
 Volumes being put into Error with  /*Error: Unable to position to end 
 of data on device*/ messages.
 
 As previously mentioned in my other post, all the tapes were erased once 
 over to make sure everything starts cleanly. The tapes were bought brand 
 new from IBM and some were used previously for my Bacula tests (btape + 
 actual backups).
 
 analysing the pattern, I suspect once all the backups are done EOF/EOD 
 is not written to the tape/volume causing next run(s) to put the tape 
 into Error and use the next available volume.

Have you set Two EOF = Yes in the SD configuration? Seems like 
setting that, and BSF at EOM, too, might help you... you'll have to try.
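
In the Device resource of bacula-sd.conf that would look roughly like the
following (only a sketch - the surrounding directives are assumed from the log
messages above, and whether these settings are really needed depends on the
drive, so re-run btape afterwards):

Device {
  Name = LTO4_1                  # device name taken from the log above
  Archive Device = /dev/IBMtape0
  Media Type = LTO-4             # assumed
  Autochanger = yes
  TWO EOF = yes                  # write two EOF marks when a tape is closed
  BSF at EOM = yes               # backspace over the last EOF before appending
}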

Arno

 Following is part of the output from btape tests I did earlier.
 
 Doing Bacula scan of blocks:
 1 block of 64448 bytes in file 1
 End of File mark.
 2 blocks of 64448 bytes in file 2
 End of File mark.
 3 blocks of 64448 bytes in file 3
 End of File mark.
 1 block of 64448 bytes in file 4
 End of File mark.
 * 17-Oct 09:43 btape JobId 0: Error: block.c:995 Read error on fd=3 at 
 file:blk 4:0 on device LTO4_1 (/dev/IBMtape0). ERR=Input/output error. *
 Total files=4, blocks=7, bytes = 451,136
 End scanning the tape.
 We should be in file 4. I am at file 4. This is correct!
 
 Following is the excerpt of last night's backups.
 Job        Backup        Volume   Pool              start     finish    Status
 -------------------------------------------------------------------------------
 NFSSERVER  Full          625AAL   AllFulls          23:00:00  23:21:40  Appended to Volume 625AAL
 APP06      Differential  401AAL   AllDifferentials  23:21:42  23:25:36  Put 400AAL in Error, used 401AAL
 07-Nov 23:22 nfsserver-sd JobId 12: 3307 Issuing autochanger unload 
 slot 21, drive 0 command.
 07-Nov 23:23 nfsserver-sd JobId 12: 3304 Issuing autochanger load slot 
 1, drive 0 command.
 07-Nov 23:24 nfsserver-sd JobId 12: 3305 Autochanger load slot 1, drive 
 0, status is OK.
 07-Nov 23:24 nfsserver-sd JobId 12: 3301 Issuing autochanger loaded? 
 drive 0 command.
 07-Nov 23:24 nfsserver-sd JobId 12: 3302 Autochanger loaded? drive 0, 
 result is Slot 1.
 07-Nov 23:24 nfsserver-sd JobId 12: Volume 400AAL previously written, 
 moving to end of data.
 07-Nov 23:24 nfsserver-sd JobId 12: Error: Unable to position to end of 
 data on device LTO4_1 (/dev/IBMtape0): ERR= dev.c:1326 read error on 
 LTO4_1 (/dev/IBMtape0). ERR=Input/output error.
 
 07-Nov 23:24 nfsserver-sd JobId 12: Marking Volume 400AAL in Error in 
 Catalog.
 07-Nov 23:24 nfsserver-sd JobId 12: 3307 Issuing autochanger unload 
 slot 1, drive 0 command.
 07-Nov 23:24 nfsserver-sd JobId 12: 3304 Issuing autochanger load slot 
 2, drive 0 command.
 07-Nov 23:25 nfsserver-sd JobId 12: 3305 Autochanger load slot 2, drive 
 0, status is OK.
 07-Nov 23:25 nfsserver-sd JobId 12: 3301 Issuing autochanger loaded? 
 drive 0 command.
 07-Nov 23:25 nfsserver-sd JobId 12: 3302 Autochanger loaded? drive 0, 
 result is Slot 2.
 07-Nov 23:25 nfsserver-sd JobId 12: Wrote label to prelabeled Volume 
 401AAL on device LTO4_1 (/dev/IBMtape0)
 
 
 APP07      Differential  401AAL   AllDifferentials  23:25:38  23:25:52  Appended to Volume 401AAL
 APP08      Differential  401AAL   AllDifferentials  23:25:53  23:25:57  Appended to Volume 401AAL
 APP09      Differential  401AAL   AllDifferentials  23:25:59  23:26:31  Appended to Volume 401AAL
 APP10      Differential  401AAL   AllDifferentials  23:26:32  23:46:27  Appended to Volume 401AAL
 CATALOG    Full          627AAL   AllFulls          04:00:08  04:14:45  Put 625AAL in Error, used 627AAL
 

[Bacula-users] Tapes(Volumes) put into ERROR status

2007-11-08 Thread Win Htin
Hi folks,

Just as I thought I had fixed my problems, I am again stumped with Volumes
being put into Error with *Error: Unable to position to end of data on
device* messages.

As previously mentioned in my other post, all the tapes were erased once
over to make sure everything starts cleanly. The tapes were bought brand new
from IBM and some were used previously for my Bacula tests (btape + actual
backups).

Analysing the pattern, I suspect that once all the backups are done, EOF/EOD is
not written to the tape/volume, causing the next run(s) to put the tape into
Error and use the next available volume.

Following is part of the output from btape tests I did earlier.

Doing Bacula scan of blocks:
1 block of 64448 bytes in file 1
End of File mark.
2 blocks of 64448 bytes in file 2
End of File mark.
3 blocks of 64448 bytes in file 3
End of File mark.
1 block of 64448 bytes in file 4
End of File mark.
*17-Oct 09:43 btape JobId 0: Error: block.c:995 Read error on fd=3 at
file:blk 4:0 on device LTO4_1 (/dev/IBMtape0). ERR=Input/output error.*
Total files=4, blocks=7, bytes = 451,136
End scanning the tape.
We should be in file 4. I am at file 4. This is correct!

Following is the excerpt of last night's backups.
Job        Backup        Volume   Pool              start     finish    Status
-------------------------------------------------------------------------------
NFSSERVER  Full          625AAL   AllFulls          23:00:00  23:21:40  Appended to Volume 625AAL
APP06      Differential  401AAL   AllDifferentials  23:21:42  23:25:36  Put 400AAL in Error, used 401AAL
07-Nov 23:22 nfsserver-sd JobId 12: 3307 Issuing autochanger unload slot
21, drive 0 command.
07-Nov 23:23 nfsserver-sd JobId 12: 3304 Issuing autochanger load slot 1,
drive 0 command.
07-Nov 23:24 nfsserver-sd JobId 12: 3305 Autochanger load slot 1, drive 0,
status is OK.
07-Nov 23:24 nfsserver-sd JobId 12: 3301 Issuing autochanger loaded? drive
0 command.
07-Nov 23:24 nfsserver-sd JobId 12: 3302 Autochanger loaded? drive 0,
result is Slot 1.
07-Nov 23:24 nfsserver-sd JobId 12: Volume 400AAL previously written,
moving to end of data.
07-Nov 23:24 nfsserver-sd JobId 12: Error: Unable to position to end of data
on device LTO4_1 (/dev/IBMtape0): ERR=dev.c:1326 read error on LTO4_1
(/dev/IBMtape0). ERR=Input/output error.

07-Nov 23:24 nfsserver-sd JobId 12: Marking Volume 400AAL in Error in
Catalog.
07-Nov 23:24 nfsserver-sd JobId 12: 3307 Issuing autochanger unload slot 1,
drive 0 command.
07-Nov 23:24 nfsserver-sd JobId 12: 3304 Issuing autochanger load slot 2,
drive 0 command.
07-Nov 23:25 nfsserver-sd JobId 12: 3305 Autochanger load slot 2, drive 0,
status is OK.
07-Nov 23:25 nfsserver-sd JobId 12: 3301 Issuing autochanger loaded? drive
0 command.
07-Nov 23:25 nfsserver-sd JobId 12: 3302 Autochanger loaded? drive 0,
result is Slot 2.
07-Nov 23:25 nfsserver-sd JobId 12: Wrote label to prelabeled Volume
401AAL on device LTO4_1 (/dev/IBMtape0)


APP07      Differential  401AAL   AllDifferentials  23:25:38  23:25:52  Appended to Volume 401AAL
APP08      Differential  401AAL   AllDifferentials  23:25:53  23:25:57  Appended to Volume 401AAL
APP09      Differential  401AAL   AllDifferentials  23:25:59  23:26:31  Appended to Volume 401AAL
APP10      Differential  401AAL   AllDifferentials  23:26:32  23:46:27  Appended to Volume 401AAL
CATALOG    Full          627AAL   AllFulls          04:00:08  04:14:45  Put 625AAL in Error, used 627AAL
08-Nov 04:00 nfsserver-sd JobId 17: 3307 Issuing autochanger unload slot 2,
drive 0 command.
08-Nov 04:01 nfsserver-sd JobId 17: 3304 Issuing autochanger load slot 21,
drive 0 command.
08-Nov 04:02 nfsserver-sd JobId 17: 3305 Autochanger load slot 21, drive
0, status is OK.
08-Nov 04:02 nfsserver-sd JobId 17: 3301 Issuing autochanger loaded? drive
0 command.
08-Nov 04:02 nfsserver-sd JobId 17: 3302 Autochanger loaded? drive 0,
result is Slot 21.
08-Nov 04:02 nfsserver-sd JobId 17: Volume 625AAL previously written,
moving to end of data.
08-Nov 04:12 nfsserver-sd JobId 17: Error: Unable to position to end of data
on device LTO4_1 (/dev/IBMtape0): ERR=dev.c:1326 read error on LTO4_1
(/dev/IBMtape0). ERR=Input/output error.

08-Nov 04:12 nfsserver-sd JobId 17: Marking Volume 625AAL in Error in
Catalog.
08-Nov 04:13 nfsserver-sd JobId 17: 3307 Issuing autochanger unload slot
21, drive 0 command.
08-Nov 04:14 nfsserver-sd JobId 17: 3304 Issuing autochanger load slot 23,
drive 0 command.
08-Nov 04:14 nfsserver-sd JobId 17: 3305 Autochanger load slot 23, drive
0, status is OK.
08-Nov 04:14 nfsserver-sd JobId 17: 3301 Issuing autochanger loaded? drive
0 command.
08-Nov 04:14 nfsserver-sd JobId 17: 3302 

[Bacula-users] Tape full with Daily Tape Rotation

2007-11-08 Thread Abdel Amiche
Hi,

I copied the Daily Tape Rotation strategy I found at this link:
http://www.bacula.org/rel-manual/Backup_Strategies.html#SECTION003033000
It works well, but I have run into a problem: my pools are always in Append mode,
so before doing the backup the system moves to the end of data, even though the
retention period of the pool has passed.
So my tape becomes full and the system asks me to label a new volume, but I
chose to have only one volume per pool.

08-Nov 04:10 BRU1003B: Start Backup JobId 297,
Job=Backup_pstor003_Daily_Tape.2007-11-08_04.10.00
08-Nov 04:10 MainSD: Volume BRU1003B:002 previously written, moving to end
of data.
08-Nov 04:10 MainSD: Ready to append to end of Volume BRU1003B:002 at
file=8.
08-Nov 07:00 MainSD: End of Volume BRU1003B:002 at 34:15347 on device
DDS-4 (/dev/st0). Write of 64512 bytes got -1.
08-Nov 07:00 MainSD: Re-read of last block succeeded.
08-Nov 07:00 MainSD: End of medium on Volume BRU1003B:002
Bytes=31,786,675,200 Blocks=492,724 at 08-Nov-2007 07:00.
08-Nov 07:02 BRU1003B: Pruned 4 Jobs on Volume BRU1003B:002 from catalog.
08-Nov 07:02 MainSD: Job Backup_pstor003_Daily_Tape.2007-11-08_04.10.00
waiting. Cannot find any appendable volumes.

I want the tape to be overwritten before the system does the backup (without
changing my retention period of 6 days).
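
The Pool directives that normally control this look roughly like the sketch
below (the name and values are examples only; the key point is that a volume is
only recycled once it leaves Append status, which Volume Use Duration or
Maximum Volume Jobs can force, after which AutoPrune/Recycle may reuse it):

Pool {
  Name = Daily-Monday              # hypothetical pool name
  Pool Type = Backup
  Maximum Volumes = 1
  Recycle = yes                    # reuse purged volumes
  AutoPrune = yes                  # prune expired jobs at job end
  Volume Retention = 6 days
  Volume Use Duration = 23 hours   # pushes the volume out of Append status daily
  Recycle Current Volume = yes     # prefer recycling the mounted volume
}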

Thanks for your help

Abdel


[Bacula-users] Odd Restores

2007-11-08 Thread Dep, Khushil (GE Money)
Hey All,
 
Has anyone here had 'ghost' restores? Basically restores being started
that, well, no one has started! I got the two following e-mail
notifications:
 
08-Nov 14:31 MigWebSSCbkp1-dir JobId 361: Start Restore Job
DefaultRestoreJob.2007-11-08_14.31.05
08-Nov 14:31 MigWebSSCbkp1-dir JobId 361: Using Device
BackupStorageLocation
08-Nov 14:31 MigWebSSCbkp1-sd JobId 361: Ready to read from volume
BKP_FULL_0010 on device BackupStorageLocation
(/webssc_bkp/bacula/backups).
08-Nov 14:31 MigWebSSCbkp1-sd JobId 361: Forward spacing Volume
BKP_FULL_0010 to file:block 0:214.
08-Nov 14:31 MigWebSSCbkp1-sd JobId 361: End of Volume at file 0 on
device BackupStorageLocation (/webssc_bkp/bacula/backups), Volume
BKP_FULL_0010
08-Nov 14:31 MigWebSSCbkp1-sd JobId 361: End of all volumes.
08-Nov 14:31 MigWebSSCbkp1-dir JobId 361: Bacula MigWebSSCbkp1-dir 2.2.5
(09Oct07): 08-Nov-2007 14:31:50
Build OS: i686-pc-linux-gnu redhat Enterprise release
JobId: 361
Job: DefaultRestoreJob.2007-11-08_14.31.05
Restore Client: sscchbpdb01-fd
Start time: 08-Nov-2007 14:31:48
End time: 08-Nov-2007 14:31:50
Files Expected: 1
Files Restored: 1
Bytes Restored: 1,612,428
Rate: 806.2 KB/s
FD Errors: 0
FD termination status: OK
SD termination status: OK
Termination: Restore OK
08-Nov 14:31 MigWebSSCbkp1-dir JobId 361: Begin pruning Jobs.
08-Nov 14:31 MigWebSSCbkp1-dir JobId 361: Pruned 1 Job for client
sscchbpdb01-fd from catalog.
08-Nov 14:31 MigWebSSCbkp1-dir JobId 361: Begin pruning Files.
08-Nov 14:31 MigWebSSCbkp1-dir JobId 361: No Files found to prune.
08-Nov 14:31 MigWebSSCbkp1-dir JobId 361: End auto prune.

and

08-Nov 14:39 MigWebSSCbkp1-dir JobId 362: Start Restore Job
DefaultRestoreJob.2007-11-08_14.39.08
08-Nov 14:39 MigWebSSCbkp1-dir JobId 362: Using Device
BackupStorageLocation
08-Nov 14:39 MigWebSSCbkp1-sd JobId 362: Ready to read from volume
BKP_INCR_0011 on device BackupStorageLocation
(/webssc_bkp/bacula/backups).
08-Nov 14:39 MigWebSSCbkp1-sd JobId 362: Forward spacing Volume
BKP_INCR_0011 to file:block 0:221.
08-Nov 14:39 MigWebSSCbkp1-sd JobId 362: End of Volume at file 0 on
device BackupStorageLocation (/webssc_bkp/bacula/backups), Volume
BKP_INCR_0011
08-Nov 14:39 MigWebSSCbkp1-sd JobId 362: End of all volumes.
08-Nov 14:39 MigWebSSCbkp1-dir JobId 362: Bacula MigWebSSCbkp1-dir 2.2.5
(09Oct07): 08-Nov-2007 14:39:56
Build OS: i686-pc-linux-gnu redhat Enterprise release
JobId: 362
Job: DefaultRestoreJob.2007-11-08_14.39.08
Restore Client: sscchbpdb01-fd
Start time: 08-Nov-2007 14:39:54
End time: 08-Nov-2007 14:39:56
Files Expected: 1
Files Restored: 1
Bytes Restored: 2,097,152
Rate: 1048.6 KB/s
FD Errors: 0
FD termination status: OK
SD termination status: OK
Termination: Restore OK
08-Nov 14:39 MigWebSSCbkp1-dir JobId 362: Begin pruning Jobs.
08-Nov 14:39 MigWebSSCbkp1-dir JobId 362: No Jobs found to prune.
08-Nov 14:39 MigWebSSCbkp1-dir JobId 362: Begin pruning Files.
08-Nov 14:39 MigWebSSCbkp1-dir JobId 362: No Files found to prune.
08-Nov 14:39 MigWebSSCbkp1-dir JobId 362: End auto prune.

Now the thing is, only I have access to this machine and I certainly did not
kick these restores off. *ponders*


-Khush



[Bacula-users] VolumeToCatalog connecting to FD

2007-11-08 Thread Jason Martin
I performed a scheduled incremental backup (to a file-device) of
my system, followed by a VolumeToCatalog verify. The verify failed and included
the following log entries:

07-Nov 23:19 butler-dir JobId 428: Fatal error: verify.c:730
bdirdfiled: bad attributes from filed n=-2 : No data available
07-Nov 23:19 butler-dir JobId 428: Fatal error: Network error
with FD during Verify: ERR=No data available
07-Nov 23:19 butler-sd JobId 428: Job
MalVerify.2007-11-07_23.10.03 marked to be canceled.
07-Nov 23:19 butler-dir JobId 428: Fatal error: No Job status
returned from FD.

I don't understand why VolumeToCatalog involves the FD -- I would
think the FD is only involved in a DiskToCatalog check. Any
suggestions?
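
For reference, a verify job of this kind is defined roughly as below; the names
are hypothetical, and the Level is what selects VolumeToCatalog versus
DiskToCatalog behaviour:

Job {
  Name = "MalVerify"               # name taken from the log above
  Type = Verify
  Level = VolumeToCatalog          # read the volume back and compare to the catalog
  Client = butler-fd               # hypothetical client name
  FileSet = "Full Set"             # hypothetical
  Storage = File                   # hypothetical file-based storage
  Pool = Default
  Messages = Standard
}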

Thank you,
-Jason Martin



Re: [Bacula-users] BAT

2007-11-08 Thread Alan Brown
On Thu, 8 Nov 2007, Dep, Khushil (GE Money) wrote:

 In the end I ended up building QT4 and qwt from source! Oh give me back
 my debian boxes! *cries*

 Ta for all the help tho folks! :-)

No need for that, see the SPEC file I posted here a few days ago.




[Bacula-users] Backups not writing data to tapes

2007-11-08 Thread Eric Mauch
Hi,

I'm experiencing some issues with my Bacula tape backup.  My operating
system is Linux gentoo 1.12.6 and my Bacula version is 1.38.11.

 

My issue is that one of my backups reported a VolStatus of Full at only
16GB (the tape capacity is 200GB).  I then inserted a new tape that had
a VolStatus of Recycle, and its VolRetention of 28 days had expired since
the LastWritten date.  The problem is that nothing was written to the
new tape.  I was patient and made sure to let it sit for a while.

 

I've also tried a couple of other tapes (which the drive is recognizing
fine) and receive the same results.  I've manually set the VolStatus to
Recycle and that hasn't helped either.

 

I'd greatly appreciate any advice/suggestions.
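
A few bconsole commands that usually help narrow this kind of thing down
(nothing here is specific to this setup):

*list volumes       # VolStatus, VolBytes, retention and pool of every volume
*status dir         # scheduled/running jobs and what they are waiting for
*status storage     # what the SD thinks is mounted in the drive
*messages           # pending mount requests or intervention messages
*update volume      # interactively change VolStatus, retention, recycle flag
*mount              # tell the SD to examine/mount the newly inserted tape

The job log and the output of messages normally say exactly which volume the
SD is waiting for and why the inserted one was not acceptable.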

 

Eric

 

 



Re: [Bacula-users] BAT

2007-11-08 Thread Dep, Khushil (GE Money)
In the end I ended up building QT4 and qwt from source! Oh give me back
my debian boxes! *cries*

Ta for all the help tho folks! :-) 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Augusto
Camarotti
Sent: 08 November 2007 16:50
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] BAT

On Nov 8, 2007 12:48 PM, Augusto Camarotti [EMAIL PROTECTED] wrote:
 Try to check if pkgconfig it`s getting the right paths for the QT4 
 libs. This kind of problem happened to me, i had QT4 installed and 
 pkgconfig wasn`t finding them.

 Augusto


 On Nov 8, 2007 6:43 AM, Alan Brown [EMAIL PROTECTED] wrote:
  On Wed, 7 Nov 2007, Dep, Khushil (GE Money) wrote:
 
   Now I know the QT4 libs are installed but it seems that pkg-config

   doesn't know about them. I know this is OT so but I thought I'd 
   askf if anyone knew where I could get help from or any further 
   reading about this. I'm forced into tuse RHEL4u4 instead of my own

   flavour or choice - Debian, so any help or pointers would be much
appreciated.
 
  please do rpm -qa | grep qt
 
  I'd like to see if you have qt (3.3) and qt4 (4.x) installaed as 
  separate packages, as I've just flagged this up as a problem in 
  RHEL5 with the src rpm packager
 
 
  
 





Re: [Bacula-users] Doesn't want to recycle tape in the drive

2007-11-08 Thread Attila Fülöp
Adam Cécile wrote:
 Hi,
 
 I really need help ;)
 Moreover, do you think creating one pool for each weekday could be a
 good workaround?

Yes, at least I do it that way.

 Adam Cécile a écrit :
 Hi,

 Here is my problem:
 I have five tapes (Dailly-00[1-5]).
 Let's assume Daily-005 is purged and Daily-001 is used (but data expired).
 If Daily-001 is in the drive, Bacula won't recycle it; instead it will
 keep asking for Daily-005.
 If I set Daily-005 as disabled, Bacula automatically purges Daily-001 and
 recycles it.

 Any tips ?

 Best regards, Adam.

 (tested with bacula 2.2.0 and 2.2.5 on a Debian Etch system, with only 
 one external LTO2 Dell tape drive).

   
 
 





Re: [Bacula-users] Include statements in bacula-dir.conf file

2007-11-08 Thread Chris Howells
David Gardner wrote:

  Along the same line of reasoning, can the developers add the reload
  argument to the Bacula executable? This would be nice when a small
  tweak to the bacula-dir.conf file has been made and does not require
  a full restart of the system.

Already there, see the manual.

xxx:~# bconsole
Connecting to Director xxx:9101
1000 OK: xxx-dir Version: 2.2.5 (09 October 2007)
Enter a period to cancel a command.
*reload
You have messages.
*

And by full restart of the system you don't mean restart the entire 
computer, do you? ;)



[Bacula-users] Tape devices and tapeinfo devices (Problems with the list resolved :)

2007-11-08 Thread Augusto Camarotti
By the way, I just decided to use my Gmail account for subscribing to
bacula-users. It's much more reliable than my organization's e-mail, which was
giving me so much trouble. :D

I have a Seagate DAT72 tape drive.
In my system it's referenced as /dev/st0.
So I configured my bacula-sd.conf this way:

Device {
  Name = "DDS-72"
  Media Type = "DDS-72"
  Archive Device = /dev/st0
  AutomaticMount = yes;   # when device opened, read it
  AlwaysOpen = no;
  RemovableMedia = yes;
  RandomAccess = no;
# Changer Command = "/etc/bacula/mtx-changer %c %o %S %a %d"
  Changer Device = /dev/st0
  AutoChanger = no
  # Enable the Alert command only if you have the mtx package loaded
  Alert Command = "sh -c 'tapeinfo -f %c |grep TapeAlert|cat'"
# If you have smartctl, enable this, it has more info than tapeinfo
# Alert Command = "sh -c 'smartctl -H -l error %c'"
}

But tapeinfo was giving these errors when used with that device (/dev/st0):

mtx: Request Sense: Long Report=yes
mtx: Request Sense: Valid Residual=no
mtx: Request Sense: Error Code=0 (Unknown?!)
mtx: Request Sense: Sense Key=No Sense
mtx: Request Sense: FileMark=no
mtx: Request Sense: EOM=no
mtx: Request Sense: ILI=no
mtx: Request Sense: Additional Sense Code = 00
mtx: Request Sense: Additional Sense Qualifier = 00
mtx: Request Sense: BPV=no
mtx: Request Sense: Error in CDB=no
mtx: Request Sense: SKSV=no
INQUIRY Command Failed

And this was showing up after each Bacula job. Then I discovered that my tape
device has another name, /dev/sg4, which shows this when given to tapeinfo:

Product Type: Tape Drive
Vendor ID: 'SEAGATE '
Product ID: 'DATDAT72-052'
Revision: 'A060'
Attached Changer: No
SerialNumber: 'HV07G3D'
MinBlock:1
MaxBlock:16777215
SCSI ID: 6
SCSI LUN: 0
Ready: yes
BufferedMode: yes
Medium Type: 0x35
Density Code: 0x47
BlockSize: 0
DataCompEnabled: yes
DataCompCapable: yes
DataDeCompEnabled: yes
CompType: 0x20
DeCompType: 0x20
BOP: yes
Block Position: 0
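
One way to confirm which /dev/sg node really belongs to the /dev/st0 drive,
instead of guessing, is to use the sg3_utils / lsscsi tools (a sketch, assuming
they are installed):

sg_map -st            # prints each /dev/sgN next to its /dev/stN tape device
lsscsi -g             # lists SCSI devices with the generic (sg) node in the last column
cat /proc/scsi/scsi   # the kernel's own view of attached SCSI devices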

So if I change my storage configuration to the following, will I be OK?

Device {
  Name = "DDS-72"
  Media Type = "DDS-72"
  Archive Device = /dev/st0
  AutomaticMount = yes;   # when device opened, read it
  AlwaysOpen = no;
  RemovableMedia = yes;
  RandomAccess = no;
# Changer Command = "/etc/bacula/mtx-changer %c %o %S %a %d"
  Changer Device = /dev/sg4
  AutoChanger = no
  # Enable the Alert command only if you have the mtx package loaded
  Alert Command = "sh -c 'tapeinfo -f %c |grep TapeAlert|cat'"
# If you have smartctl, enable this, it has more info than tapeinfo
# Alert Command = "sh -c 'smartctl -H -l error %c'"
}

Am I doing the correct thing? Why is that? Am I supposed to use /dev/sg4 as
my tape device? Because I'm having problems with it, which I'll explain in my
next e-mail.

Regards,

Augusto Camarotti


Re: [Bacula-users] can't restore ACL of /tmp/bacula-restores/*

2007-11-08 Thread Attila Fülöp
Doug Sampson wrote:
 I am testing the restore function of Bacula 2.2.5 on FreeBSD 6.2 and in the
 process of restoring ~20,000 files recursively from the /var directory, I am
 seeing numerous error messages as follows:
 
 ..snip..
 06-Nov 17:37 aries-fd JobId 2326: Warning: restore.c:588 Can't restore ACL
 of
 /tmp/bacula-restores/var/db/portsnap/files/49aef80a863cf2a80ab801b5f23f311d4
 d005adb8a96f5477bb50deef559fc75.gz
 06-Nov 17:37 aries-fd JobId 2326: Warning: restore.c:588 Can't restore ACL
 of
 /tmp/bacula-restores/var/db/portsnap/files/ddee802463e6bc7c5bd56e434f0b3d2c6
 3fc1763a7cb05463a77059112850563.gz
 06-Nov 17:37 aries-fd JobId 2326: Warning: restore.c:588 Can't restore ACL
 of
 /tmp/bacula-restores/var/db/portsnap/files/02acc9644ba6bae17c35136eab31744c9
 a1c89abb234640bb6cec379815955c3.gz
 06-Nov 17:37 aries-fd JobId 2326: Warning: restore.c:588 Can't restore ACL
 of
 /tmp/bacula-restores/var/db/portsnap/files/a42b6836952001e817fd7fc03d2b16020
 af444ca37a1d4b1625b67119c32620b.gz
 06-Nov 17:37 aries-fd JobId 2326: Warning: restore.c:588 Can't restore ACL
 of
 /tmp/bacula-restores/var/db/portsnap/files/bdb4049dc8c64ba8272a4300e16f54db2
 29b418df694abab81a5b2c3f0843800.gz
 ..snip..

What kind of files are those? Any chance they are dangling symlinks?

 [EMAIL PROTECTED]:/tmp/bacula-restores# tail /var/log/messages
 Nov  6 17:23:16 aries postgres[24308]: [11-1] ERROR:  unrecognized
 configuration parameter standard_conforming_strings
 Nov  6 17:25:41 aries postgres[24316]: [11-1] ERROR:  unrecognized
 configuration parameter standard_conforming_strings
 Nov  6 17:26:11 aries postgres[24316]: [12-1] ERROR:  table delcandidates
 does not exist
 Nov  6 17:26:11 aries postgres[24316]: [13-1] ERROR:  index delinx1 does
 not exist
 Nov  6 17:26:11 aries postgres[24316]: [14-1] ERROR:  index delinx1 does
 not exist
 Nov  6 17:30:35 aries postgres[24328]: [11-1] ERROR:  unrecognized
 configuration parameter standard_conforming_strings
 Nov  6 17:37:44 aries postgres[24328]: [12-1] ERROR:  table delcandidates
 does not exist
 Nov  6 17:37:44 aries postgres[24328]: [13-1] ERROR:  index delinx1 does
 not exist
 Nov  6 17:37:44 aries postgres[24328]: [14-1] ERROR:  index delinx1 does
 not exist
 
 Is the Bacula database corrupted? Why aren't the ACLs being restored?
 
 ~Doug
 
 
 
 




Re: [Bacula-users] BAT

2007-11-08 Thread Augusto Camarotti
On Nov 8, 2007 12:48 PM, Augusto Camarotti [EMAIL PROTECTED] wrote:
 Try to check whether pkg-config is getting the right paths for the QT4
 libs. This kind of problem happened to me: I had QT4 installed and
 pkg-config wasn't finding them.

 Augusto


 On Nov 8, 2007 6:43 AM, Alan Brown [EMAIL PROTECTED] wrote:
  On Wed, 7 Nov 2007, Dep, Khushil (GE Money) wrote:
 
   Now I know the QT4 libs are installed but it seems that pkg-config
   doesn't know about them. I know this is OT, but I thought I'd ask if
   anyone knew where I could get help or any further reading about
   this. I'm forced to use RHEL4u4 instead of my own flavour of choice -
   Debian, so any help or pointers would be much appreciated.
 
  please do rpm -qa | grep qt
 
  I'd like to see if you have qt (3.3) and qt4 (4.x) installed as separate
  packages, as I've just flagged this up as a problem in RHEL5 with the src
  rpm packager.
 
 
 




Re: [Bacula-users] can't restore ACL of /tmp/bacula-restores/*

2007-11-08 Thread Attila Fülöp
Arno Lehmann wrote:
 Hi,
 
 07.11.2007 02:41, Doug Sampson wrote:
 I am testing the restore function of Bacula 2.2.5 on FreeBSD 6.2 and in the
 process of restoring ~20,000 files recursively from the /var directory, I am
 seeing numerous error messages as follows:

 ..snip..
 06-Nov 17:37 aries-fd JobId 2326: Warning: restore.c:588 Can't restore ACL
 of
 /tmp/bacula-restores/var/db/portsnap/files/49aef80a863cf2a80ab801b5f23f311d4
 d005adb8a96f5477bb50deef559fc75.gz
 
 Which file system is that? I suspect it's ZFS - ZFS ACLs are not 
 exactly supported at the moment. As far as I know, as long as you 
 don't use ACLs, there's nothing to be feared. Attila Fülöp wrote
 
 Just a few further notes: unless you are using ACLs on your zfs, your
 data should be safe. You just have to live with the annoying error
 messages. You can check for this: find /zfs_mount_point -acl prints
 all files with ACLs, and ls -V file_with_acl shows the associated ACL.
 
 He has a possible patch to fully support ZFS, but that needs work. See 
 the thread solaris zfs in the list archives...

Yes, the things missing are the regression scripts. The way the
regression scripts currently work poses some problems for implementing ACL
tests properly.

I also wrote

  If there is enough interest I could port the code to the current
  HEAD and start writing regression tests. A backport to 2.2.x
  shouldn't be a big problem then. The drawback is that my spare time
  is quite limited, so this may take a while.

and got no response at all, meaning that there seems to be no interest.
Of course I will work on that once I need zfs ACL support in bacula
myself ;-)

Attila

 ..snip..

 [EMAIL PROTECTED]:/tmp/bacula-restores# tail /var/log/messages
 Nov  6 17:23:16 aries postgres[24308]: [11-1] ERROR:  unrecognized
 configuration parameter standard_conforming_strings
 Nov  6 17:25:41 aries postgres[24316]: [11-1] ERROR:  unrecognized
 configuration parameter standard_conforming_strings
 Nov  6 17:26:11 aries postgres[24316]: [12-1] ERROR:  table delcandidates
 does not exist
 Nov  6 17:26:11 aries postgres[24316]: [13-1] ERROR:  index delinx1 does
 not exist
 Nov  6 17:26:11 aries postgres[24316]: [14-1] ERROR:  index delinx1 does
 not exist
 Nov  6 17:30:35 aries postgres[24328]: [11-1] ERROR:  unrecognized
 configuration parameter standard_conforming_strings
 Nov  6 17:37:44 aries postgres[24328]: [12-1] ERROR:  table delcandidates
 does not exist
 Nov  6 17:37:44 aries postgres[24328]: [13-1] ERROR:  index delinx1 does
 not exist
 Nov  6 17:37:44 aries postgres[24328]: [14-1] ERROR:  index delinx1 does
 not exist

  Is the Bacula database corrupted? Why aren't the ACLs being restored?
 
 The catalog thing is a different problem. Which version of Bacula do 
 you run, and which PostgreSQL version? You should check the version 
 requirements, or perhaps verify that your catalog holds all the tables 
 and indexes needed. Just compare what the script to create the tables 
 does, and what you actually have in your catalog.
 
 Arno
 
 ~Doug



 





[Bacula-users] Include statements in bacula-dir.conf file

2007-11-08 Thread David Gardner
Hey gang,

Can I break the bacula-dir.conf file into parts for easier management?
Specifically, does Bacula understand an include statement (or some variation thereof)
to read in the various parts of the bacula-dir.conf file?
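
(For the first question: Bacula's configuration parser can pull other files in
with an @ directive, so the director configuration can be split up, e.g.

@/etc/bacula/conf.d/clients.conf
@/etc/bacula/conf.d/jobs.conf
@/etc/bacula/conf.d/pools.conf

placed at the appropriate points in bacula-dir.conf; the file names here are
arbitrary examples.)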

Along the same line of reasoning, can the developers add the reload argument 
to the Bacula executable? This would be nice when a small tweak to the 
bacula-dir.conf file has been made and does not require a full restart of the 
system.
 
/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
David Gardner
email: djgardner(at)yahoo.com
Yahoo! IM: djgardner
AIM: dgardner09 
Everything is a learning experience, even a mistake.







Re: [Bacula-users] Include statements in bacula-dir.conf file

2007-11-08 Thread David Gardner
Heavens no, Chris!

I refer to Bacula as the system to restart.

My apologies to the list if the requested information is in the manual and I 
didn't find it. I've had a rough time implementing this system per my manager's 
requirements.

I'm used to working with Apache (among other systems), which recognizes 
include file-path/-name in the configuration file. Reloading the system is 
generally done from the command prompt.

Again, my apologies.
 
/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
David Gardner
email: djgardner(at)yahoo.com
Yahoo! IM: djgardner
AIM: dgardner09 
Everything is a learning experience, even a mistake.

- Original Message 
From: Chris Howells [EMAIL PROTECTED]
To: bacula-users@lists.sourceforge.net
Sent: Thursday, November 8, 2007 10:08:33 AM
Subject: Re: [Bacula-users] Include statements in bacula-dir.conf file


David Gardner wrote:

  Along the same line of reasoning, can the developers add the reload
  argument to the Bacula executable? This would be nice when a small
  tweak to the bacula-dir.conf file has been made and does not require
  a full restart of the system.

Already there, see the manual.

xxx:~# bconsole
Connecting to Director xxx:9101
1000 OK: xxx-dir Version: 2.2.5 (09 October 2007)
Enter a period to cancel a command.
*reload
You have messages.
*

And by full restart of the system you don't mean restart the entire 
computer, do you? ;)









Re: [Bacula-users] Include statements in bacula-dir.conf file

2007-11-08 Thread David Gardner
I'd like to again request the reload argument be an option to the bacula 
executable.

Perhaps I'm being a little too paranoid for my own good but if the last 
paragraph of the bconsole | reload section on pg 264 of the user guide is any 
indication, I would not feel comfortable adding new clients to a production 
situation in this manner.
 
/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
David Gardner
email: djgardner(at)yahoo.com
Yahoo! IM: djgardner
AIM: dgardner09 
Everything is a learning experience, even a mistake.

- Original Message 
From: Chris Howells [EMAIL PROTECTED]
To: bacula-users@lists.sourceforge.net
Sent: Thursday, November 8, 2007 10:08:33 AM
Subject: Re: [Bacula-users] Include statements in bacula-dir.conf file


David Gardner wrote:

  Along the same line of reasoning, can the developers add the reload
  argument to the Bacula executable? This would be nice when a small
  tweak to the bacula-dir.conf file has been made and does not require
  a full restart of the system.

Already there, see the manual.

xxx:~# bconsole
Connecting to Director xxx:9101
1000 OK: xxx-dir Version: 2.2.5 (09 October 2007)
Enter a period to cancel a command.
*reload
You have messages.
*

And by full restart of the system you don't mean restart the entire 
computer, do you? ;)









Re: [Bacula-users] Include statements in bacula-dir.conf file

2007-11-08 Thread John Drescher
On Nov 8, 2007 1:36 PM, David Gardner [EMAIL PROTECTED] wrote:
 I'd like to again request the reload argument be an option to the bacula 
 executable.

 Perhaps I'm being a little too paranoid for my own good but if the last 
 paragraph of the bconsole | reload section on pg 264 of the user guide is any 
 indication, I would not feel comfortable adding new clients to a production 
 situation in this manner.


Using the reload command is totally fine. I have used it several
hundred times in the 4 years I have used bacula at home and at work.
The only thing is you probably want to issue a test first:

 bacula-dir -t /etc/bacula/bacula-dir.conf


This will test whether your changes to the config files are correct. A good
reason for this is that, in the past, if you issued a reload command from the
console and your config file was bad, bacula would crash. Testing first
eliminates this problem.
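A minimal sketch of the whole sequence, assuming the Debian-style default
paths (adjust to your install) and using the fact that bconsole reads
commands from stdin:

 bacula-dir -t -c /etc/bacula/bacula-dir.conf && echo reload | bconsole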

John



Re: [Bacula-users] Include statements in bacula-dir.conf file

2007-11-08 Thread David Gardner
John,

Found it on pg 156, in the section related to including filenames in a fileset 
resource. Feels like I'm studying for an MCSE exam...
 
/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
David Gardner
email: djgardner(at)yahoo.com
Yahoo! IM: djgardner
AIM: dgardner09 
Everything is a learning experience, even a mistake.

- Original Message 
From: John Drescher [EMAIL PROTECTED]
To: David Gardner [EMAIL PROTECTED]
Cc: bacula-users@lists.sourceforge.net
Sent: Thursday, November 8, 2007 10:02:14 AM
Subject: Re: [Bacula-users] Include statements in bacula-dir.conf file


On Nov 8, 2007 12:55 PM, David Gardner [EMAIL PROTECTED] wrote:
 Hey gang,

 Can I break the bacula-dir.conf file in parts for easier management?
 Specifically, does bacula understand the include (or some variation
 thereof) to read in the various parts of the bacula-dir.conf file?

 Along the same line of reasoning, can the developers add the reload
 argument to the Bacula executable? This would be nice when a small
 tweak to the bacula-dir.conf file has been made and does not require a
 full restart of the system.



Here is my include section from my bacula-dir.conf file at home. The
key here is the @ symbol. BTW, this is in the manual.

@/etc/bacula/bacula-dir-filesets.conf
@/etc/bacula/bacula-dir-jobs.conf
@/etc/bacula/bacula-dir-jobdefs.conf
@/etc/bacula/bacula-dir-clients.conf
@/etc/bacula/bacula-dir-storage.conf
@/etc/bacula/bacula-dir-pools.conf
@/etc/bacula/bacula-dir-schedules.conf
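As an illustration, each of those files just holds the matching resources; a
hypothetical bacula-dir-clients.conf might contain nothing but Client
definitions (the name, address and password below are made up):

Client {
  Name = somebox-fd
  Address = somebox.example.com
  FDPort = 9102
  Catalog = MyCatalog
  Password = "changeme"
  File Retention = 30 days
  Job Retention = 6 months
  AutoPrune = yes
}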


John








Re: [Bacula-users] Tape devices and tapeinfo devices (Problems with the list resolved :)

2007-11-08 Thread Chris Howells
Augusto Camarotti wrote:

 I have a Seagate DAT72 tape drive.
 In my system it's referenced as /dev/st0.
 So I configured my bacula-sd.conf this way:

Do you actually have an autochanger? It wasn't clear to me from your 
description.

 
 Device {
   Name = DDS-72#
   Media Type = DDS-72
   Archive Device = /dev/st0
   AutomaticMount = yes;   # when device opened, read it
   AlwaysOpen = no;
   RemovableMedia = yes;
   RandomAccess = no;
 # Changer Command = /etc/bacula/mtx-changer %c %o %S %a %d
   Changer Device = /dev/st0

Won't work, st0 and nst0 are tape devices, not autochanger devices. You 
need the scsi generic sgN device.


 Am I doing the correct thing? Why is that? Am I supposed to use /dev/sg4 as
 my tape device? Cause i`m having problems with it, which I `ll explain in my

No. sg4 would be your autochanger (if you have one).

Autochangers and tape drives are controlled separately using different 
devices.



Re: [Bacula-users] Strange problem reusing a tape

2007-11-08 Thread Arno Lehmann
Hi,

08.11.2007 19:37,, Augusto Camarotti wrote::
 Im using Bacula 2.2.4 .
 
 Everything was doing fine and was doing my diary cicle just as usual. 
 Then, this error starts to happen everytime Bacula recycles a tape and 
 try to reuse it.
 One real example :
 
 Yesterday(Wednesday) Bacula canceled my last job (I`m using Max Wait 
 Time=5h) and said this :
 
 07-Nov 22:00 infoserver-dir: BeforeJob: run command 
 /etc/bacula/antes_do_backup-prpb2.sh
 07-Nov 22:00 infoserver-dir: Start Backup JobId 63, 
 Job=BackupPrpb2.2007-11-07_22.00.00
 08-Nov 03:00 infoserver-dir: BackupPrpb2.2007-11-07_22.00.00 Fatal 
 error: Max wait time exceeded. Job canceled.
 08-Nov 03:00 infoserver-sd: Job BackupPrpb2.2007-11-07_22.00.00 marked 
 to be canceled.
 08-Nov 03:00 infoserver-dir: 3000 Job BackupPrpb2.2007-11-07_22.00.00 
 marked to be canceled.
 08-Nov 03:00 infoserver-sd: Failed command: Jmsg 

I've never seen this. Could it be that the SD or DIR are running on a 
machine with serious hardware problems?

 Job=BackupPrpb2.2007-11-07_22.00.00 type=6 level=1194505235 
 infoserver-sd: Job BackupPrpb2.2007-11-07_22.00.00 marked to be canceled.
 
 08-Nov 03:00 infoserver-sd: BackupPrpb2.2007-11-07_22.00.00 Fatal error:
  Device DDS-72 with MediaType DDS-72 requested by DIR not found 
 in SD Device resources.

Looks like the SD is either misconfigured (unlikely if other jobs ran 
correctly) or suffers from hardware problems.

 08-Nov 03:00 infoserver-dir: BackupPrpb2.2007-11-07_22.00.00 Fatal error:
  Storage daemon didn't accept Device DDS-72 because:
  3924 Device DDS-72 not in SD Device resources.

Or your SD was (re-)started with the wrong configuration.

 08-Nov 03:00 infoserver-dir: Bacula infoserver-dir 2.2.4 (14Sep07): 
 08-Nov-2007 03:00:58
   Build OS:   i686-pc-linux-gnu suse 10.1
   JobId:  63
   Job:BackupPrpb2.2007-11-07_22.00.00
   Backup Level:   Full
   Client: infoserver-fd 2.2.4 (14Sep07) 
 i686-pc-linux-gnu,suse,10.1
   FileSet:PRPB_SERVER2 2007-09-28 11:27:55
   Pool:   Quarta (From Run pool override)
   Storage:DDS-72 (From Job resource)
   Scheduled time: 07-Nov-2007 22:00:00
   Start time: 07-Nov-2007 22:00:02
   End time:   08-Nov-2007 03:00:58
   Elapsed time:   5 hours 56 secs
   Priority:   10
   FD Files Written:   0
   SD Files Written:   0
   FD Bytes Written:   0 (0 B)
   SD Bytes Written:   0 (0 B)
   Rate:   0.0 KB/s
   Software Compression:   None
   VSS:no
   Encryption: no
   Volume name(s):
   Volume Session Id:  5
   Volume Session Time:1194285486
   Last Volume Bytes:  0 (0 B)
   Non-fatal FD errors:0
   SD Errors:  0
   FD termination status: 
   SD termination status: 
   Termination:Backup Canceled
 
 Ok, He didn`t get the proper tape to use. But then, in the other day 
 ,when i saw this message,  i tried to mount the tape that was in the 
 drive and got this :
 
 mount
 Select Storage resource (1-2): block.c:275 Volume data error at 0:0! 
 Wanted ID: BB02, got . Buffer discarded.

Again - Hardware problems. In this case it might be a rancid DDS tape. 
It wouldn't be the first...

 3902 Cannot mount Volume on Storage Device DDS-72 (/dev/st0) because:
 Requested Volume  on DDS-72 (/dev/st0) is not a Bacula labeled 
 Volume, because: ERR=block.c:275 Volume data error at 0:0! Wanted ID: 
 BB02, got . Buffer discarded.
 3905 Device DDS-72 (/dev/st0) open but no Bacula volume is mounted.
 If this is not a blank tape, try unmounting and remounting the Volume.
 
 Well, it seems like the tape it`s blank, but if you watch the media list 
 we have :
 Pool: Quarta
 +---------+------------+-----------+---------+----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
 | MediaId | VolumeName | VolStatus | Enabled | VolBytes       | VolFiles | VolRetention | Recycle | Slot | InChanger | MediaType | LastWritten         |
 +---------+------------+-----------+---------+----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
 |      15 | quarta     | Used      |       1 | 21,654,484,992 |       22 |      518,400 |       1 |    0 |         0 | DDS-72    | 2007-10-31 23:25:20 |
 +---------+------------+-----------+---------+----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
 It was used last Wednesday just fine. So the tape got erased from 
 nothing(I guess). Anyone have a clue on this problem?

DDS.

Quite unreliable, in my experience. Especially if the drive or the 
tape are really heavily used.


Arno

 Regards,
 
 Augusto Camarotti
 
 
 
 
 
 

Re: [Bacula-users] MySQL (Innodb) - table is full ???

2007-11-08 Thread Mike Seda
Hi All,
My bacula database (with MyISAM tables) is currently 5.3 GB in size 
after only 10 months of use.

Last weekend my File table filled up, which was easily fixed by doing 
the following as recommended at 
http://www.bacula.org/dev-manual/Catalog_Maintenance.html#SECTION00244
 
:
ALTER TABLE File MAX_ROWS=281474976710656;

But, the above command made me wonder if I will fill the File table 
again in the future. It also made me consider migrating my tables from 
MyISAM to InnoDB. Do you think the migration is worth the hassle? I 
should mention that I do AutoPrune my normal backups, but I must keep my 
archival backups indefinitely. These archival backups total over 2 TB 
per month.

Btw, with the rate at which my users generate data it is conceivable 
that the normal and archival backups will continue to grow in size. Fyi, 
my autochanger is stackable, which means that I can just buy another 
unit and have 38 new slots (and possibly 2 more drives) instantly 
available within the same storage resource. I mention this to make clear 
that I am only worried about the limitations of my *database* storage, 
not tape storage.

Any thoughts?

Regards,
Mike


Drew Bentley wrote:
 On 8/17/07, Alan Brown [EMAIL PROTECTED] wrote:
   
 On Fri, 17 Aug 2007, Drew Bentley wrote:

 
 Yeah, autoextend for InnoDB seems to have bitten you. I usually never
 do this and have monitors to tell me if it's reaching a certain
 threshold, as you're probably not even using all of the InnoDB space
 allocated, as it's not particularly nice in giving back space that was
 once used, at least in my experience.
   
 Is there any way to see how much it's actually using?


 

 Not that I'm aware of, only show table status and or innodb status
 will print out the usage. If you perform a full dump and reinsert,
 you're always going to gain usage in space.

 -drew


   




Re: [Bacula-users] 'label barcodes' and adding volumes to a particular pool

2007-11-08 Thread Chris Howells
Hi,

Arno Lehmann wrote:

 Run it several times with different slot ranges, like
 'label barcodes slots=1-3 pool=F01' and so on.

Brilliant - thank you. The fact that you can use slots and pool seems to 
be missing from the docs, so I guess I should work on trying to add it :)

 
 It only seems to want to allow me to put the tapes into 
 a single pool. I could allow it to put everything into the default pool 
 but there doesn't seem to be a way of moving volumes to a different pool 
 with 'add' and 'delete' of each one which is a bit of a pain.
 
 Use 'update volume=F01T01L4 pool=F01' and so on. Or use the menu after 
 'update volume=F01T01L4', of course.

Ah great too, very useful :)

Thank you.



Re: [Bacula-users] Bacula using one drive in a Vchanger

2007-11-08 Thread Elie Azar

Hi Arno,

Thanks for the tip; that really helped. I set Prefer Mounted Volume = no
and jobs started getting on the other virtual drives as I expected. I did
not know that this directive affected concurrency...
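For reference, a minimal sketch of the two directives involved (the resource
names below are placeholders, not my real config):

Job {
  Name = SomeBackupJob
  ...
  Prefer Mounted Volume = no   # let jobs spread across the virtual drives
}

Device {
  Name = BLV01-drive-0
  ...
  Maximum Concurrent Jobs = 1  # one job per (virtual) drive at a time
}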

Anyway, I'm still testing, and of course facing other problems that I'm
working my way through.

When I'm ready I will probably post a short note, and if our version of the
vchanger works well, then I will share it with the group, along with some
write up; but I've got to get it working first.

Thanks again,
Elie Azar





Arno Lehmann wrote:
 
 Hi,
 
 07.11.2007 01:14,, Elie Azar wrote::
 Hi Josh,
 
 I'm not Josh, but perhaps I see something, too :-)
 
 I have upgraded to bacula 2.2.5 and I'm still having the same problems.
 
 It seems like drive-1 in the vchanger is never used. Have you ever seen
 it
 used, and if so, what kind of configuration do I need; I followed the
 instruction in the HowTo document (Rev 0.7.4 2006-12-12). I tried many
 configurations but I still can't get it to run more than one job. If I
 start
 a second job it will fail.
 
 You probably still have the volumes set to accept only one Job, and the 
 jobs are probably set up to prefer the same volume.
 
 To get the desired results, you have to carefully adjust the job 
 concurreny settings, and not forget about the Prefer Mounted Volume 
 directive.
 
 By default, Bacula will try to run several jobs to a single volume if 
 one is already mounted.
 
 So either you set up your jobs to use different pools, or set Prefer 
 Mounted Volume to No. Also, the Maximum Concurrent Jobs setting for 
 the storage device should be limited. If you set up your volumes to 
 only accept one Job, you should also allow only one job going to the 
 storage devices at the same time.
 
 Does that make sense?
 
 Arno
 
 What I'm trying to accomplish is the following: I created an LVM disk
 using
 2x500GB disks. I created a vchanger with 2 virtual disks to backup to the
 LVM. Originally I created the vchangers with multiple 500GB disks, but I
 changed to use the LVM; that setup didn't work either. Even with one
 vchanger per 500GB disk, I still couldn't start more than one job at a
 time.
 I can send the relevant parts of my conf files if that helps.
 
 I would like to run concurrent jobs to backup to different volumes on the
 LVM disk. Bacula doesn't seem to be able to do that. Every time I start
 more
 than one job, each one after the first fails.
 
 Here is a sample console output illustrating this problem:
 
 *run
 A job name must be specified.
 The defined Job resources are:
  1: RestoreFiles
  2: BackupCatalog
  
 Select Job resource (1-122): 99
 Run Backup job
 JobName:  Redmail-FS
 Level:Incremental
 Client:   redmail-fd
 FileSet:  Redmail root dev dev-shm impulse
 Pool: BLV01Pool13 (From Job resource)
 Storage:  BLV01S (From Job resource)
 When: 2007-11-06 15:29:08
 Priority: 10
 OK to run? (yes/mod/no): yes
 Job queued. JobId=13736
 *
 *mes
 06-Nov 15:29 coal-dir JobId 13736: Start Backup JobId 13736,
 Job=Redmail-FS.2007-11-06_15.29.19
 06-Nov 15:29 coal-dir JobId 13736: Using Volume BLV01m01s006 from
 'Scratch' pool.
 06-Nov 15:29 coal-dir JobId 13736: Using Device BLV01-drive-0
 06-Nov 15:29 redmail-fd: DIR and FD clocks differ by 18 seconds, FD
 automatically adjusting.
 06-Nov 15:29 coal-sd JobId 13736: 3301 Issuing autochanger loaded? drive
 0
 command.
 06-Nov 15:29 coal-sd JobId 13736: 3302 Autochanger loaded? drive 0,
 result
 is Slot 5.
 06-Nov 15:29 coal-sd JobId 13736: 3307 Issuing autochanger unload slot
 5,
 drive 0 command.
 06-Nov 15:29 coal-sd JobId 13736: 3304 Issuing autochanger load slot 6,
 drive 0 command.
 06-Nov 15:29 coal-sd JobId 13736: 3305 Autochanger load slot 6, drive
 0,
 status is OK.
 06-Nov 15:29 coal-sd JobId 13736: 3301 Issuing autochanger loaded? drive
 0
 command.
 06-Nov 15:29 coal-sd JobId 13736: 3302 Autochanger loaded? drive 0,
 result
 is Slot 6.
 06-Nov 15:29 coal-sd JobId 13736: Wrote label to prelabeled Volume
 BLV01m01s006 on device BLV01-drive-0
 (/var/lib/bacula/vchanger/BLV01/drive0)
 06-Nov 15:29 coal-dir JobId 13736: Max Volume jobs exceeded. Marking
 Volume
 BLV01m01s006 as Used.
 redmail-fd:  /sys is a different filesystem. Will not descend from /
 into /sys
 *
 *
 *run
 A job name must be specified.
 The defined Job resources are:
  1: RestoreFiles
  2: BackupCatalog
  ...
 
 Select Job resource (1-122): 59
 Run Backup job
 JobName:  Linux2-Test1
 Level:Incremental
 Client:   linux2-fd
 FileSet:  Test Set
 Pool: BLV01Pool13 (From Job resource)
 Storage:  BLV01S (From Job resource)
 When: 2007-11-06 15:29:31
 Priority: 10
 OK to run? (yes/mod/no): yes
 Job queued. JobId=13737
 *mes
 06-Nov 15:29 coal-dir JobId 13737: Start Backup JobId 13737,
 Job=Linux2-Test1.2007-11-06_15.29.20
 06-Nov 15:29 coal-dir JobId 13737: There are no more Jobs associated with
 Volume BLV01m01s006. Marking it 

Re: [Bacula-users] MySQL (Innodb) - table is full ???

2007-11-08 Thread David Romerstein
On Thu, 8 Nov 2007, Mike Seda wrote:

 Hi All,
 My bacula database (with MyISAM tables) is currently 5.3 GB in size
 after only 10 months of use.

 Last weekend my File table filled up, which was easily fixed by doing
 the following as recommended at
 http://www.bacula.org/dev-manual/Catalog_Maintenance.html#SECTION00244
 :
 ALTER TABLE File MAX_ROWS=281474976710656;

Right. You now have room in your table for data on 281.5 trillion files.

 should mention that I do AutoPrune my normal backups, but I must keep my
 archival backups indefinitely. These archival backups total over 2 TB
 per month.

How many files are in each of your archival backups? At 100 million files 
per backup, you've got room in your DB for 2.8 million backup sessions.

Depending on which version of MySQL you've installed and the filesystem 
the database is stored on, it's possible that you'll eventually run into 
an issue with the physical size of the database files, but you're not going to 
run out of rows in the table any time soon.
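If you want to keep an eye on it, and your MySQL is new enough to have
information_schema (5.0+), something along these lines reports the current
on-disk size per table (the schema name 'bacula' is an assumption -- use
whatever your catalog database is called):

  SELECT table_name,
         ROUND((data_length + index_length)/1024/1024) AS size_mb
  FROM information_schema.tables
  WHERE table_schema = 'bacula';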

-- D



Re: [Bacula-users] Prevent Bacula from writing files to Catalog DB?

2007-11-08 Thread Chris Howells
Shon Stephens wrote:
 I remember reading somewhere that its possible to configure a Job so
 that the files and attributes are not added to the Catalog DB.
 However, I can't find this documentation now.

I'm not sure.

 Basically, I have a client with millions of files, and don't want all
 that going into the database. My understanding is that if a backup is

Why? This should only take a few hundred megabytes of catalog space at 
most.

 done without saving this information to the Catalog, that it can only
 be restored in a very specific way, and I won't be able to extract
 files by name.

You won't be able to restore files using 'restore' in bconsole because 
bacula doesn't know what files are on the volumes. You will have to use 
bls and bextract instead.
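Roughly like this (the SD config path, volume name and device name are
placeholders for your own setup):

  # list the files stored on a volume
  bls -c /etc/bacula/bacula-sd.conf -V Vol001 FileStorage

  # pull everything on that volume out to /tmp/bacula-restores
  bextract -c /etc/bacula/bacula-sd.conf -V Vol001 FileStorage /tmp/bacula-restores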

Basically, I would advise against what you are trying to do, unless you 
want to spend hours and hours every time you want to restore a file.




Re: [Bacula-users] continuation tapes

2007-11-08 Thread John Drescher
 Pool {
   Name = ThursdayPool
   Pool Type = Backup
   Recycle = yes
   AutoPrune = yes
   Volume Retention = 19d
   Volume Use Duration = 4d
   Maximum Volume Jobs = 4
 }

 Since 1_Friday_Week_3 was last written 2007-11-02 and more than 4d have
 passed, it's pretty clear that if Bacula was going to mark that volume
 'Used', it would have done so by now. It hasn't

 Short of marking the volume 'Used' manually, how can I ensure that the
 Volume is properly prevented from re-use during the 19 days of Volume
 Retention?

Did you change the Pool params after you labeled the tape?

John



Re: [Bacula-users] continuation tapes

2007-11-08 Thread Craig White
On Fri, 2007-11-02 at 23:05 +0100, Arno Lehmann wrote:
 Hi,
 
 02.11.2007 21:48,, Craig White wrote::
  On Fri, 2007-11-02 at 21:38 +0100, Arno Lehmann wrote:
  Hi,
 
  02.11.2007 21:26,, Craig White wrote::
  On Fri, 2007-11-02 at 15:08 -0400, John Drescher wrote:
  On 11/2/07, Craig White [EMAIL PROTECTED] wrote:
  I am a bit befuddled.
 
  I have a 'Full' backup which consists of 4 jobs but it extends onto 2
  tapes.
 
  the Pool permits 4 jobs
Maximum Volume Jobs = 4
Volume Retention = 19d
Volume Use Duration = 4d
 
  I believe Maximum Volume Jobs here will allow at most 4 jobs to a
  single volume and since there are only 2 jobs written to the last
  volume it can hold 2 more.
  
  I get that. I can't change the number to 2 because then it would stop
  after the second job and force me into a new tape leaving half the first
  tape filled and not enough room on second tape to complete the next 2
  jobs (the second being a standard BackupCatalog job).
 
  I find it hard to believe that I am the only one who is spanning tapes
  on a single Pool and wanting to have the last tape marked 'Used' instead
  of 'Append'
  True... but as it is, many users try to avoid problems they might run 
  into when jobs are added, or don't run for whatever reason, so they 
  don't limit the volume use by the number of jobs, but rather by the 
  time it can be used after an initial write ;-)
 
  So my suggestion is to use Volume Use Duration instead of the job 
  limit. Depending on your needs, limiting by size might also be useful 
  - Maximum Volume Bytes is the corresponding option.
 
  Does that help?
  
  in that I have set (as noted above) Volume Use Duration to 4d, if it
  marks the set as 'Used' on Tuesday (Friday, Saturday, Sunday, Monday)
  constituting the 4 days and then Bacula respects the 'Volume Retention',
  I am good to go.
 
 In my experience, Bacula does handle retention periods correctly.
 
  As you might expect, I am concerned that the next usage of ThursdayPool,
  that it asks for this Volume because it is still marked 'Append' and I
  would want it to ultimately ask for the one of the ThursdayPool volumes
  whose 'Volume Retention' has expired.
 
 Let's say your volume is marked as used on Thursday, and it's been 
 last written to on Thursday, too, and you've got a retention time of 
 six days, that volume could be recycled next weeks Thursday.
 
 If you swapped the volumes and updated the catalog accordingly, Bacula 
 will prefer volumes in the autochanger, i.e. when it needs a Thursday 
 volume, it will prune, recycle, and use that volume automatically.
 
 I think this is what you need.

OK - here it is the next Thursday.

Tape with 2 jobs from last Friday is not now, nor was ever marked 'Used'
- it is still marked 'Append'

# bconsole
Connecting to Director linserv1.mullenpr.com:9101
1000 OK: LINSERV1 Version: 2.2.5 (09 October 2007)
Enter a period to cancel a command.
*list media pool=ThursdayPool
Automatically selected Catalog: MyCatalog
Using Catalog MyCatalog
+---------+-------------------+-----------+---------+-----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
| MediaId | VolumeName        | VolStatus | Enabled | VolBytes        | VolFiles | VolRetention | Recycle | Slot | InChanger | MediaType | LastWritten         |
+---------+-------------------+-----------+---------+-----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
|      10 | 1_Friday_Week_3   | Append    |       1 |  39,130,656,768 |       41 |    1,641,600 |       1 |    0 |         0 | LTO       | 2007-11-02 10:50:33 |
|      11 | 1_Thursday_Week_3 | Full      |       1 | 130,561,191,936 |      132 |    1,641,600 |       1 |    0 |         0 | LTO       | 2007-11-02 03:30:19 |
|      16 | 1_Friday_Week_1   | Append    |       1 |          64,512 |        0 |    1,641,600 |       1 |    0 |         0 | LTO       | 0000-00-00 00:00:00 |
+---------+-------------------+-----------+---------+-----------------+----------+--------------+---------+------+-----------+-----------+---------------------+

Pool {
  Name = ThursdayPool
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 19d
  Volume Use Duration = 4d
  Maximum Volume Jobs = 4
}

Since 1_Friday_Week_3 was last written 2007-11-02 and more than 4d have
passed, it's pretty clear that if Bacula was going to mark that volume
'Used', it would have done so by now. It hasn't

Short of marking the volume 'Used' manually, how can I ensure that the
Volume is properly prevented from re-use during the 19 days of Volume
Retention?

Craig



Re: [Bacula-users] scsi problems

2007-11-08 Thread Ralf Gross
Michael Galloway schrieb:
 [...]
 with these changes implemented, the btape test passes (with a couple of
 modifications to bacula-sd.conf) and the autochanger test passes. 
 
 out of curiosity, what are others with LTO-4 using for scsi adapters?

LSI20320, no problems with bacula. The only issue is that both HP LTO-4
drives only negotiate U160 mode, although the controller offers U320
mode (min_period=0x08).

mptscsih: ioc0: debug_level=0200h
mptspi: ioc0: id=1 Requested = 0x (  factor = 0x00 @ offset = 0x00 )
  Vendor: HPModel: Ultrium 4-SCSIRev: B12H
  Type:   Sequential-Access  ANSI SCSI revision: 05
mptspi: ioc0: id=1 min_period=0x08 max_offset=0x7f max_width=1
st 5:0:1:0: Attached scsi tape st0
st0: try direct i/o: yes (alignment 512 B)
st 5:0:1:0: Attached scsi generic sg6 type 1


I tested this with an Adaptec 29320 and got the same U160 settings. But I
didn't test bacula with the Adaptec controller, so I can't say if it would have
worked with bacula.

Ralf



Re: [Bacula-users] 'label barcodes' and adding volumes to a particular pool

2007-11-08 Thread John Drescher
On Nov 8, 2007 7:20 PM, Chris Howells [EMAIL PROTECTED] wrote:
 Hi,

 Arno Lehmann wrote:

  Run it several times with different slot ranges, like
  'label barcodes slots=1-3 pool=F01' and so on.

 Brilliant - thank you. The fact that you can use slots and pool seems to
 be missing from the docs, so I guess I should work on trying to add it :)

 
  It only seems to want to allow me to put the tapes into
  a single pool. I could allow it to put everything into the default pool
  but there doesn't seem to be a way of moving volumes to a different pool
  with 'add' and 'delete' of each one which is a bit of a pain.
 
  Use 'update volume=F01T01L4 pool=F01' and so on. Or use the menu after
  'update volume=F01T01L4', of course.

 Ah great too, very useful :)

One other thing I would like to mention is that with my changer I have
bacula put all the tapes into the Scratch pool and then bacula will
grab tapes from this pool when no tape is available in the pool it is
using. I find this works better for me than pre-allocating tapes for
each pool that I use.

John



Re: [Bacula-users] VolumeToCatalog connecting to FD

2007-11-08 Thread Ralf Gross
Jason Martin schrieb:
 I performed a scheduled incremental backup (to a file-device) of
 my system, followed by a VolumeToCatalog verify. The verify failed and 
 included
 the following log entries:
 
 07-Nov 23:19 butler-dir JobId 428: Fatal error: verify.c:730
 bdirdfiled: bad attributes from filed n=-2 : No data available
 07-Nov 23:19 butler-dir JobId 428: Fatal error: Network error
 with FD during Verify: ERR=No data available
 07-Nov 23:19 butler-sd JobId 428: Job
 MalVerify.2007-11-07_23.10.03 marked to be canceled.
 07-Nov 23:19 butler-dir JobId 428: Fatal error: No Job status
 returned from FD.
 
 I don't understand why VolumeToCatalog involves the FD -- I would
 think the FD is only involved in a DiskToCatalog check. Any
 suggestions?

Yes, the verify code is in the FD. You can even use another client's
FD in your config for verify jobs. I use the director's FD for all of
my verify jobs; you just have to add 'client = otherfd' to the job
config. 
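As a rough sketch (all the names here are placeholders; Verify Job just points
the verify at the backup job whose catalog entries should be checked):

Job {
  Name = "MalVerify"
  Type = Verify
  Level = VolumeToCatalog
  Client = butler-fd          # the FD that does the comparing
  Verify Job = "MalBackup"    # hypothetical name of the backup job to check
  FileSet = "Full Set"
  Storage = File
  Pool = Default
  Messages = Standard
}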

Ralf



[Bacula-users] Strange problem reusing a tape

2007-11-08 Thread Augusto Camarotti
I'm using Bacula 2.2.4.

Everything was going fine, doing my daily cycle just as usual. Then
this error started to happen every time Bacula recycles a tape and tries to
reuse it.
One real example:

Yesterday (Wednesday) Bacula canceled my last job (I'm using Max Wait
Time=5h) and said this:

07-Nov 22:00 infoserver-dir: BeforeJob: run command
/etc/bacula/antes_do_backup-prpb2.sh
07-Nov 22:00 infoserver-dir: Start Backup JobId 63, Job=
BackupPrpb2.2007-11-07_22.00.00
08-Nov 03:00 infoserver-dir: BackupPrpb2.2007-11-07_22.00.00 Fatal error:
Max wait time exceeded. Job canceled.
08-Nov 03:00 infoserver-sd: Job BackupPrpb2.2007-11-07_22.00.00 marked to be
canceled.
08-Nov 03:00 infoserver-dir: 3000 Job BackupPrpb2.2007-11-07_22.00.00 marked
to be canceled.
08-Nov 03:00 infoserver-sd: Failed command: Jmsg Job=
BackupPrpb2.2007-11-07_22.00.00 type=6 level=1194505235 infoserver-sd: Job
BackupPrpb2.2007-11-07_22.00.00 marked to be canceled.

08-Nov 03:00 infoserver-sd: BackupPrpb2.2007-11-07_22.00.00 Fatal error:
 Device DDS-72 with MediaType DDS-72 requested by DIR not found in
SD Device resources.
08-Nov 03:00 infoserver-dir: BackupPrpb2.2007-11-07_22.00.00 Fatal error:
 Storage daemon didn't accept Device DDS-72 because:
 3924 Device DDS-72 not in SD Device resources.
08-Nov 03:00 infoserver-dir: Bacula infoserver-dir 2.2.4 (14Sep07):
08-Nov-2007 03:00:58
  Build OS:   i686-pc-linux-gnu suse 10.1
  JobId:  63
  Job:BackupPrpb2.2007-11-07_22.00.00
  Backup Level:   Full
  Client: infoserver-fd 2.2.4 (14Sep07)
i686-pc-linux-gnu,suse,10.1
  FileSet:PRPB_SERVER2 2007-09-28 11:27:55
  Pool:   Quarta (From Run pool override)
  Storage:DDS-72 (From Job resource)
  Scheduled time: 07-Nov-2007 22:00:00
  Start time: 07-Nov-2007 22:00:02
  End time:   08-Nov-2007 03:00:58
  Elapsed time:   5 hours 56 secs
  Priority:   10
  FD Files Written:   0
  SD Files Written:   0
  FD Bytes Written:   0 (0 B)
  SD Bytes Written:   0 (0 B)
  Rate:   0.0 KB/s
  Software Compression:   None
  VSS:no
  Encryption: no
  Volume name(s):
  Volume Session Id:  5
  Volume Session Time:1194285486
  Last Volume Bytes:  0 (0 B)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:
  SD termination status:
  Termination:Backup Canceled

OK, it didn't get the proper tape to use. But then, the next day, when I
saw this message, I tried to mount the tape that was in the drive and got
this:

mount
Select Storage resource (1-2): block.c:275 Volume data error at 0:0! Wanted
ID: BB02, got . Buffer discarded.
3902 Cannot mount Volume on Storage Device DDS-72 (/dev/st0) because:
Requested Volume  on DDS-72 (/dev/st0) is not a Bacula labeled Volume,
because: ERR=block.c:275 Volume data error at 0:0! Wanted ID: BB02, got
. Buffer discarded.
3905 Device DDS-72 (/dev/st0) open but no Bacula volume is mounted.
If this is not a blank tape, try unmounting and remounting the Volume.

Well, it seems like the tape is blank, but if you look at the media list we
have:
Pool: Quarta
+---------+------------+-----------+---------+----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
| MediaId | VolumeName | VolStatus | Enabled | VolBytes       | VolFiles | VolRetention | Recycle | Slot | InChanger | MediaType | LastWritten         |
+---------+------------+-----------+---------+----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
|      15 | quarta     | Used      |       1 | 21,654,484,992 |       22 |      518,400 |       1 |    0 |         0 | DDS-72    | 2007-10-31 23:25:20 |
+---------+------------+-----------+---------+----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
It was used last Wednesday just fine. So the tape got erased out of nowhere (I
guess). Does anyone have a clue about this problem?

Regards,

Augusto Camarotti


Re: [Bacula-users] Odd Restores

2007-11-08 Thread Michael Short
I can't say for sure, but you could have a JobDef assigned to the
restore job configuration which contains a schedule. However, I
haven't ever heard of jobs just starting by themselves.



Re: [Bacula-users] Tapes(Volumes) put into ERROR status

2007-11-08 Thread Win Htin
Hi Arno,

Added Two EOF = Yes and re-ran the btape test and the problem disappeared.
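In case it helps anyone searching the archives later, the change amounts to
adding this to the tape Device resource in bacula-sd.conf (the device name is
a placeholder for your own):

Device {
  Name = IBM-LTO
  ...
  Two EOF = yes       # write two EOF marks when terminating a tape
  # BSF at EOM = yes  # Arno's other suggestion; not needed in my case
}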

Thanks again!!

Win







 Hi,

 08.11.2007 15:17,, Win Htin wrote::
  Hi folks,
 
  Just as I thought I have fixed my problems, I am again stumped with
  Volumes being put into Error with "Error: Unable to position to end
  of data on device" messages.
 
  As previously mentioned in my other post, all the tapes were erased once
  over to make sure everything starts cleanly. The tapes were bought brand
  new from IBM and some were used previously for my Bacula tests (btape +
  actual backups).
 
  analysing the pattern, I suspect once all the backups are done EOF/EOD
  is not written to the tape/volume causing next run(s) to put the tape
  into Error and use the next available volume.

 Have you set Two EOF = Yes in the SD configuration? Seems like
 setting that, and BSF at EOM, too, might help you... you'll have to try.

 Arno



Re: [Bacula-users] Prevent Bacula from writing files to Catalog DB?

2007-11-08 Thread Arno Lehmann
Hi,

08.11.2007 21:56,, Shon Stephens wrote::
 I remember reading somewhere that its possible to configure a Job so
 that the files and attributes are not added to the Catalog DB.
 However, I can't find this documentation now.

In the pool you use, set Catalog Files = No.

This may get you into all sorts of trouble later, though...
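For example (the pool name is a placeholder; keep whatever other directives
you already use):

Pool {
  Name = BigClientPool
  Pool Type = Backup
  Catalog Files = no   # file attributes are not written to the catalog
}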

 Basically, I have a client with millions of files, and don't want all
 that going into the database. My understanding is that if a backup is
 done without saving this information to the Catalog, that it can only
 be restored in a very specific way, and I won't be able to extract
 files by name.

Yes, and this is the most problematic thing here... to restore a 
single file from that backup, you'll either have to use bextract, 
possibly loading the complete backup from the volumes to a disk, or 
bscan to add the contents to the catalog.

I wouldn't do it - the catalog will most probably not break when you 
have a few million file entries more.

Arno

 Can anyone clue me in re: this type of backup?
 
 Thanks,
 Shon
 
 

-- 
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de



Re: [Bacula-users] continuation tapes

2007-11-08 Thread Craig White
On Thu, 2007-11-08 at 16:19 -0500, John Drescher wrote:
  Pool {
Name = ThursdayPool
Pool Type = Backup
Recycle = yes
AutoPrune = yes
Volume Retention = 19d
Volume Use Duration = 4d
Maximum Volume Jobs = 4
  }
 
  Since 1_Friday_Week_3 was last written 2007-11-02 and more than 4d have
  passed, it's pretty clear that if Bacula was going to mark that volume
  'Used', it would have done so by now. It hasn't
 
  Short of marking the volume 'Used' manually, how can I ensure that the
  Volume is properly prevented from re-use during the 19 days of Volume
  Retention?
 
 Did you change the Pool params after you labeled the tape?

I am fairly certain that I haven't - this is now my 4th setup of
Bacula...but the first one where the 'Full' backup spans tapes, with 2 jobs on
the first tape and 2 jobs on the second tape.

I double checked though...nothing changed (volume_id 10 is the subject
tape)

see below

Craig

*update
Update choice:
 1: Volume parameters
 2: Pool from resource
 3: Slots from autochanger
Choose catalog item to update (1-3): 1
Parameters to modify:
 1: Volume Status
 2: Volume Retention Period
 3: Volume Use Duration
 4: Maximum Volume Jobs
 5: Maximum Volume Files
 6: Maximum Volume Bytes
 7: Recycle Flag
 8: Slot
 9: InChanger Flag
10: Volume Files
11: Pool
12: Volume from Pool
13: All Volumes from Pool
14: Enabled
15: RecyclePool
16: Done
Select parameter to modify (1-16): 4
Defined Pools:
 1: Default
 2: MondayPool
 3: TuesdayPool
 4: WednesdayPool
 5: ThursdayPool
 6: FridayPool
 7: CarterPool
Select the Pool (1-7): 5
snip
Enter MediaId or Volume name: 10
Updating Volume 1_Friday_Week_3
Current max jobs is: 4
Enter new Maximum Jobs: 4
New max jobs is: 4
Parameters to modify:
 1: Volume Status
 2: Volume Retention Period
 3: Volume Use Duration
 4: Maximum Volume Jobs
 5: Maximum Volume Files
 6: Maximum Volume Bytes
 7: Recycle Flag
 8: Slot
 9: InChanger Flag
10: Volume Files
11: Pool
12: Volume from Pool
13: All Volumes from Pool
14: Enabled
15: RecyclePool
16: Done
Select parameter to modify (1-16): 2
Defined Pools:
 1: Default
 2: MondayPool
 3: TuesdayPool
 4: WednesdayPool
 5: ThursdayPool
 6: FridayPool
 7: CarterPool
Select the Pool (1-7): 5
snip
Enter MediaId or Volume name: 10
Updating Volume 1_Friday_Week_3
Current retention period is: 19 days
Enter Volume Retention period: 19
New retention period is: 19 days
Parameters to modify:
 1: Volume Status
 2: Volume Retention Period
 3: Volume Use Duration
 4: Maximum Volume Jobs
 5: Maximum Volume Files
 6: Maximum Volume Bytes
 7: Recycle Flag
 8: Slot
 9: InChanger Flag
10: Volume Files
11: Pool
12: Volume from Pool
13: All Volumes from Pool
14: Enabled
15: RecyclePool
16: Done
Select parameter to modify (1-16): 16
Selection terminated.





Re: [Bacula-users] Tape devices and tapeinfo devices (Problems with the list resolved :)

2007-11-08 Thread Arno Lehmann
Hi,

08.11.2007 19:18,, Augusto Camarotti wrote::
 By the way, I just decided to use my gmail account for subscribing to 
 bacula-users. It's much more reliable than my organization e-mail, which was 
 giving me so much trouble. :D
 
 I have a Seagate DAT72 tape drive.
 In my system it's referenced as /dev/st0.
 So I configured my bacula-sd.conf this way:
 
 Device {
   Name = DDS-72#
   Media Type = DDS-72
   Archive Device = /dev/st0
   AutomaticMount = yes;   # when device opened, read it
   AlwaysOpen = no;
   RemovableMedia = yes;
   RandomAccess = no;
 # Changer Command = /etc/bacula/mtx-changer %c %o %S %a %d
   Changer Device = /dev/st0

Try /dev/sg0 or whatever raw SCSI device your tape drive is.
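If you are not sure which sg node belongs to the drive, sg_map from the
sg3_utils package prints the mapping (assuming it is installed), and
cat /proc/scsi/scsi lists the attached SCSI devices:

  sg_map -st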

   AutoChanger = no
   # Enable the Alert command only if you have the mtx package loaded
   Alert Command = sh -c 'tapeinfo -f %c |grep TapeAlert|cat'
 # If you have smartctl, enable this, it has more info than tapeinfo
 # Alert Command = sh -c 'smartctl -H -l error %c'
 }
 
 But it was giving these errors using tapeinfo with that device (/dev/st0):
 
 mtx: Request Sense: Long Report=yes
 mtx: Request Sense: Valid Residual=no
 mtx: Request Sense: Error Code=0 (Unknown?!)
 mtx: Request Sense: Sense Key=No Sense
 mtx: Request Sense: FileMark=no
 mtx: Request Sense: EOM=no
 mtx: Request Sense: ILI=no
 mtx: Request Sense: Additional Sense Code = 00
 mtx: Request Sense: Additional Sense Qualifier = 00
 mtx: Request Sense: BPV=no
 mtx: Request Sense: Error in CDB=no
 mtx: Request Sense: SKSV=no
 INQUIRY Command Failed
 
 And this was showing after each job of Bacula. Then I discovered that my 
 tape device has another name, /dev/sg4, which shows this when given to 
 tapeinfo:
 
 Product Type: Tape Drive
 Vendor ID: 'SEAGATE '
 Product ID: 'DATDAT72-052'
 Revision: 'A060'
 Attached Changer: No
 SerialNumber: 'HV07G3D'
 MinBlock:1
 MaxBlock:16777215
 SCSI ID: 6
 SCSI LUN: 0
 Ready: yes
 BufferedMode: yes
 Medium Type: 0x35
 Density Code: 0x47
 BlockSize: 0
 DataCompEnabled: yes
 DataCompCapable: yes
 DataDeCompEnabled: yes
 CompType: 0x20
 DeCompType: 0x20
 BOP: yes
 Block Position: 0
 
 So I change my storage configuration to that and I`ll be good?
 
 Device {
   Name = DDS-72#
   Media Type = DDS-72
   Archive Device = /dev/st0
   AutomaticMount = yes;   # when device opened, read it
   AlwaysOpen = no;
   RemovableMedia = yes;
   RandomAccess = no;
 # Changer Command = /etc/bacula/mtx-changer %c %o %S %a %d
   Changer Device = /dev/sg4

See above - this seems to be your tape drive, so it should be ok.

   AutoChanger = no
   # Enable the Alert command only if you have the mtx package loaded
   Alert Command = sh -c 'tapeinfo -f %c |grep TapeAlert|cat'
 # If you have smartctl, enable this, it has more info than tapeinfo
 # Alert Command = sh -c 'smartctl -H -l error %c'
 }
 
 Am I doing the correct thing? Why is that? Am I supposed to use /dev/sg4 
 as my tape device? Because I'm having problems with it, which I'll 
 explain in my next e-mail.

Yes. Because tapeinfo needs the SCSI generic device, not the tape 
driver to talk to. No. And sure. :-)

Arno

 Regards,
 
 Augusto Camarotti
 
 
 
 

-- 
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de



[Bacula-users] Prevent Bacula from writing files to Catalog DB?

2007-11-08 Thread Shon Stephens
I remember reading somewhere that its possible to configure a Job so
that the files and attributes are not added to the Catalog DB.
However, I can't find this documentation now.

Basically, I have a client with millions of files, and don't want all
that going into the database. My understanding is that if a backup is
done without saving this information to the Catalog, it can only
be restored in a very specific way, and I won't be able to extract
files by name.

Can anyone clue me in re: this type of backup?

Thanks,
Shon



Re: [Bacula-users] continuation tapes

2007-11-08 Thread Arno Lehmann
Hi,

08.11.2007 22:05,, Craig White wrote::
 On Fri, 2007-11-02 at 23:05 +0100, Arno Lehmann wrote:
 Hi,

 02.11.2007 21:48,, Craig White wrote::
 On Fri, 2007-11-02 at 21:38 +0100, Arno Lehmann wrote:
 Hi,

 02.11.2007 21:26,, Craig White wrote::
 On Fri, 2007-11-02 at 15:08 -0400, John Drescher wrote:
 On 11/2/07, Craig White [EMAIL PROTECTED] wrote:
 I am a bit befuddled.

 I have a 'Full' backup which consists of 4 jobs but it extends onto 2
 tapes.

 the Pool permits 4 jobs
   Maximum Volume Jobs = 4
   Volume Retention = 19d
   Volume Use Duration = 4d

 I believe Maximum Volume Jobs here will allow at most 4 jobs to a
 single volume and since there are only 2 jobs written to the last
 volume it can hold 2 more.
 
 I get that. I can't change the number to 2 because then it would stop
 after the second job and force me into a new tape leaving half the first
 tape filled and not enough room on second tape to complete the next 2
 jobs (the second being a standard BackupCatalog job).

 I find it hard to believe that I am the only one who is spanning tapes
 on a single Pool and wanting to have the last tape marked 'Used' instead
 of 'Append'
 True... but as it is, many users try to avoid problems they might run 
 into when jobs are added, or don't run for whatever reason, so they 
 don't limit the volume use by the number of jobs, but rather by the 
 time it can be used after an initial write ;-)

 So my suggestion is to use Volume Use Duration instead of the job 
 limit. Depending on your needs, limiting by size might also be useful 
 - Maximum Volume Bytes is the corresponding option.

 Does that help?
 
 in that I have set (as noted above) Volume Use Duration to 4d, if it
 marks the set as 'Used' on Tuesday (Friday, Saturday, Sunday, Monday)
 constituting the 4 days and then Bacula respects the 'Volume Retention',
 I am good to go.
 In my experience, Bacula does handle retention periods correctly.

 As you might expect, I am concerned that the next usage of ThursdayPool,
 that it asks for this Volume because it is still marked 'Append' and I
 would want it to ultimately ask for the one of the ThursdayPool volumes
 whose 'Volume Retention' has expired.
 Let's say your volume is marked as used on Thursday, and it's been 
 last written to on Thursday, too, and you've got a retention time of 
 six days, that volume could be recycled next weeks Thursday.

 If you swapped the volumes and updated the catalog accordingly, Bacula 
 will prefer volumes in the autochanger, i.e. when it needs a Thursday 
 volume, it will prune, recycle, and use that volume automatically.

 I think this is what you need.
 
 OK - here it is the next Thursday.
 
 Tape with 2 jobs from last Friday is not now, nor was ever marked 'Used'
 - it is still marked 'Append'
 
 # bconsole
 Connecting to Director linserv1.mullenpr.com:9101
 1000 OK: LINSERV1 Version: 2.2.5 (09 October 2007)
 Enter a period to cancel a command.
 *list media pool=ThursdayPool
 Automatically selected Catalog: MyCatalog
 Using Catalog MyCatalog
 +---------+-------------------+-----------+---------+-----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
 | MediaId | VolumeName        | VolStatus | Enabled | VolBytes        | VolFiles | VolRetention | Recycle | Slot | InChanger | MediaType | LastWritten         |
 +---------+-------------------+-----------+---------+-----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
 |      10 | 1_Friday_Week_3   | Append    |       1 |  39,130,656,768 |       41 |    1,641,600 |       1 |    0 |         0 | LTO       | 2007-11-02 10:50:33 |
 |      11 | 1_Thursday_Week_3 | Full      |       1 | 130,561,191,936 |      132 |    1,641,600 |       1 |    0 |         0 | LTO       | 2007-11-02 03:30:19 |
 |      16 | 1_Friday_Week_1   | Append    |       1 |          64,512 |        0 |    1,641,600 |       1 |    0 |         0 | LTO       | 0000-00-00 00:00:00 |
 +---------+-------------------+-----------+---------+-----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
 
 Pool {
   Name = ThursdayPool
   Pool Type = Backup
   Recycle = yes
   AutoPrune = yes
   Volume Retention = 19d
   Volume Use Duration = 4d
   Maximum Volume Jobs = 4
 }
 
 Since 1_Friday_Week_3 was last written 2007-11-02 and more than 4d have
 passed, it's pretty clear that if Bacula was going to mark that volume
 'Used', it would have done so by now. It hasn't

It will only be marked Used when Bacula looks for a usable volume 
and considers it. Bacula does not have an internal marker to indicate 
that, on 2007-11-06 (or whatever the result of FirstWritten+4d would 
be), this tape has to be marked as used. Instead, only when considering 
this as a volume to use does it encounter the state of 
(FirstWritten + VolUseDuration < now(), if that makes sense...) and set 
the status accordingly.
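If you don't want to wait for that, you can also force the state from
bconsole yourself, e.g. (volume name taken from the listing above):

  update volume=1_Friday_Week_3 volstatus=Used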

 
 Short of marking the 

Re: [Bacula-users] differential backups not reqiered?

2007-11-08 Thread Mike Seda
Hi All,
I wish to save space on my tapes by changing backup levels in my 
Monthly Schedule resource.

My current Monthly Schedule resource is:
Schedule {
  Name = Monthly
  Run = Level=Full Pool=Monthly 1st fri at 23:05
  Run = Level=Full Pool=Weekly 2nd-5th fri at 23:05
  Run = Level=Differential Pool=Weekly sat-thu at 23:05
}

I wish to change the aforementioned resource to:
Schedule {
  Name = Monthly
  Run = Level=Full Pool=Monthly 1st fri at 23:05
  Run = Level=Differential Pool=Weekly 2nd-5th fri at 23:05
  Run = Level=Incremental Pool=Daily sat-thu at 23:05
}

Btw, with my proposed changes to the backup levels, I plan to use the 
following Volume Retention times while having Recycle = yes and 
AutoPrune = yes :
Monthly = 372 days
Weekly = 37 days
Daily = 14 days

Do the proposed Schedule and Volume Retention times seem reasonable? I 
added 7 days to each time as a fudge-factor. Any critiques and/or advice 
is welcome.
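
In Pool terms, those retention times would look roughly like this (a sketch 
only - Storage, Label Format and whatever other directives are already in use 
are left out):

Pool {
  Name = Monthly
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 372 days
}

Pool {
  Name = Weekly
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 37 days
}

Pool {
  Name = Daily
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 14 days
}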

Also, I have read that Bacula will not back up files that have a ctime 
less than the last backup (of any level, i.e. Full, Differential, 
Incremental). Is that still true with Bacula 2.0.1? If so, is there 
any way to circumvent this? Basically, will changing the 2nd, 3rd, 4th, 
and 5th Friday backups from Full to Differentials put my data at more of 
a risk if my users perform moves and untars? I feel that if I tell my 
users (one of whom is my boss) to touch all files/dirs after a move or 
untar, he will then tell me to go get a quote for some new backup 
software. This is not an option for me as I am a big-time open-source 
evangelist, and Bacula has served me well thus far. Please advise.

Thx,
Mike


Kern Sibbald wrote:
 On Friday 03 November 2006 11:27, Jaap Stolk wrote:
   
 On 11/3/06, Jaap Stolk [EMAIL PROTECTED] wrote:
 
 I was wondering if i could do without the differential backups 
   
 altogether ?
   
 (in reply to my own post)
 I did some more reading and found that the differential backup only
  looks at the file date/time, exactly like the incremental backup. So
 this is no reason to use a differential backup.
 

 Except that doing Differential backups allows you to restore faster and to 
 recycle your Incremental backups faster.

   
 The other thing is a differential backup would reduce the number of
 incremental backups i need to scan when restoring files, but since i
 backup to a file this doesn't involve manual tape changes, so this is
 also not a problem in my case.

 I think i can detect files that are missed in the incremental backup
 (because of an old file timestamp) using a verify job, and either
  manually or automatically touch these files, so they will be backed up
 in the next incremental backup.
 

 That's an interesting idea ...

   
  so i have no further questions, unless someone sees a big problem in my 
  setup.
   
 Kind regards,
 Jaap Stolk



Re: [Bacula-users] using clone jobs

2007-11-08 Thread Mike Seda
All,
I will soon purchase a second (identical) tape drive for my autochanger, 
which will be used primarily for cloning jobs.

After reading the documentation, it seems that the following (with 
site-specific modifications of job-name and media-type) is all that 
needs to be added to each Job (or perhaps JobDefs) resource to make 
cloning happen:
run = "job-name level=%l since=\"%s\" storage=media-type"

Is this correct?

I suppose that I can have a job cloned to a different pool using the 
storage keyword in that line, but I would much rather say:
run = "job-name level=%l since=\"%s\" *pool*=*pool-name*"

Are there any plans to implement pool as a cloning keyword in the run 
directive? I think that it would be much cleaner to refer to the 
pool-name as opposed to media-type since some folks (like yours truly) 
have one media-type, but multiple pools.

Also, how does Bacula handle cloned jobs when Spool Data = yes? 
Basically, does Bacula then spool once or twice?

Please advise.

Cheers,
Mike


Michael Proto wrote:

 I neglected to mention that I have a 2nd tape drive that I plan to use
 with this clone job, so there shouldn't be an issue with both running
 simultaneously.

 Michael Proto wrote:
   
 As part of my monthly backup cycle, I am required to take 2 full backups
 to 2 different pools at the beginning of each month, one to store
 locally and one to store off-site. Up until now I have been running 2
 independent schedules to take care of these:

 bacula-dir.conf:
 Schedule {
   Name = WeeklyCycle
   Run = Level=Full Pool=MonthlyOffsite Priority=9 1st sat at 1:05
   Run = Level=Full Pool=Monthly Priority=9 1st sun at 1:05
   Run = Level=Incremental Pool=Daily FullPool=Monthly mon-sat at 1:05

 ...

 I see in the Director's Job resource that there is a Run directive
 (not to be confused with the Run directive in the Schedule resource
 above) that can be used to clone jobs. If possible I'd like to use this
 to clone the Level=Full Pool=Monthly backup job shown above to the
 MonthlyOffsite pool. Unfortunately I'm unable to wrap my head around
 this directive based on the single example given in the documentation.

 Here's a sample of one of my Job and JobDef resources:

 JobDefs {
   Name = DefaultJob
   Type = Backup
   Level = Incremental
   FileSet = Full Set
   Schedule = WeeklyCycle
   Storage = ADIC-Library1
   Messages = Standard
   Pool = Daily
   Full Backup Pool = Monthly
   Priority = 10
   Maximum Concurrent Jobs = 1
   Spool Data = yes
 }

 Job {
   Name = archive2
   JobDefs = DefaultJob
   Client = archive2-fd
   FileSet = archive2
 }


 I imagine I'd need a line like this in the archive2 job:

  Run = "archive2 level=Full since=\"%s\" pool=MonthlyOffsite"

 The way I see it, if I put that in the archive2 job resource it will run
 every time a scheduled (incremental or full) job runs for this Job
 resource. Is there a way to have a Run job ONLY run when the other
 monthly full backup is scheduled to run? Or would I need to create
 another Job resource for all of my clients, under another schedule, and
 remove these full backups from my WeeklyCycle schedule?
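
One way to get the clone to fire only with the monthly full is the second 
option mentioned there: give the monthly full its own Schedule and Job, and 
attach the Run clone line only to that Job - a rough, untested sketch 
(resource names are made up, and whether pool= is accepted in the clone line 
is exactly the open question above; the manual's example uses storage=):

Schedule {
  Name = MonthlyFullCycle
  Run = Level=Full Pool=Monthly Priority=9 1st sun at 1:05
}

Job {
  Name = archive2-monthly
  JobDefs = DefaultJob
  Client = archive2-fd
  FileSet = archive2
  Level = Full
  Pool = Monthly
  Schedule = MonthlyFullCycle
  # clone this job's data to the off-site pool when it runs
  Run = "archive2-monthly level=Full since=\"%s\" pool=MonthlyOffsite"
}

with the corresponding Full lines then removed from WeeklyCycle (or the 
regular jobs pointed at a schedule without them), so nothing runs twice.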



 Thanks,
 Michael Proto
 


 - --
 Michael Proto| SecureWorks
 Unix Administrator   |
 PGP ID: 5D575BBE | [EMAIL PROTECTED]
 ***
   



Re: [Bacula-users] differential backups not required?

2007-11-08 Thread Kern Sibbald
On Friday 09 November 2007 00:59, Mike Seda wrote:
 Hi All,
 I wish to save space on my tapes by changing backup levels in my
 Monthly Schedule resource.

 My current Monthly Schedule resource is:
 Schedule {
   Name = Monthly
   Run = Level=Full Pool=Monthly 1st fri at 23:05
   Run = Level=Full Pool=Weekly 2nd-5th fri at 23:05
   Run = Level=Differential Pool=Weekly sat-thu at 23:05
 }

 I wish to change the aforementioned resource to:
 Schedule {
   Name = Monthly
   Run = Level=Full Pool=Monthly 1st fri at 23:05
   Run = Level=Differential Pool=Weekly 2nd-5th fri at 23:05
   Run = Level=Incremental Pool=Daily sat-thu at 23:05
 }

 Btw, with my proposed changes to the backup levels, I plan to use the
 following Volume Retention times while having Recycle = yes and
 AutoPrune = yes :
 Monthly = 372 days
 Weekly = 37 days
 Daily = 14 days

 Do the proposed Schedule and Volume Retention times seem reasonable? I
 added 7 days to each time as a fudge-factor. Any critiques and/or advice
 is welcome.

 Also, I have read that Bacula will not back up files that have a ctime
 less than the last backup (of any level, i.e. Full, Differential,
 Incremental). Is that still true with Bacula 2.0.1? 

That is true for Differential and Incremental jobs, but not for Full.

 If so, is there any way to circumvent this? 

No, other than touching the files or doing a Full backup. If it is critical 
data, you could probably do some tricks with Verify jobs, which can detect 
these changes, but I don't quite see how.
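
A crude way to do the touching, for what it's worth (paths here are only 
examples):

# GNU tar: -m drops the stored modification times, so extracted files
# get the current time and the next Incremental picks them up
tar -xmf archive.tar -C /data/project

# or, after a plain move or untar, bump the timestamps by hand
find /data/project -exec touch {} +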

 Basically, will changing the 2nd, 3rd, 4th, 
 and 5th Friday backups from Full to Differentials put my data at more of
 a risk if my users perform moves and untars? 

Depending on the dates on the files that are moved or untarred, they may not be 
backed up by a Differential backup.

I'll let others respond to the other questions.

 I feel that if I tell my 
 users (one of whom is my boss) to touch all files/dirs after a move or
 untar, he will then tell me to go get a quote for some new backup
 software. This is not an option for me as I am a big-time open-source
 evangelist, and Bacula has served me well thus far. Please advise.

 Thx,
 Mike

 Kern Sibbald wrote:
  On Friday 03 November 2006 11:27, Jaap Stolk wrote:
  On 11/3/06, Jaap Stolk [EMAIL PROTECTED] wrote:
  I was wondering if i could do without the differential backups
 
  altogether ?
 
  (in reply to my own post)
  I did some more reading and found that the differential backup only
   looks at the file date/time, exactly like the incremental backup. So
  this is no reason to use a differential backup.
 
  Except that doing Differential backups allows you to restore faster and
  to recycle your Incremental backups faster.
 
  The other thing is a differential backup would reduce the number of
  incremental backups i need to scan when restoring files, but since i
  backup to a file this doesn't involve manual tape changes, so this is
  also not a problem in my case.
 
  I think i can detect files that are missed in the incremental backup
  (because of an old file timestamp) using a verify job, and either
   manually or automatically touch these files, so they will be backed up
  in the next incremental backup.
 
  That's an interesting idea ...
 
   so i have no further questions, unless someone sees a big problem in my
   setup.
 
  Kind regards,
  Jaap Stolk
 
  


Re: [Bacula-users] Best backup strategy with bacula and autochanger

2007-11-08 Thread S. Kremer
Hi Bob
 
 As another pointed out... you'll want to get a higher capacity library
 in place before you hit 6TB of data of course.  Depending on the nature
 of what's on that NAS you may find that you have too much data long
 before you hit 6TB too (i.e. if the data is already compressed a lot
 you'll not  be able to compress it further on the tape drive).

You are right. Lots of the data on the NAS system is already compressed because 
most files are videos in MPEG format.
At this time the size of the data is nearly 2TB. I configured the NAS system 
with several partitions: 3 of about ~2TB, 1 of about ~500GB and ~10GB for the 
system partitions.

 
 I'm big on LTO style hardware for a variety of reasons, but that's not
 what you asked about...
 
  But in the meantime... I never mess around with differentials.  I only
 do incrementals and fulls.  You may find that if you do full backups
 less often than once per week the inconvenience will be quite a bit
 lower.  Plan on keeping the backups more than a month... Unfortunately,
 bacula doesn't do 21 day rotation cycles like other systems so you're
 stuck with either manual intervention or once per month fulls...  If you
  go that route, plan on having at least 2 full backups so you never run
 into a situation where a tape used in a full backup breaks and you lose
 everything...

In fact I would like to hold the backup data for more than one month, for 
archiving.
Manual intervention should be no problem, but I do not know how Bacula 
communicates this. Does Bacula send a mail message with a notice that one or 
more tapes have to be changed?
Well, if it is better I will plan only full and incremental backups.
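
Mount requests ("please load a new tape") are normally delivered through the 
operator destination of the Messages resource, so they can go out by mail - 
a minimal sketch, with the address as a placeholder and any existing 
mailcommand/operatorcommand lines kept as they are:

Messages {
  Name = Standard
  # intervention/mount requests go to the operator address
  operator = backup-admin@example.com = mount
  mail = backup-admin@example.com = all, !skipped
  console = all, !skipped, !saved
}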

Stefan



[Bacula-users] duplicate backup to tape

2007-11-08 Thread Colapinto Giovanni
Hi to all.

I've searched through the docs and the mailing list archive, but I've not found
anything, so I suppose I'm a dummy ;-)

I have the following problem: at the moment I use two backup programs:
Backup Exec at the Torino site and Bacula at the Milano site. Bacula currently
works only on disk; Backup Exec works on disk and duplicates the backup to
tape.

Reading the manual I've found that Bacula can do duplication too, but it is
called migration. The only thing I don't understand is the following
statement: "As part of this process, the File catalog records
associated with the first backup job are purged. In other words,
Migration moves Bacula Job data from one Volume to another by reading
the Job data from the Volume it is stored on, writing it to a different
Volume in a different Pool, and then purging the database records for the
first Job."
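
For reference, the setup that paragraph of the manual is describing looks 
roughly like this - a sketch only, untested, with pool, job and client names 
as placeholders:

# the source pool says where migrated data should go
Pool {
  Name = DiskPool
  Pool Type = Backup
  Next Pool = TapePool
}

# the migration job reads finished jobs from DiskPool, writes them to
# DiskPool's Next Pool, and then purges the original job's records
Job {
  Name = MigrateToTape
  Type = Migrate
  Pool = DiskPool
  Selection Type = Job
  Selection Pattern = ".*"
  Client = milano-fd
  FileSet = "Full Set"
  Messages = Standard
}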

Why does it purge the File catalog records of the first job? I want to use
this method only to have another copy of the backup in a safe location; why
does Bacula purge the disk backup? Is there a way to tell Bacula not to
purge the disk backup?

Thanx

-- 
-
GIOVANNI COLAPINTO
Resp. Logistica e Sistemi Informativi
Assioma.net
Via Quintino Sella 19
20094 Corsico (MI)
Telefono: 02/45055818
Cellulare: 3404945829
Email: [EMAIL PROTECTED]
-
