Re: [Bacula-users] Why use multiple pools with disk-based backup?

2007-11-05 Thread Michael Short
I used to have different names for full backups but I decided on a
single pool for simplicity. It is easy enough to tell which volumes
are used for full and incremental backups.
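
For what it's worth, the per-client pool naming that makes the "rm client1-*" trick (quoted below) possible can be sketched like this. The names are illustrative, and with a plain LabelFormat string Bacula appends a volume number when it labels:

```conf
# One Pool resource per client in bacula-dir.conf (sketch).
# Volumes then share a per-client prefix on disk, so removing a client's
# data is a single glob (plus deleting the volumes from the catalog).
Pool {
  Name = client1-pool
  Pool Type = Backup
  LabelFormat = "client1-"   # volumes label as client1-<number>
}
```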

I meant for my last post to go to the list as well.

Sincerely,
-Michael


On Nov 5, 2007 7:20 PM, Ben Beuchler <[EMAIL PROTECTED]> wrote:
> That makes good sense.  Do you also have a pool for inc/diff/full for
> each client?  Or just a single pool per client?
>
> Thanks!
>
> -Ben
>
>
> On 11/5/07, Michael Short <[EMAIL PROTECTED]> wrote:
> > I would suggest using a separate pool for each client, just because it
> > makes volume management easier. For example, if I wanted to delete a
> > client and all of its data, I could just issue: rm client1-*
> >
> > Sincerely,
> > -Michael
> >
>

-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now >> http://get.splunk.com/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Problems with the list

2007-11-05 Thread Chris Hoogendyk
I want to apologize to Kern and to the list for being mistaken and for 
pushing my mistaken point of view too far.

It turns out that greylisting can cause problems with list servers, and 
sourceforge, which hosts the bacula list, does do callbacks to verify 
email addresses. These callbacks and greylisting can get tangled up in 
one another and result in difficulties with messages going through. So 
it probably is worthwhile to whitelist sourceforge.

David Romerstein sent me the link to the sourceforge documentation on 
exactly what they do:

 
http://sourceforge.net/docman/display_doc.php?docid=6695&group_id=1#et_sender_validation

With that information, I was able to find the interaction in my mail 
logs. I must say that just one of my mail servers generates over a 
million lines of logging every day. So reading log files is a pattern 
matching exercise. The interactions for one mail message to the bacula 
list were spread out over an hour of time and separated by thousands of 
lines of other log messages. What I found is that my server contacts the 
sourceforge server indicating it has a message to deliver. Shortly afterward, 
the sourceforge server contacts my server with a callback attempting to 
validate a null sender going back to my email address. If either the null 
sender or my email address fails, then sourceforge will fail my message. 
What happened was that my server greylisted the sourceforge callback, which 
appeared again in the logs as a greylisting of my own message. Then, although 
my greylisting period was only 2 minutes, sourceforge took more than an hour 
to try again, which is likely related to how busy the sourceforge server is. 
Anyway, when they tried the callback again, it was accepted, and my message 
went through.

So, there is a lot of dancing back and forth there. If either server had 
some sort of misconfiguration, the interaction could fail, and the 
message would not go through. Because of this, milter-greylist has an 
option "delayedreject" that will wait to reject messages until the data 
phase (instead of the rcpt phase), just so that it doesn't get tripped 
up by callbacks. If you use milter-greylist, this option is described in 
`man greylist.conf`.
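
If you use milter-greylist, the relevant fragment of greylist.conf might look roughly like this (a sketch from memory; check `man greylist.conf` for your version's exact syntax):

```conf
# /etc/mail/greylist.conf (fragment, illustrative)
delayedreject                    # reject at the DATA phase instead of RCPT,
                                 # so sender-verification callbacks survive
racl greylist default delay 2m   # a 2-minute greylisting window, as above
```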

In one other instance, I saw two callbacks -- one to confirm the null 
sender and my address, and another to confirm that I had a postmaster 
address. I'm not sure why that didn't show up in the other example; I did 
search for it.

So, again, let me apologize for pushing a mistaken point of view, and let 
me offer the information in this message as a contribution to smooth 
communication among the list participants.

And thanks to David for pointing out the information on sourceforge.



---

Chris Hoogendyk

-
   O__   Systems Administrator
  c/ /'_ --- Biology & Geology Departments
 (*) \(*) -- 140 Morrill Science Center
~~ - University of Massachusetts, Amherst 

<[EMAIL PROTECTED]>

--- 

Erdös 4





[Bacula-users] FW: backing up small, dense fileset

2007-11-05 Thread Ron Cormier

Hi,
I just wanted to share an experience I had setting up a backup this weekend.

Quick overview:
The file daemon on my Windows server was overloading the storage daemon on
my Ubuntu machine.  This happened when trying to back up many small image
files with compression turned on.  Turning off compression fixed it.

Details:
Basically I have a website on my Windows web server which contains many
small image files (several thousand) in addition to several text files.
Originally I set up the fileset to back up the root of the website, but the
job would hang and eventually die with errors like the following:

Network error with FD during Backup: ERR=Connection timed out
Network send error to SD. ERR=Input/output error

I mucked around with trying different values for heartbeat interval, maximum
network buffer size, and kernel buffer sizes to no avail.

Debugging with Wireshark showed that the storage daemon, while communicating
with the file daemon on the Windows server, was advertising a TCP receive
window size of zero (TCP ZeroWindow).  The following commands yielded more
information:

netstat -p -c -n -t > stat.txt
grep bacula-sd stat.txt > sd.txt
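
The two commands above can also be folded into one pass. A sketch, assuming net-tools netstat output where column 2 is Recv-Q, column 3 is Send-Q, and column 5 is the foreign address (the sample lines below are fabricated stand-ins for a real capture):

```shell
# Stand-in for a real capture taken with: netstat -p -c -n -t > stat.txt
printf '%s\n' \
  'tcp 0 524288 192.168.1.5:9103 192.168.1.5:41000 ESTABLISHED 1234/bacula-sd' \
  'tcp 65535 0 192.168.1.5:9103 203.0.113.7:50000 ESTABLISHED 1234/bacula-sd' \
  'tcp 0 0 192.168.1.5:22 203.0.113.9:51515 ESTABLISHED 999/sshd' > stat.txt

# Keep only the storage daemon's sockets and their queue depths.  A growing
# send-q on one socket followed by a full recv-q on the other is the pattern
# described in this message.
grep bacula-sd stat.txt | awk '{ print "recvq=" $2, "sendq=" $3, "peer=" $5 }'
```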

The ZeroWindow showed itself right about when the storage daemon was trying
to back up the ~50th thumbnail.  Netstat showed two ports open by the
storage daemon: one from the LAN IP address to the WAN IP address, and
another from the LAN IP address to the remote server.  What was happening is
that the send-q between the LAN and the WAN (i.e. the connection between the
storage daemon and itself) kept growing until it was full.  Once it was
full, the recv-q between the LAN and the remote server would fill up in
turn, and the storage daemon would publish the TCP ZeroWindow.  So the
bottleneck seemed to be the storage daemon processing these small files...
it couldn't keep up.
 
Finally I tried splitting the fileset to use two different include resources
within the fileset, one for the thumbnails, the other for the rest of the
website.  I turned OFF compression for the include resource that held the
thumbnails.  I'm happy to say the backup has run successfully since.
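
For reference, the split might look roughly like this in the director's configuration. The paths and names here are invented, since the real fileset isn't shown, and note that in a Bacula Include the first matching Options block wins, so the exclude comes first:

```conf
FileSet {
  Name = "WebSite"
  Include {
    Options { exclude = yes; wilddir = "C:/website/images" }  # keep images out of this Include
    Options { signature = MD5; compression = GZIP }           # everything else, compressed
    File = "C:/website"
  }
  Include {
    Options { signature = MD5 }        # no compression line = compression off
    File = "C:/website/images"         # the thousands of small thumbnails
  }
}
```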

I haven't debugged many network problems, so it was a lot of trial and error
for me.  I suspect the root cause comes down to slow or insufficient
resources on my Ubuntu machine, since it is a virtual host.

Hope this helps the next person.  Thanks to any developers/contributors for
the great software.
Ron

Ron Cormier
Communicate Solutions




Re: [Bacula-users] Ignoring fileset changes

2007-11-05 Thread Michael Short
I have some experience with this on my own systems. Make sure the
fileset name has not been changed. Also make sure you physically
restart Bacula rather than reload it, as I've found that reload doesn't
always work the way I think it should.

Sincerely,
-Michael



[Bacula-users] Ignoring fileset changes

2007-11-05 Thread Lucas B. Cohen
Hi,

After fine-tuning the definition of a Fileset resource (adding a directory
exclusion), I neglected to enable the 'Ignore Fileset Changes' setting.
Obviously, the next corresponding incremental backup got promoted to a full
one, which I cancelled.

However, enabling 'Ignore Fileset Changes' the next day did not prevent the
backup job from being promoted to a full one again. Even less intuitively,
reverting the fileset definition to its original state did not stop the
backup job from being promoted either. The latter remained true whether or
not 'Ignore Fileset Changes' was enabled.

All initiated full backups were cancelled. The corresponding job resource
did not have its 'rerun failed levels' setting set to 'yes'.

Did some sort of 'fileset has changed' flag get set in a data structure
somewhere, that I could manually change to continue running incremental
backups?
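
For what it's worth, I believe the director stores an MD5 digest per fileset revision in the catalog's FileSet table, and a changed digest is what drives the upgrade decision. Something like this query (MySQL syntax, table and column names as in the 2.x catalog schema, fileset name hypothetical) should show whether new revisions were recorded:

```sql
-- One row per recorded fileset revision; a new MD5 means Bacula saw a change.
SELECT FileSetId, FileSet, MD5, CreateTime
FROM FileSet
WHERE FileSet = 'MyFileSet'   -- hypothetical fileset name
ORDER BY CreateTime;
```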

Director version is Debian's 2.0.3-4.

LBC





[Bacula-users] Why use multiple pools with disk-based backup?

2007-11-05 Thread Ben Beuchler
I feel like I'm missing something:

If I'm doing all of my backups to disk, is there an advantage to
maintaining three distinct pools?

Thanks!

-Ben



Re: [Bacula-users] Media changer causing my server not to boot.

2007-11-05 Thread John Huttley

Thanks for your reply Mike,
I spent some time trying various options, such as changing the bus speed 
and ID (it's now ID 15) and channel.
The BIOS is set to disabled, though it is not used anyway; the BIOS is 
only invoked if a bootable device is detected.
I then reset each channel to its defaults (there is a function key for 
that), but that didn't help.

Still perplexed,

john

Michael Lewinger wrote:

Hi John,

The card you use has its own BIOS boot functions. You should check 
which options are relevant and either disable them, or disable 
"external storage" (or something of the sort) in your motherboard's BIOS.


Michael

On Nov 2, 2007 5:23 AM, John Huttley <[EMAIL PROTECTED]> wrote:


Hi,
I've acquired a delightful old storagetek L80 with 2 DLT-8000's in it.
It's using HVD-SCSI and I'm using an adaptec 3944AUWD card in my ML150
G3 server.
Its all very well, the card detects the drive and the media changer.

Unfortunately, when the media changer is plugged in, the server won't
boot. It does a few more biosy things, then hangs.
If I just have the drives, it's fine. But the changer, by itself
or with
a drive, will cause a hang.

I've worked around it by booting the server with the L80 powered off,
then later doing a scsi rescan (linux 2.6.22 kernel)

The L80 seems to work just fine, btw. Its doing a "label barcodes"
now.

I've been around since SCSI was SASI and never had this issue, so yes,
it's terminated correctly!

Any insight gratefully received.

Regards,

john








--
Michael Lewinger
MBR Computers
http://mbrcomp.co.il




Re: [Bacula-users] Tandberg LTO2 (420LTO) good choice for bacula?

2007-11-05 Thread Lucas B. Cohen

> Hi all,
> 
> I have a short question. I'm using bacula 2.2.4 on a Debian etch
> machine. Now I want to extend the backup-to-disk to backup-to-tape.
> 
> Is this tape drive (Tandberg LTO2 420LTO) a good choice, and known to work?
> Any hints or experience reports are welcome!

Hi Andreas,

I'm using a Tandberg Data internal half-height LTO2 drive with Bacula on
Linux 2.6, and I'm very satisfied with it (note that I have no point of
comparison, though). I've run the full 'btape test' and 'btape fill' tests
successfully. I can also mount this drive separately while Bacula is
already running, by modprobing the aic7xxx driver for my Adaptec SCSI 160
HBA, which would be useful with the external version.

One thing I've noted is that the Linux kernel does not seem to like the mt
'retension' command with this tape drive: mt systematically outputs an
'I/O error' message, and the kernel logs a 'Current: sense key: Illegal
Request' line followed by an 'Additional sense: Invalid field in cdb' one
when I try to use this functionality. It's the only downside I've seen, as
I've read more than one document that recommends the practice.
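
For the record, the failing invocation looks like the following (device node assumed; the kernel lines are the ones quoted above). My guess is that LTO drives manage tape tension themselves and simply reject the SCSI retension operation, in which case skipping it should be harmless:

```text
# mt -f /dev/nst0 retension
/dev/nst0: Input/output error

kernel: Current: sense key: Illegal Request
kernel: Additional sense: Invalid field in cdb
```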

Regards,

LBC





Re: [Bacula-users] Can't run concurrent jobs to the same resource

2007-11-05 Thread Elie Azar

Sorry for the previous reply; I clicked on the wrong "Reply To" button...

So, what you're suggesting is to set up large filesystems using LVM,
something like BLV01, BLV02, etc., each with several hard drives.

Then, in Bacula, set up a vchanger for each of these LVMs, and direct
jobs to those storage devices. That way we can have multiple jobs
simultaneously backing up to each LVM, right? I think that would work; I'll
try it.

Thanks,
Elie



Arno Lehmann wrote:
> 
> Hi,
> 
> I suppose this was meant for the list... otherwise, I'd have to charge
> you a fee ;-)
> 
> 05.11.2007 23:38, [EMAIL PROTECTED] wrote:
>> Hi Arno,
>> 
>> Thanks for the quick reply.
>> 
>> The intention is to have a single job per volume. What I would like
>> is for Bacula to pick up a new volume for the new job, even if it's
>> on the same storage device. I would imagine that it should be able
>> to backup multiple jobs to different volumes simultaneously.
> 
> It is, but...
> 
>> Unless of course there is an inherent limitation in Bacula that
>> prevents that; but I didn't think there was.
> 
> Indeed there is.
> 
> Bacula handles all storage devices like tape drives. And a tape drive 
> can only access one tape at any time.
> 
>> Do I need to change
>> something in my conf files to effect that? I'm not sure.
> 
> I'd suggest to set up a "virtual autochanger".
> 
> Basically, you use one set of volume files, and several storage 
> devices accessing this.
> 
> Bacula will automatically choose distinct volumes for use by different 
> devices. There are a number of mails and how-tos available in the list 
> archives - search for vchanger (I just notice you know that already...)
> 
> Alternatively, set up several file based storage daemons - most 
> commonly per client - and use these in your jobs.
> 
> This requires more configuration, but using a template that you process 
> with sed or awk, it's easy to create files to include in the main 
> configuration.
> 
> Hope this helps,
> 
> Arno
> 
>> Thanks, Elie
>> 
>> Arno Lehmann wrote:
>>> Hi,
>>> 
>>> 05.11.2007 23:08, Elie Azar wrote:
 Hi,
 
 I created an LVM disk, made up of 2x500GB hard drives, and I
 made the necessary changes in the bacula conf files to be able
 to send jobs to that new storage. Here are some of the
 configuration changes.
 
 My problem is that I cannot send multiple jobs to backup
 simultaneously. The first job starts, then I get an error on
 each subsequent job. I don't know if I'm missing something in
 my configuration, or I still need to do something to get the
 LVM disk properly installed in bacula, or something else... I'm
 not sure.
 
 I have the "Maximum Concurrent Jobs = 20" in the Storage and
 the Client directives, and I have it setup to 80 in the
 Director directive, in the bacula-dir.conf file; and also in
 the Storage directive in the bacula-sd.conf file.
 
 It seems to be requesting the same volume,
 BLVPool13-BLV01-V0003 in this case, which is being used by the
 previous job. I would expect it to be getting the next
 available volume. Is there something that I'm not seeing...
>>> Yup... see below.
>>> 
 Any help would be greatly appreciated.
 
 Thanks, Elie Azar
 
 
>>> ...
 # 13 day BLV pool definition
 Pool {
   Name = BLVPool13
   Pool Type = Backup
   Recycle = yes                # Bacula can automatically recycle Volumes
   AutoPrune = yes              # Prune expired volumes
   Volume Retention = 13 days   # 13 days
   Maximum Volume Jobs = 1      # one job per volume
 
>>> You allow only one job per Volume.
>>> 
 LabelFormat = "${Pool}-${MediaType}-V${NumVols:p/4/0/r}" }
>>> You either need another storage device, or have to allow more
>>> than one job per volume.
>>> 
 *mes
 05-Nov 13:38 coal-dir JobId 13667: Start Backup JobId 13667, Job=Linux2-Test1.2007-11-05_13.38.10
 05-Nov 13:38 coal-dir JobId 13667: There are no more Jobs associated with Volume "BLVPool13-BLV01-V0003". Marking it purged.
 05-Nov 13:38 coal-dir JobId 13667: All records pruned from Volume "BLVPool13-BLV01-V0003"; marking it "Purged"
 05-Nov 13:38 coal-dir JobId 13667: Recycled volume "BLVPool13-BLV01-V0003"
 05-Nov 13:38 coal-dir JobId 13667: Using Device "BLV01"
 05-Nov 13:38 coal-sd JobId 13667: Fatal error: Cannot recycle volume "BLVPool13-BLV01-V0003" on device "BLV01" (/backups/autofs/BLV01/bacula) because it is in use by another job.
>>> This error message is quite clear, I think... there was only this
>>>  volume that could be used, but it's in use currently, and so
>>> can't be recycled.
>>> 
>>> Arno
>>> 
>>> -- Arno Lehmann IT-Service Lehmann www.its-lehmann.de
>> 
>> 
> 
> -- 
> Arno Lehmann
> IT-Service Lehmann
> www.its-lehmann.de
> 

Re: [Bacula-users] Can't run concurrent jobs to the same resource

2007-11-05 Thread Arno Lehmann
Hi,

I suppose this was meant for the list... otherwise, I'd have to charge
you a fee ;-)

05.11.2007 23:38, [EMAIL PROTECTED] wrote:
> Hi Arno,
> 
> Thanks for the quick reply.
> 
> The intention is to have a single job per volume. What I would like
> is for Bacula to pick up a new volume for the new job, even if it's
> on the same storage device. I would imagine that it should be able
> to backup multiple jobs to different volumes simultaneously.

It is, but...

> Unless of course there is an inherent limitation in Bacula that
> prevents that; but I didn't think there was.

Indeed there is.

Bacula handles all storage devices like tape drives. And a tape drive 
can only access one tape at any time.

> Do I need to change
> something in my conf files to effect that? I'm not sure.

I'd suggest setting up a "virtual autochanger".

Basically, you use one set of volume files, and several storage 
devices accessing this.

Bacula will automatically choose distinct volumes for use by different 
devices. There are a number of mails and how-tos available in the list 
archives - search for vchanger (I just notice you know that already...)

Alternatively, set up several file based storage daemons - most 
commonly per client - and use these in your jobs.

This requires more configuration, but using a template that you process 
with sed or awk, it's easy to create files to include in the main 
configuration.
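
A minimal sketch of such a virtual autochanger in bacula-sd.conf might look like this (names and paths invented; the vchanger how-tos in the archives cover the real details):

```conf
Autochanger {
  Name = FileChanger
  Device = FileDev1, FileDev2       # both devices share one volume directory
  Changer Command = ""
  Changer Device = /dev/null
}
Device {
  Name = FileDev1
  Media Type = File
  Archive Device = /backups/volumes
  Autochanger = yes
  Label Media = yes
  Random Access = yes
  Automatic Mount = yes
  Removable Media = no
  Always Open = no
}
Device {
  Name = FileDev2                   # identical except for the name
  Media Type = File
  Archive Device = /backups/volumes
  Autochanger = yes
  Label Media = yes
  Random Access = yes
  Automatic Mount = yes
  Removable Media = no
  Always Open = no
}
```

Bacula will then pick distinct volumes for jobs running concurrently on the two devices.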

Hope this helps,

Arno

> Thanks, Elie
> 
> Arno Lehmann wrote:
>> Hi,
>> 
>> 05.11.2007 23:08, Elie Azar wrote:
>>> Hi,
>>> 
>>> I created an LVM disk, made up of 2x500GB hard drives, and I
>>> made the necessary changes in the bacula conf files to be able
>>> to send jobs to that new storage. Here are some of the
>>> configuration changes.
>>> 
>>> My problem is that I cannot send multiple jobs to backup
>>> simultaneously. The first job starts, then I get an error on
>>> each subsequent job. I don't know if I'm missing something in
>>> my configuration, or I still need to do something to get the
>>> LVM disk properly installed in bacula, or something else... I'm
>>> not sure.
>>> 
>>> I have the "Maximum Concurrent Jobs = 20" in the Storage and
>>> the Client directives, and I have it setup to 80 in the
>>> Director directive, in the bacula-dir.conf file; and also in
>>> the Storage directive in the bacula-sd.conf file.
>>> 
>>> It seems to be requesting the same volume,
>>> BLVPool13-BLV01-V0003 in this case, which is being used by the
>>> previous job. I would expect it to be getting the next
>>> available volume. Is there something that I'm not seeing...
>> Yup... see below.
>> 
>>> Any help would be greatly appreciated.
>>> 
>>> Thanks, Elie Azar
>>> 
>>> 
>> ...
>>> # 13 day BLV pool definition
>>> Pool {
>>>   Name = BLVPool13
>>>   Pool Type = Backup
>>>   Recycle = yes                # Bacula can automatically recycle Volumes
>>>   AutoPrune = yes              # Prune expired volumes
>>>   Volume Retention = 13 days   # 13 days
>>>   Maximum Volume Jobs = 1      # one job per volume
>>> 
>> You allow only one job per Volume.
>> 
>>> LabelFormat = "${Pool}-${MediaType}-V${NumVols:p/4/0/r}" }
>> You either need another storage device, or have to allow more
>> than one job per volume.
>> 
>>> *mes
>>> 05-Nov 13:38 coal-dir JobId 13667: Start Backup JobId 13667, Job=Linux2-Test1.2007-11-05_13.38.10
>>> 05-Nov 13:38 coal-dir JobId 13667: There are no more Jobs associated with Volume "BLVPool13-BLV01-V0003". Marking it purged.
>>> 05-Nov 13:38 coal-dir JobId 13667: All records pruned from Volume "BLVPool13-BLV01-V0003"; marking it "Purged"
>>> 05-Nov 13:38 coal-dir JobId 13667: Recycled volume "BLVPool13-BLV01-V0003"
>>> 05-Nov 13:38 coal-dir JobId 13667: Using Device "BLV01"
>>> 05-Nov 13:38 coal-sd JobId 13667: Fatal error: Cannot recycle volume "BLVPool13-BLV01-V0003" on device "BLV01" (/backups/autofs/BLV01/bacula) because it is in use by another job.
>> This error message is quite clear, I think... there was only this
>>  volume that could be used, but it's in use currently, and so
>> can't be recycled.
>> 
>> Arno
>> 
>> -- Arno Lehmann IT-Service Lehmann www.its-lehmann.de
> 
> 

-- 
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de



Re: [Bacula-users] Can't run concurrent jobs to the same resource

2007-11-05 Thread Arno Lehmann
Hi,

05.11.2007 23:08, Elie Azar wrote:
> Hi,
> 
> I created an LVM disk, made up of 2x500GB hard drives, and I made the 
> necessary changes in the bacula conf files to be able to send jobs to 
> that new storage. Here are some of the configuration changes.
> 
> My problem is that I cannot send multiple jobs to backup simultaneously. 
> The first job starts, then I get an error on each subsequent job. I 
> don't know if I'm missing something in my configuration, or I still need 
> to do something to get the LVM disk properly installed in bacula, or 
> something else... I'm not sure.
> 
> I have the "Maximum Concurrent Jobs = 20" in the Storage and the Client 
> directives, and I have it setup to 80 in the Director directive, in the 
> bacula-dir.conf file; and also in the Storage directive in the 
> bacula-sd.conf file.
> 
> It seems to be requesting the same volume, BLVPool13-BLV01-V0003 in this 
> case, which is being used by the previous job. I would expect it to be 
> getting the next available volume. Is there something that I'm not 
> seeing...

Yup... see below.

> Any help would be greatly appreciated.
> 
> Thanks,
> Elie Azar
> 
> 
...
> # 13 day BLV pool definition
> Pool {
>  Name = BLVPool13
>  Pool Type = Backup
>  Recycle = yes                # Bacula can automatically recycle Volumes
>  AutoPrune = yes              # Prune expired volumes
>  Volume Retention = 13 days   # 13 days
>  Maximum Volume Jobs = 1      # one job per volume

You allow only one job per Volume.

>  LabelFormat = "${Pool}-${MediaType}-V${NumVols:p/4/0/r}"
> }

You either need another storage device, or have to allow more than one 
job per volume.
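
Concretely, the pool-side fix could be as small as this (a sketch keeping the retention settings from the quoted resource; 0 means unlimited):

```conf
Pool {
  Name = BLVPool13
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 13 days
  Maximum Volume Jobs = 0      # was 1; 0 = no per-volume job limit
  LabelFormat = "${Pool}-${MediaType}-V${NumVols:p/4/0/r}"
}
```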

> *mes
> 05-Nov 13:38 coal-dir JobId 13667: Start Backup JobId 13667, 
> Job=Linux2-Test1.2007-11-05_13.38.10
> 05-Nov 13:38 coal-dir JobId 13667: There are no more Jobs associated 
> with Volume "BLVPool13-BLV01-V0003". Marking it purged.
> 05-Nov 13:38 coal-dir JobId 13667: All records pruned from Volume 
> "BLVPool13-BLV01-V0003"; marking it "Purged"
> 05-Nov 13:38 coal-dir JobId 13667: Recycled volume "BLVPool13-BLV01-V0003"
> 05-Nov 13:38 coal-dir JobId 13667: Using Device "BLV01"
> 05-Nov 13:38 coal-sd JobId 13667: Fatal error: Cannot recycle volume 
> "BLVPool13-BLV01-V0003" on device "BLV01" (/backups/autofs/BLV01/bacula) 
> because it is in use by another job.

This error message is quite clear, I think... there was only this 
volume that could be used, but it's in use currently, and so can't be 
recycled.

Arno

-- 
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de



Re: [Bacula-users] Media changer causing my server not to boot.

2007-11-05 Thread Michael Lewinger
Hi John,

The card you use has its own BIOS boot functions. You should check which
options are relevant and either disable them, or disable "external storage"
(or something of the sort) in your motherboard's BIOS.

Michael

On Nov 2, 2007 5:23 AM, John Huttley <[EMAIL PROTECTED]> wrote:

> Hi,
> I've acquired a delightful old storagetek L80 with 2 DLT-8000's in it.
> It's using HVD-SCSI and I'm using an adaptec 3944AUWD card in my ML150
> G3 server.
> Its all very well, the card detects the drive and the media changer.
>
> Unfortunately, when the media changer is plugged in, the server won't
> boot. It does a few more biosy things, then hangs.
> If I just have the drives, it's fine. But the changer, by itself or with
> a drive, will cause a hang.
>
> I've worked around it by booting the server with the L80 powered off,
> then later doing a scsi rescan (linux 2.6.22 kernel)
>
> The L80 seems to work just fine, btw. Its doing a "label barcodes" now.
>
> I've been around since SCSI was SASI and never had this issue, so yes,
> it's terminated correctly!
>
> Any insight greatfully received.
>
> Regards,
>
> john
>
>
>
>
>



-- 
Michael Lewinger
MBR Computers
http://mbrcomp.co.il


Re: [Bacula-users] bacula 2.2.5: some job never finish

2007-11-05 Thread Michael Lewinger
Hi Lise,

No, your server can certainly handle this. The process with ID 267 (maybe a
thread, maybe a full process, on UNIX) is stuck on a long-running query. I'm
not sure of the MySQL syntax for getting more details on that particular
query, but the answer would be inside it.

Cheers,

Michael

On Nov 1, 2007 1:34 PM, lise machetel <[EMAIL PROTECTED]> wrote:

> Hi Michael
>
> When the job is like this: (status dir)
> Running Jobs:
>  JobId Level   Name   Status
> ==
>  28817 Increme  BoingBackup.2007-10-31_23.58.42 is in unknown state i
>
> and the client says:
> *status client=boing-fd
> Connecting to Client boing-fd at boing.sequans.com:9102
> boing-fd Version: 2.2.5 (09 October 2007)  i686-pc-linux-gnu debian 3.1
> Daemon started 29-Oct-07 15:24, 3 Jobs run since started.
>  Heap: heap=622,592 smbytes=8,394 max_bytes=262,048 bufs=51 max_bufs=224
>  Sizeof: boffset_t=8 size_t=4 debug=1 trace=0
> Running Jobs:
> Director connected at: 01-Nov-07 12:26
> No Jobs running.
> 
> Terminated Jobs:
>  JobId  Level      Files     Bytes  Status   Finished         Name
> ==
>  28259  Incr         929   105.7 M  OK       17-Oct-07 06:08  BoingBackup
>  28310  Incr         699   120.3 M  OK       18-Oct-07 06:19  BoingBackup
>  28355  Incr          89   105.7 M  OK       19-Oct-07 05:53  BoingBackup
>  28456  Incr       1,005   192.5 M  OK       23-Oct-07 08:29  BoingBackup
>  28501  Incr          88   110.2 M  OK       24-Oct-07 06:38  BoingBackup
>  28546  Incr       1,400   127.4 M  OK       25-Oct-07 06:30  BoingBackup
>  28591  Incr         232   119.5 M  OK       26-Oct-07 10:43  BoingBackup
>  28673  Full     182,369   1.477 G  OK       29-Oct-07 16:04  BoingBackup
>  28723  Incr         571   209.3 M  OK       30-Oct-07 10:37  BoingBackup
>  28817  Incr     135,776   443.7 M  OK       01-Nov-07 08:21  BoingBackup
>
>
> show processlist shows:
>
> mysql> show processlist;
> +-----+--------+-----------+--------+---------+-------+--------------+------------------+
> | Id  | User   | Host      | db     | Command | Time  | State        | Info             |
> +-----+--------+-----------+--------+---------+-------+--------------+------------------+
> | 234 | bacula | localhost | bacula | Sleep   |   164 |              | NULL             |
> | 267 | bacula | localhost | bacula | Query   | 14398 | Sending data | INSERT INTO File (FileIndex, JobId, PathId, FilenameId, LStat, MD5) SELECT batch.FileIndex, batch.Job... |
> | 268 | root   | localhost | NULL   | Query   |     0 | NULL         | show processlist |
> +-----+--------+-----------+--------+---------+-------+--------------+------------------+
> 3 rows in set (0.00 sec)
>
>
> and top:
>   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
> 15166 mysql 18   0  124m  32m 5032 S 99.9 12.6 310:57.49 mysqld
>
> Maybe my server isn't powerful enough?
>
> Lise
>
> --
>
> > Date: Tue, 30 Oct 2007 23:17:38 +0200
> > From: [EMAIL PROTECTED]
> > To: [EMAIL PROTECTED]
> > Subject: Re: [Bacula-users] bacula 2.2.5: some job never finish
> > CC: bacula-users@lists.sourceforge.net
>
> >
> > Hi Lise,
> >
> > If this is a recurrent problem, there may be a problem with the query,
> > maybe related to characters (I doubt) or accessing a problematic file
> > (maybe...). Check the mysql server with the "SHOW PROCESSLIST"
> > command, maybe the stuck query will show in the info field.
> >
> > Cheers,
> >
> > Michael
> >
> > On 10/30/07, lise machetel <[EMAIL PROTECTED]> wrote:
> > >
> > >
> > >
> > > I use bacula 2.2.5 on a debian Etch server (recent update).
> > > clients are debian Etch or sarge with same version of bacula client
> > > MySql server is used - version 5.0 (Etch stable package)
> > >
> > > I had no error during the compilation.
> > >
> > > During backup, under bconsole, if I do a "status dir", I can see that
> the status of the client is "is in unknown state".
> > > If I do a status client=name-of-this-client, the client answers that
> there is no job running, and that the job finished successfully.
> > > Under a shell, I can see that MySQL takes 99% of the CPU,
> > > and a netstat -an shows:
> > > tcp 0 0 192.168.200.19:9101 192.168.200.19:43174 ESTABLISHED
> > > tcp 0 0 192.168.200.19:43174 192.168.200.19:9101 ESTABLISHED
> > > tcp 0 0 192.168.200.19:49926 192.168.200.19:9103 TIME_WAIT
> > > tcp 9 0 192.168.200.19:41945 192.168.200.19:9103 CLOSE_WAIT
> > > tcp 0 0 192.168.200.19:56060 192.168.200.215:9102 TIME_WAIT
> > > tcp 1 0 192.168.200.19:55688 192.168.200.215:9102 CLOSE_WAIT
> > >
> > > Sometimes the job finishes, and sometimes the job stays like this
> (with 

[Bacula-users] Can't run concurrent jobs to the same resource

2007-11-05 Thread Elie Azar
Hi,

I created an LVM disk, made up of 2x500GB hard drives, and I made the 
necessary changes in the bacula conf files to be able to send jobs to 
that new storage. Here are some of the configuration changes.

My problem is that I cannot run multiple backup jobs simultaneously. 
The first job starts, then I get an error on each subsequent job. I 
don't know if I'm missing something in my configuration, whether I still 
need to do something to get the LVM disk properly set up in Bacula, or 
something else... I'm not sure.

I have the "Maximum Concurrent Jobs = 20" in the Storage and the Client 
directives, and I have it setup to 80 in the Director directive, in the 
bacula-dir.conf file; and also in the Storage directive in the 
bacula-sd.conf file.

It seems to be requesting the same volume, BLVPool13-BLV01-V0003 in this 
case, which is being used by the previous job. I would expect it to be 
getting the next available volume. Is there something that I'm not 
seeing...

Any help would be greatly appreciated.

Thanks,
Elie Azar


in bacula-dir.conf:
--
Storage {
 Name = BLV01C
 Address = coal.vl0.impulse.net
 SDPort = 9103
 Password = "xuCV21/ZNJbdp5joPZY0XEB10uFMf48ZnB98Lp3gri82"
 Device = BLV01
 Media Type = BLV01
 Maximum Concurrent Jobs = 20
}

JobDefs {
 Name = "BLVCRetention13"
 Type = Backup
 Level = Incremental
 Messages = Standard
 Pool = BLVPool13
 Priority = 10
 Maximum Concurrent Jobs = 20
}

# 13 day BLV pool definition
Pool {
 Name = BLVPool13
 Pool Type = Backup
 Recycle = yes                   # Bacula can automatically recycle Volumes
 AutoPrune = yes                 # Prune expired volumes
 Volume Retention = 13 days      # 13 days
 Maximum Volume Jobs = 1         # one job per volume
 LabelFormat = "${Pool}-${MediaType}-V${NumVols:p/4/0/r}"
}
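
As an aside, the LabelFormat above pads the volume counter to four digits, which is why the job log later shows names like "BLVPool13-BLV01-V0003". A rough Python equivalent of that naming scheme (an illustration only, not Bacula's implementation; the function name is invented):

```python
# Illustration of LabelFormat "${Pool}-${MediaType}-V${NumVols:p/4/0/r}":
# the volume counter is zero-padded to a width of 4 digits.
def volume_label(pool: str, media_type: str, num_vols: int) -> str:
    return f"{pool}-{media_type}-V{num_vols:04d}"

print(volume_label("BLVPool13", "BLV01", 3))  # BLVPool13-BLV01-V0003
```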

Job {
 Name = "Coal-Test1"
 JobDefs = "BLVCRetention13"
 Storage = "BLV01CV"
 Client = coal-fd
 FileSet = "Test Set"
 Write Bootstrap = "/var/lib/bacula/Coal.bsr"
}
Job {
 Name = "Linux2-Test1"
 JobDefs = "BLVCRetention13"
 Storage = "BLV01CV"
 Client = linux2-fd
 FileSet = "Test Set"
 Write Bootstrap = "/var/lib/bacula/Linux2.bsr"
}


In bacula-sd.conf:
--

Device {
 Name = BLV01
 Archive Device = /backups/autofs/BLV01/bacula
 Media Type = BLV01
 Mount Point = /backups/autofs/BLV01
 @/etc/bacula/device-mount.inc
}



Bconsole error messages:
---

Run Backup job
JobName:  Red-FS
Level:Incremental
Client:   red-fd
FileSet:  Red Set of root usr var home tmp var-spool cyrus-amavis cert 
conf imsp quota sieve tmp
Pool: BLVPool13 (From Job resource)
Storage:  BLV01CV (From Job resource)
When: 2007-11-05 13:36:47
Priority: 10
OK to run? (yes/mod/no): yes
Job queued. JobId=13665
*
05-Nov 13:36 coal-dir JobId 13665: Start Backup JobId 13665, 
Job=Red-FS.2007-11-05_13.36.08
05-Nov 13:36 coal-dir JobId 13665: Created new Volume 
"BLVPool13-BLV01-V0003" in catalog.
05-Nov 13:36 coal-dir JobId 13665: Using Device "BLV01"
05-Nov 13:36 coal-sd JobId 13665: Labeled new Volume 
"BLVPool13-BLV01-V0003" on device "BLV01" (/backups/autofs/BLV01/bacula).
05-Nov 13:36 coal-sd JobId 13665: Wrote label to prelabeled Volume 
"BLVPool13-BLV01-V0003" on device "BLV01" (/backups/autofs/BLV01/bacula)
05-Nov 13:36 coal-dir JobId 13665: Max Volume jobs exceeded. Marking 
Volume "BLVPool13-BLV01-V0003" as Used.
red-fd:  /home is a different filesystem. Will not descend from / 
into /home
red-fd:  /usr is a different filesystem. Will not descend from / 
into /usr
red-fd:  /var is a different filesystem. Will not descend from / 
into /var
red-fd:  /cyrus is a different filesystem. Will not descend from / 
into /cyrus
*
*
Run Backup job
JobName:  Linux2-Test1
Level:Incremental
Client:   linux2-fd
FileSet:  Test Set
Pool: BLVPool13 (From Job resource)
Storage:  BLV01CV (From Job resource)
When: 2007-11-05 13:38:37
Priority: 10
OK to run? (yes/mod/no): yes
Job queued. JobId=13667
*

*mes
05-Nov 13:38 coal-dir JobId 13667: Start Backup JobId 13667, 
Job=Linux2-Test1.2007-11-05_13.38.10
05-Nov 13:38 coal-dir JobId 13667: There are no more Jobs associated 
with Volume "BLVPool13-BLV01-V0003". Marking it purged.
05-Nov 13:38 coal-dir JobId 13667: All records pruned from Volume 
"BLVPool13-BLV01-V0003"; marking it "Purged"
05-Nov 13:38 coal-dir JobId 13667: Recycled volume "BLVPool13-BLV01-V0003"
05-Nov 13:38 coal-dir JobId 13667: Using Device "BLV01"
05-Nov 13:38 coal-sd JobId 13667: Fatal error: Cannot recycle volume 
"BLVPool13-BLV01-V0003" on device "BLV01" (/backups/autofs/BLV01/bacula) 
because it is in use by another job.
05-Nov 13:38 linux1-fd: Linux2-Test1.2007-11-05_13.38.10 Fatal error: 
job.c:1752 Bad response to Append Data command. Wanted 3000 OK data, got 
3903 Error append data

05-Nov 13:38 coal-dir JobId 13667: Error: Bacula coal-dir 2.2.5 
(09Oct07): 05-Nov-2007 13:38:42
 Build OS:   i686-pc-linux-gnu gentoo

Re: [Bacula-users] How to describe a fileset for multiple client installations...

2007-11-05 Thread Michael Lewinger
Hi David,

You can/should point to a file in the machine itself:

FileSet {
Name = "Default"
Ignore Fileset Changes = yes
Include {
 File = "C:/Documents And Settings"
^^^
^^Line below - read file from client^
^^^
File = "\\ wrote:

> Hey gang,
>
> I've read through all the docs I can find on the subject but just cannot
> decipher the correct method of describing in a fileset which directories on
> which client machine should be backed up.
>
> Here's the Linux Server:/directories I want backed up where the
> connections have all been made and I successfully backed up the local files:
> {localhost:}/home/*
> RptEngine1:/usr/local/reports/A*.pdf
> DB2:/var/lib/mysql/*
> WEB1:/web/sites/*
>
> Any help would be appreciated.
>
>
> /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
> David Gardner
> email: djgardner(at)yahoo.com
> Yahoo! IM: djgardner
> AIM: dgardner09
> "Everything is a learning experience, even a mistake."
>
>
>
>
> __
> Do You Yahoo!?
> Tired of spam?  Yahoo! Mail has the best spam protection around
> http://mail.yahoo.com
>
> -
> This SF.net email is sponsored by: Splunk Inc.
> Still grepping through log files to find problems?  Stop.
> Now Search log events and configuration files using AJAX and a browser.
> Download your FREE copy of Splunk now >> http://get.splunk.com/
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>



-- 
Michael Lewinger
MBR Computers
http://mbrcomp.co.il
-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now >> http://get.splunk.com/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Mini project

2007-11-05 Thread mark . bergman


In the message dated: Sun, 04 Nov 2007 18:09:51 +0100,
The pithy ruminations from Kern Sibbald on 
<[Bacula-users] Mini project> were:
=> Hello,
=> 
=> Although I am working on a rather large project that I would like to explain 
a 
=> bit later when I have made some real progress (probably after the first of 
=> the year), I am thinking about doing a little mini-project to add a feature 
=> that has always bothered me, and that is the fact that Bacula can at times 
=> when there are failures or bottlenecks have a lot of Duplicate jobs running.


Great idea!
  
=> So, I'm thinking about adding the following two directives, which may not 
=> provide all the functionality we will ever need, but would go a long way:
=> 
=> These apply only to backup jobs.
=> 
=> 1.  Allow Duplicate Jobs  = Yes | No | Higher   (Yes)
=> 
=> 2. Duplicate Job Interval = <time-interval>   (0)
=> 
=> The defaults are in parenthesis and would produce the same behavior as today.
=> 
=> If Allow Duplicate Jobs is set to No, then any job starting while a job of 
the 
=> same name is running will be canceled.

Will that also apply to pending jobs? For example, if one of our large full 
backups (2+TB) is running, a few days of incrementals for other clients may be 
scheduled and pending, but not actually running. I'd be happy seeing the 
automatic cancellation of duplicates applied to pending jobs--even if no job of 
that name is running yet.

=> 
=> If Allow Duplicate Jobs is set to Higher, then any job starting with the 
same 
=> or lower level will be canceled, but any job with a Higher level will start. 
 
=> The Levels are from High to Low:  Full, Differential, Incremental

Is it possible to reword this? The description introduces several points of
possible confusion:

1. "level" sounds a lot like "priority"

2. it's inconsistent that "higher" levels take precedence over "lower"
levels, but that "lower" priorities take precedence over
"higher" (numeric) priorities

3. a fixed choice of "Higher" may not always be appropriate in
different environments. 
For example, if I've got a pending Full backup and a pending
Incremental, I might want the Incremental to take precedence,
since it will be relatively quick, and then the Full will be
automatically rescheduled (since it wasn't run) after the
Incremental completes.


What about having the syntax be:

Allow Duplicate Jobs = Yes | No | Precedence (Yes)

and adding yet-another-option:

Duplicate Precedence List = comma-separated list of backup levels, in
user-defined priority order from left to right, as in
"Full, Incremental, Differential" (default = Null)

=> 
=> Finally, if you have Duplicate Job Interval set to a non-zero value, any job 
=> of the same name which starts <time-interval> after a previous job of the 
=> same name would run; any one that starts within <time-interval> would be 
=> subject to the above rules.  Another way of looking at it is that the Allow 
=> Duplicate Jobs directive will only apply after <time-interval> of when the 
=> previous job finished (i.e. it is the minimum interval between jobs).

That would be very helpful.

Thanks,

Mark

=> 
=> Comments?
=> 
=> Best regards,
=> 
=> Kern
=> 
=> PS: I think the default for Allow Duplicate Jobs should be Higher, but that 
=> would change current behavior ...
=> 


Mark Bergman  [EMAIL PROTECTED]
System Administrator
Section of Biomedical Image Analysis 215-662-7310
Department of Radiology,   University of Pennsylvania

http://pgpkeys.pca.dfn.de:11371/pks/lookup?search=mark.bergman%40.uphs.upenn.edu




The information contained in this e-mail message is intended only for the 
personal and confidential use of the recipient(s) named above. If the reader of 
this message is not the intended recipient or an agent responsible for 
delivering it to the intended recipient, you are hereby notified that you have 
received this document in error and that any review, dissemination, 
distribution, or copying of this message is strictly prohibited. If you have 
received this communication in error, please notify us immediately by e-mail, 
and delete the original message.



Re: [Bacula-users] Bacula-usersDVD+RW "overwrites"

2007-11-05 Thread Hydro Meteor
On 11/5/07, Wes Hardaker <[EMAIL PROTECTED]> wrote:
>
> > "HM" == Hydro Meteor <[EMAIL PROTECTED]> writes:
>
> HM> For those in Bacula DVD userland who are using DVD+RWs, it strikes me
> as if
> HM> 1,000 overwrites is really not all that bad.
>
> I do use DVD+RW to back up important parts of my server (and it uses a
> disk cache for anything "not on it").  I've learned a few things during
> using this process.
>
> For one thing, the default dvd-handler doesn't turn on the -dvd-compat
> flag, which was causing me problems with parts not getting written and
> then later not being readable.  I'm not entirely sure what it does, but
> I turned it on a week or two ago and suddenly my backups are much more
> reliable.
>
> (I've been meaning to post here with the experience, but I was waiting
> to make sure it made a difference.  I'm positive it has at this point,
> though I don't feel comfortable yet saying it's completely solved.)
>
> In dvd-handler line 112 add the flag to the list of default flags:
>
>   self.growparams = " -dvd-compat -A 'Bacula Data' -input-charset=default " + \
>                     "-iso-level 3 -pad -p 'dvd-handler / growisofs' " + \
>                     "-sysid 'BACULADATA' -R"


Wes, thank you for posting about this and your experiences. Once I get my
system entirely automated I will then be able to make an assessment to
determine if your suggested modification to dvd-handler makes a difference.
Then we can compare notes. Hopefully other Bacula DVD proponents can also
consider this if they are having stability problems writing to DVD+RW.

Cheers,

-H


--
> "In the bathtub of history the truth is harder to hold than the soap,
> and much more difficult to find."  -- Terry Pratchett
>


[Bacula-users] Pkgsrc Bacula 2.2.5 (WAS: Re: Bacula user seeks maintainer)

2007-11-05 Thread Brian A Seklecki (Mobile)
> I updated bacula to the recent 2.2.2 release, but as said I can 

Thank you very much for the 2.0x bump to 2.2

> no longer...

That's a shame -- pkgsrc is the best hope of getting Bacula onto exotic
platforms.

2.2.5 was a minor patch-level.  Has anyone tried manually bumping the
Makefile and updating distinfo?

I'll send-pr if it looks good.

~BAS




Re: [Bacula-users] bacula 2.2.5 Change FileRetention and JobRetention in Client Table?

2007-11-05 Thread Arno Lehmann
Hi,

05.11.2007 19:56,, pedro moreno wrote::
>   Hi.
> 
>   I updated the Bacula server from 1.38.x to 2.2.5 and everything looks 
> right; I tested a restore procedure on each client and everything is 
> working. But I have an issue - not a big deal, but I would like to fix 
> it. I have 2 servers with FreeBSD 6.2, but one of them is not recycling 
> my volumes. These are all my volumes:
> 
> +---------+------------------+-----------+---------+----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
> | MediaId | VolumeName       | VolStatus | Enabled | VolBytes       | VolFiles | VolRetention | Recycle | Slot | InChanger | MediaType | LastWritten         |
> +---------+------------------+-----------+---------+----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
> |       6 | OakFullFile-0004 | Used      |       1 | 36,430,308,943 |        8 |    2,419,200 |       1 |    0 |         0 | File      | 2007-10-15 22:02:03 |
> |      10 | OakFullFile-0006 | Used      |       1 | 31,626,348,650 |        7 |    2,419,200 |       1 |    0 |         0 | File      | 2007-10-01 22:02:06 |
> |      12 | OakFullFile-0007 | Used      |       1 | 36,411,310,613 |        8 |    2,419,200 |       1 |    0 |         0 | File      | 2007-10-08 22:01:59 |
> |      75 | OakFullFile-0008 | Used      |       1 | 35,145,188,372 |        8 |    2,419,200 |       1 |    0 |         0 | File      | 2007-10-30 16:47:31 |
> |      86 | OakFullFile-0009 | Used      |       1 | 34,500,825,458 |        8 |    2,419,200 |       1 |    0 |         0 | File      | 2007-10-30 16:11:40 |
> |      87 | OakFullFile-0010 | Used      |       1 |     85,351,535 |        0 |    2,419,200 |       1 |    0 |         0 | File      | 2007-10-25 13:24:07 |
> +---------+------------------+-----------+---------+----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
> 
>  This is one client:
> 
> Client: name=MAIL-CLI address= 192.168.1.7  
> FDport=9102 MaxJobs=1
>   JobRetention=27 days  FileRetention=27 days  AutoPrune=1
>   --> Catalog: name=MyCatalog address=*None* DBport=0 db_name=bacula
>   db_user=bacula MutliDBConn=0
> 
> This is bacula-Client data in mysql:
> +----------+-----------------+-------+-----------+---------------+--------------+
> | ClientId | Name            | Uname | AutoPrune | FileRetention | JobRetention |
> +----------+-----------------+-------+-----------+---------------+--------------+
> |        1 | MAILOAK-CLI     | 2.0.3 |         1 |       2592000 |      2678400 |
> |        2 | FIREWALLOAK-CLI | 2.0.3 |         1 |       2592000 |      2678400 |
> |        3 | SAMBAOAK-CLI    | 2.2.5 |         1 |       2592000 |      2678400 |
> |        4 | ACCOUNT-CLI     | 2.2.5 |         1 |       2592000 |      2678400 |
> |        5 | FILTEROAK-CLI   | 2.0.3 |         1 |       2592000 |      2678400 |
> |        6 | CATALOG-CLI     | 2.2.5 |         1 |       2592000 |      2678400 |
> |        7 | PBXOAK-CLI      | 2.2.5 |         1 |       2592000 |      2678400 |
> +----------+-----------------+-------+-----------+---------------+--------------+
> 
>   You can see that the client resource && database data is different.

The client entries in the catalog should be updated to the current 
configuration when you actually run a job on the client. They seem to 
be correct, though.

> My pools have a VolumeRetention period of 28 days. I changed my clients' 
> File && Job Retention to 27 days, restarted Bacula and reloaded the 
> bacula-dir file, but the database values didn't change.
> 
> I want to recycle volumes every 28 days; right now I have to manually 
> recycle my volumes.

You need to update the volumes to their current pool settings. The 
pool settings in the configuration are merely used as a template when 
creating new volumes.

You've got to update all volumes from pool in bconsole, for example, 
and everything should work as expected.

Arno

> This is my bacula-dir.conf:
> 
> Client {
>   Name = PBXOAK-CLI
>   Address = 192.168.1.5 
>   FDPort = 9102
>   Catalog = MyCatalog
>   Password = "PASSWORD"
>   File Retention = 27 d

Re: [Bacula-users] scsi problems

2007-11-05 Thread Ralf Gross
Michael Galloway schrieb:
> i'm going to just add this bit of data into the mix. dd onto and
> off the tape device:
> 
> # dd if=/dev/zero of=/dev/nst0 bs=65536 count=100000
> 100000+0 records in
> 100000+0 records out
> 6553600000 bytes (6.6 GB) copied, 61.687 seconds, 106 MB/s
> # mt -f /dev/nst0 rewind
> # dd of=/dev/null if=/dev/nst0 bs=65536 count=100000
> 100000+0 records in
> 100000+0 records out
> 6553600000 bytes (6.6 GB) copied, 58.3182 seconds, 112 MB/s
> # mt -f /dev/nst0 rewind
> # dd if=/dev/zero of=/dev/nst0 bs=65536 count=300000
> 300000+0 records in
> 300000+0 records out
> 19660800000 bytes (20 GB) copied, 181.502 seconds, 108 MB/s
> # mt -f /dev/nst0 rewind
> # dd of=/dev/null if=/dev/nst0 bs=65536 count=300000
> 300000+0 records in
> 300000+0 records out
> 19660800000 bytes (20 GB) copied, 215.185 seconds, 91.4 MB/s


I also get these numbers with my LTO-4 drives and dd or HP's test tool
hptapeperf. hptapeperf writes directly from memory to tape (like dd),
no disks involved. You can also choose between 2:1 or 3:1 compression
ratio. I thought I should get higher write speeds with a compression
ratio of 3:1 or 2:1, but I can't get beyond ~110 MB/s. The drive is
capable of 120 MB/s native and up to 240 MB/s with compression.

BTW: I had no problems with the btape test and a LSI SCSI HBA (no FC).

Ralf
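
As a sanity check on the figures in this thread: dd reports decimal megabytes per second, so the rates can be recomputed from the byte totals and elapsed times (a small Python sketch; the byte counts assume the bs=65536 runs of roughly 6.55 GB and 19.66 GB reported above):

```python
# Recompute dd-style throughput (decimal MB/s) from bytes and elapsed seconds.
def mb_per_s(nbytes: int, seconds: float) -> float:
    return nbytes / seconds / 1e6

# Figures taken from the dd runs quoted in this thread:
print(round(mb_per_s(6_553_600_000, 61.687)))    # 106  (write)
print(round(mb_per_s(19_660_800_000, 215.185)))  # 91   (read)
```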



[Bacula-users] bacula 2.2.5 Change FileRetention and JobRetention in Client Table?

2007-11-05 Thread pedro moreno
  Hi.

  I updated the Bacula server from 1.38.x to 2.2.5 and everything looks
right; I tested a restore procedure on each client and everything is
working. But I have an issue - not a big deal, but I would like to fix
it. I have 2 servers with FreeBSD 6.2, but one of them is not recycling
my volumes. These are all my volumes:

+---------+------------------+-----------+---------+----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
| MediaId | VolumeName       | VolStatus | Enabled | VolBytes       | VolFiles | VolRetention | Recycle | Slot | InChanger | MediaType | LastWritten         |
+---------+------------------+-----------+---------+----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
|       6 | OakFullFile-0004 | Used      |       1 | 36,430,308,943 |        8 |    2,419,200 |       1 |    0 |         0 | File      | 2007-10-15 22:02:03 |
|      10 | OakFullFile-0006 | Used      |       1 | 31,626,348,650 |        7 |    2,419,200 |       1 |    0 |         0 | File      | 2007-10-01 22:02:06 |
|      12 | OakFullFile-0007 | Used      |       1 | 36,411,310,613 |        8 |    2,419,200 |       1 |    0 |         0 | File      | 2007-10-08 22:01:59 |
|      75 | OakFullFile-0008 | Used      |       1 | 35,145,188,372 |        8 |    2,419,200 |       1 |    0 |         0 | File      | 2007-10-30 16:47:31 |
|      86 | OakFullFile-0009 | Used      |       1 | 34,500,825,458 |        8 |    2,419,200 |       1 |    0 |         0 | File      | 2007-10-30 16:11:40 |
|      87 | OakFullFile-0010 | Used      |       1 |     85,351,535 |        0 |    2,419,200 |       1 |    0 |         0 | File      | 2007-10-25 13:24:07 |
+---------+------------------+-----------+---------+----------------+----------+--------------+---------+------+-----------+-----------+---------------------+

 This is one client:

Client: name=MAIL-CLI address=192.168.1.7 FDport=9102 MaxJobs=1
  JobRetention=27 days  FileRetention=27 days  AutoPrune=1
  --> Catalog: name=MyCatalog address=*None* DBport=0 db_name=bacula
  db_user=bacula MutliDBConn=0

This is bacula-Client data in mysql:
+----------+-----------------+-------+-----------+---------------+--------------+
| ClientId | Name            | Uname | AutoPrune | FileRetention | JobRetention |
+----------+-----------------+-------+-----------+---------------+--------------+
|        1 | MAILOAK-CLI     | 2.0.3 |         1 |       2592000 |      2678400 |
|        2 | FIREWALLOAK-CLI | 2.0.3 |         1 |       2592000 |      2678400 |
|        3 | SAMBAOAK-CLI    | 2.2.5 |         1 |       2592000 |      2678400 |
|        4 | ACCOUNT-CLI     | 2.2.5 |         1 |       2592000 |      2678400 |
|        5 | FILTEROAK-CLI   | 2.0.3 |         1 |       2592000 |      2678400 |
|        6 | CATALOG-CLI     | 2.2.5 |         1 |       2592000 |      2678400 |
|        7 | PBXOAK-CLI      | 2.2.5 |         1 |       2592000 |      2678400 |
+----------+-----------------+-------+-----------+---------------+--------------+

  You can see that the client resource && database data is different.
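
One way to see the mismatch: the catalog stores retention periods in seconds, and the values shown above decode to 30 and 31 days, not 27 (a quick Python check of the numbers from the table, nothing Bacula-specific):

```python
# FileRetention/JobRetention are stored in the catalog as seconds.
SECONDS_PER_DAY = 86400

print(2592000 / SECONDS_PER_DAY)  # 30.0 -> FileRetention is still 30 days
print(2678400 / SECONDS_PER_DAY)  # 31.0 -> JobRetention is still 31 days
print(27 * SECONDS_PER_DAY)       # 2332800 -> what a 27-day setting would store
```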

My pools have a VolumeRetention period of 28 days. I changed my clients'
File && Job Retention to 27 days, restarted Bacula and reloaded the
bacula-dir file, but the database values didn't change.

I want to recycle volumes every 28 days; right now I have to manually
recycle my volumes.

This is my bacula-dir.conf:

Client {
  Name = PBXOAK-CLI
  Address = 192.168.1.5
  FDPort = 9102
  Catalog = MyCatalog
  Password = "PASSWORD"
  File Retention = 27 days
  Job Retention = 27 days
  Autoprune = yes
}

My oldest volume's date is 2007-10-01 22:02:06 and now the date is
2007-11-05: almost 35 days. And look:
Scheduled Jobs:
Level  Type    Pri  Scheduled        Name           Volume
Full   Backup   11  05-Nov-07 17:00  FIREWALL       *unknown*
Full   Backup   12  05-Nov-07 17:20  SAMBA          *unknown*
Full   Backup   13  05-Nov-07 18:30  aacount        *unknown*
Full   Backup   16  05-Nov-07 20:00  pbxOAK         *unknown*
Full   Backup   10  05-Nov-07 20:10  MAIL           *unknown*
Full   Backup   14  05-Nov-07 21:10  FILTER         *unknown*
Full   Backup   15  05-Nov-07 22:00  BackupCatalog  *unknown*

Could someone advise me how to fix this? I want Bacula to recycle my
volumes automatically; anything you can offer will be appreciated.

[Bacula-users] RD1000

2007-11-05 Thread Naira Kaieski
Hi all,

I need to back up 130 GB, and for that I need a device with high 
capacity. I am considering using the RD1000. Is anyone already using 
Bacula with an RD1000?

-- 
Thanks,
-
- Naira




Re: [Bacula-users] Mini project

2007-11-05 Thread Kern Sibbald
On Monday 05 November 2007 17:25, [EMAIL PROTECTED] wrote:
> In the message dated: Sun, 04 Nov 2007 18:09:51 +0100,
> The pithy ruminations from Kern Sibbald on
> <[Bacula-users] Mini project> were:
> => Hello,
> =>
> => Although I am working on a rather large project that I would like to
> explain a => bit later when I have made some real progress (probably after
> the first of => the year), I am thinking about doing a little mini-project
> to add a feature => that has always bothered me, and that is the fact that
> Bacula can at times => when there are failures or bottlenecks have a lot of
> Duplicate jobs running.
>
>
> Great idea!
>
> => So, I'm thinking about adding the following two directives, which may
> not => provide all the functionality we will ever need, but would go a long
> way: =>
> => These apply only to backup jobs.
> =>
> => 1.  Allow Duplicate Jobs  = Yes | No | Higher   (Yes)
> =>
> => 2. Duplicate Job Interval = <time-interval>   (0)
> =>
> => The defaults are in parenthesis and would produce the same behavior as
> today. =>
> => If Allow Duplicate Jobs is set to No, then any job starting while a job
> of the => same name is running will be canceled.
>
> Will that also apply to pending jobs? For example, if one of our large full
> backups (2+TB) is running, a few days of incrementals for other clients may
> be scheduled and pending, but not actually running. I'd be happy seeing the
> automatic cancellation of duplicates applied to pending jobs--even if no
> job of that name is running yet.

Well, if you have this enabled, they will never be scheduled, and there will 
be no need to cancel them, as it will be done automatically.

>
> =>
> => If Allow Duplicate Jobs is set to Higher, then any job starting with the
> same => or lower level will be canceled, but any job with a Higher level
> will start. => The Levels are from High to Low:  Full, Differential,
> Incremental
>
> Is it possible to reword this? The description introduces several points of
> possible confusion:
>
>   1. "level" sounds a lot like "priority"
>
>   2. it's inconsistent that "higher" levels take precedence over "lower"
>   levels, but that "lower" priorities take precedence over
>   "higher" (numeric) priorities
>
>   3. a fixed choice of "Higher" may not always be appropriate in
>   different environments.
>   For example, if I've got a pending Full backup and a pending
>   Incremental, I might want the Incremental to take precedence,
>   since it will be relatively quick, and then the Full will be
>   automatically rescheduled (since it wasn't run) after the
>   Incremental completes.
>
>
> What about having the syntax be:
>
>   Allow Duplicate Jobs = Yes | No | Precedence (Yes)

My last proposal used the word HigherLevel, which is much clearer and which I 
prefer over Precedence.  The exact names are still open to a certain amount 
of discussion though.

>
> and adding yet-another-option:
>
>   Duplicate Precedence List = comma separated list of backup levels, in
>   user-defined priority order from left to right, as in
>   "Full, Incremental, Differential" (default = Null)

I'm not too enthusiastic about that suggestion unless you can give me a good 
reason why you need to re-define the levels.  The current level structure 
from high to low is Full, Differential, and Incremental.  I don't see a need 
to change that.
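
For what it's worth, the proposed Higher semantics with that level ordering could be sketched like this (a hypothetical Python illustration of the rule as described in this thread, not actual Bacula code; all names are invented):

```python
# Sketch of the proposed "Allow Duplicate Jobs" decision for backup jobs.
# Levels from high to low: Full, Differential, Incremental.
LEVEL_RANK = {"Full": 3, "Differential": 2, "Incremental": 1}

def may_start(policy: str, new_level: str, running_level: str) -> bool:
    """Return True if a job may start while a same-named job is running."""
    if policy == "Yes":
        return True
    if policy == "No":
        return False
    if policy == "Higher":
        # only a strictly higher level than the running job may start
        return LEVEL_RANK[new_level] > LEVEL_RANK[running_level]
    raise ValueError(f"unknown policy: {policy}")

print(may_start("Higher", "Full", "Incremental"))  # True
print(may_start("Higher", "Incremental", "Full"))  # False
```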

>
> =>
> => Finally, if you have Duplicate Job Interval set to a non-zero value, any
> job => of the same name which starts <time-interval> after a previous job
> of the => same name would run; any one that starts within <time-interval>
> would be => subject to the above rules.  Another way of looking at it is
> that the Allow => Duplicate Jobs directive will only apply after
> <time-interval> of when the => previous job finished (i.e. it is the
> minimum interval between jobs).
>
> That would be very helpful.

Yes, I think this simple mechanism will eliminate a lot of the problems 
associated with bottlenecks, unmounted tapes, long running jobs, 
vacations, ...

I have another one I want to do too, which is to set a minimum period in which 
a Differential or Full is done, and automatically upgrade if one was missed.

Regards,

Kern

>
> Thanks,
>
> Mark
>
> =>
> => Comments?
> =>
> => Best regards,
> =>
> => Kern
> =>
> => PS: I think the default for Allow Duplicate Jobs should be Higher, but
> that => would change current behavior ...
> =>
>
> 
> Mark Bergman  [EMAIL PROTECTED]
> System Administrator
> Section of Biomedical Image Analysis 215-662-7310
> Department of Radiology,   University of Pennsylvania
>
> http://pgpkeys.pca.dfn.de:11371/pks/lookup?search=mark.bergman%40.uphs.upen
>n.edu
>
>
>
>
> The information contained in this e-mail message is intended only for the
> personal and confidential use of the recipient(s) named above. If the
> reader of this message is not the i

Re: [Bacula-users] SAN Autoloader in a Fibre Channel

2007-11-05 Thread Jeronimo Zucco
Answering myself:

Jeronimo Zucco escreveu:
> Hi, list.
>
> Somebody have used bacula with a SAN Fibre Channel autoloader ?
>   
Yes

> How is it works ?
>   
http://sourceforge.net/mailarchive/message.php?msg_id=Pine.LNX.4.64.0710261639240.9145%40mssllu

> Is it supported by bacula ? What model of autoloader are you using ?
>
> Clients plugged in a SAN sent directly to autoloader, or sent first 
> to bacula server plugged in a SAN and then sent to autoloader?
>   
http://sourceforge.net/mailarchive/message.php?msg_name=462DE045.6010408%40its-lehmann.de



Sorry I didn't use the list archive before.

-- 
Jeronimo Zucco
LPIC-1 Linux Professional Institute Certified
Núcleo de Processamento de Dados
Universidade de Caxias do Sul

http://jczucco.blogspot.com




Re: [Bacula-users] Advice

2007-11-05 Thread Dep, Khushil (GE Money)
Thanks Michael & John. Understand this fully now.  

-Original Message-
From: John Drescher [mailto:[EMAIL PROTECTED] 
Sent: 05 November 2007 16:09
To: Dep, Khushil (GE Money)
Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Advice

On 11/5/07, Dep, Khushil (GE Money) <[EMAIL PROTECTED]> wrote:
> Is there anyway to auto recycle/delete or is this a manual process?
>
It is automatic and very flexible. See here:

http://www.bacula.org/dev-manual/Automatic_Volume_Recycling.html#SECTION00251

John



Re: [Bacula-users] scsi problems

2007-11-05 Thread Arno Lehmann
Hi,

05.11.2007 15:40,, Michael Galloway wrote::
> i'm going to just add this bit of data into the mix. dd onto and
> off the tape device:
> 
> # dd if=/dev/zero of=/dev/nst0 bs=65536 count=100000
> 100000+0 records in
> 100000+0 records out
> 6553600000 bytes (6.6 GB) copied, 61.687 seconds, 106 MB/s

Is this with hardware compression turned on? If it is, it's not an 
interesting result, I fear - the nulls will be compressed to nearly 
nothing, and the effective throughput will be quite low :-)
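To take compression out of the picture, the same measurement can be repeated with incompressible data. A minimal sketch, assuming /dev/nst0 is the drive as in the quoted commands:

```shell
# Generate 64 MiB of random data; hardware compression cannot shrink
# random bytes, so the measured rate reflects the real tape throughput.
dd if=/dev/urandom of=/tmp/tapetest.bin bs=1M count=64

# Then stream it to the drive and read it back (tape commands shown
# commented out, since they require the actual hardware):
#   dd if=/tmp/tapetest.bin of=/dev/nst0 bs=65536
#   mt -f /dev/nst0 rewind
#   dd if=/dev/nst0 of=/dev/null bs=65536
```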

> # mt -f /dev/nst0 rewind
> # dd of=/dev/null if=/dev/nst0 bs=65536 count=100000
> 100000+0 records in
> 100000+0 records out
> 6553600000 bytes (6.6 GB) copied, 58.3182 seconds, 112 MB/s
> # mt -f /dev/nst0 rewind
> # dd if=/dev/zero of=/dev/nst0 bs=65536 count=300000
> 300000+0 records in
> 300000+0 records out
> 19660800000 bytes (20 GB) copied, 181.502 seconds, 108 MB/s
> # mt -f /dev/nst0 rewind
> # dd of=/dev/null if=/dev/nst0 bs=65536 count=300000
> 300000+0 records in
> 300000+0 records out
> 19660800000 bytes (20 GB) copied, 215.185 seconds, 91.4 MB/s
> 
> so at least it would seem that the drivers/adapter can move data
> at an acceptable rate. 

Looks like it, yes.

> i've done an strace of btape test and it hangs when the scsi bus
> hangs, and would be glad to make the file available to anyone
> who thinks they can help.

Kernel module developers might be better suited for this stuff... SCSI 
bus hangs are, in my experience, most often caused by hardware issues 
or driver problems, not directly by the application software. I'd even 
say that if application software can hang the SCSI bus merely by 
writing to it, that's a driver bug.

Arno

> -- michael
> 
> 
> 
> On Fri, Nov 02, 2007 at 09:15:08PM +0100, Arno Lehmann wrote:
>> Hi,
>>
>> 02.11.2007 17:55, Michael Galloway wrote:
>>> ok, i'd like to revisit this issue. i changed scsi cards and i still
>>> get scsi crashes from btape test command. new card is adaptec 29320:
>> Bad.
>>
>>> 03:06.0 SCSI storage controller: Adaptec ASC-29320A U320 (rev 10)
>>>
>>> i spent the morning tar onto and off the LTO-4 drive:
>>>
>>> [5:0:15:0]   tapeIBM  ULTRIUM-TD4  7950  /dev/st0
>>>
>>> with no issues. then i erased the tape and started with btape again:
>>>
>>> ./btape -c bacula-sd.conf /dev/nst0
>>> Tape block granularity is 1024 bytes.
>>> btape: butil.c:285 Using device: "/dev/nst0" for writing.
>>> btape: btape.c:368 open device "LTO4" (/dev/nst0): OK
>>> *test
>>>
>>> === Write, rewind, and re-read test ===
>>>
>>> I'm going to write 1000 records and an EOF
>>> then write 1000 records and an EOF, then rewind,
>>> and re-read the data to verify that it is correct.
>>>
>>> This is an *essential* feature ...
>>>
>>> btape: btape.c:827 Wrote 1000 blocks of 64412 bytes.
>>> btape: btape.c:501 Wrote 1 EOF to "LTO4" (/dev/nst0)
>>> btape: btape.c:843 Wrote 1000 blocks of 64412 bytes.
>>> btape: btape.c:501 Wrote 1 EOF to "LTO4" (/dev/nst0)
>>> btape: btape.c:852 Rewind OK.
>>> 1000 blocks re-read correctly.
>>>
>>> hangs there with this in the dmesg log:
>>>
>>> dmesg
>> ...snipped. I can't understand that stuff easily.
>> ...
>>> so, in the end i cannot make a successful btape test run with this LTO-4 
>>> drive
>>> with two different scsi cards. i guess my question is, is this a bacula 
>>> btape 
>>> issue or an LTO or spectralogic scsi issue?
>> I'm really not sure... I know that btape works correctly; at least it 
>> did so every time I used it.
>>
>> LTO tapes, too, can be used without problems by Bacula, and btape 
>> testing them works for me and many others, too.
>>
>> Finally, I'm quite sure that Bacula is run on Spektralogic hardware 
>> somewhere out there.
>>
>> Currently, I can only recommend to ensure you've got the latest 
>> firmware for HBA and tape device, a proven driver and kernel on your 
>> system, and run a current version of btape.
>>
>> If the problem persists (which I assume) you should file a bug report 
>> at bugs.bacula.org and perhaps also contact the developers of the SCSI 
>> driver you're running.
>>
>> Apart from that I can only wish good luck...
>>
>> Arno
>>
>> -- 
>> Arno Lehmann
>> IT-Service Lehmann
>> www.its-lehmann.de
>>
> 

-- 
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de


Re: [Bacula-users] Advice

2007-11-05 Thread John Drescher
On 11/5/07, Dep, Khushil (GE Money) <[EMAIL PROTECTED]> wrote:
> Is there any way to auto recycle/delete or is this a manual process?
>
It is automatic and very flexible. See here:

http://www.bacula.org/dev-manual/Automatic_Volume_Recycling.html#SECTION00251

John
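The recycling behavior described in that chapter is driven by a few Pool directives; a minimal sketch (the pool name and retention value are illustrative, the directive names are from the Bacula manual):

```
Pool {
  Name = FilePool              # illustrative name
  Pool Type = Backup
  AutoPrune = yes              # prune expired volumes automatically
  Recycle = yes                # allow pruned volumes to be reused
  Volume Retention = 30 days   # illustrative retention period
}
```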



Re: [Bacula-users] Advice

2007-11-05 Thread Dep, Khushil (GE Money)
Is there any way to auto recycle/delete, or is this a manual process?

-Original Message-
From: Michael Short [mailto:[EMAIL PROTECTED] 
Sent: 05 November 2007 15:19
To: Dep, Khushil (GE Money); bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Advice

On Nov 5, 2007 9:15 AM, Dep, Khushil (GE Money) <[EMAIL PROTECTED]>
wrote:
> Thanks for the reply Michael. I take it that once pruned those volumes

> are released for use by other jobs?

After the volumes are pruned they are no longer referenced by the
catalog. To have bacula reuse them make sure the volumes are recycled or
deleted.

Sincerely,
-Michael



Re: [Bacula-users] Bacula doesn't see tape in drive after label barcodes

2007-11-05 Thread Arno Lehmann
Hi,

05.11.2007 13:53, Shon Stephens wrote:
> I have an autochanger with 12 tapes. I issue the label barcodes command 
> and it cycles through the tapes in the changer and then does the following:
> 
> 3307 Issuing autochanger "unload slot 11, drive 0" command.
> 3304 Issuing autochanger "load slot 12, drive 0" command.
> 3305 Autochanger "load slot 12, drive 0", status is OK.
> 3301 Issuing autochanger "loaded? drive 0" command.
> 3302 Autochanger "loaded? drive 0", result: nothing loaded.
> 3000 OK label. VolBytes=64512 DVD=0 Volume="A00012" Device="Ultrium-TD3" 
> (/dev/rmt/0cbn)
> Catalog record for Volume "A00012", Slot 12  successfully created.
> 
> *release
> The defined Storage resources are:
>  1: Mentora_Files
>  2: Exabyte_224
> Select Storage resource (1-2): 2
> 3301 Issuing autochanger "loaded? drive 0" command.
> 3302 Autochanger "loaded? drive 0", result: nothing loaded.
> 3921 Device "Ultrium-TD3" (/dev/rmt/0cbn) already released.
> 
> This is where Bacula then "loses" the 12th tape. I can't get Bacula to 
> even recognize that its in the drive and needs to be unloaded. Does 
> anyone know what the issue might be?

Unfortunately, I don't have the solution to this. I face the same 
issue, and I can not always reproduce it, which makes it a bit hard to 
analyze (available time is an issue here, too... :-)

My workaround is to unmount the drive from Bacula, use mtx to unload 
the tape and load the tape requested, and mount from Bacula again.

Until now, that has worked every time.

As it is, I'm not even sure this is an issue purely inside Bacula, 
because a) the autochanger or drive seems to need the extra kick of 
unloading / reloading a tape, and b) not all the installations I 
support show this behavior. It might be OS- or hardware-dependent.

For me, the workaround is OK for a while, because where it happens 
tapes do not need to be changed very often. Also, the issue *seems* to 
happen only when a tape from a different pool than the one currently 
loaded is needed (which points to Bacula, not the environment).

All this is only an impression, not the result of careful observation, 
by the way.

Arno

> -Shon

-- 
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de



Re: [Bacula-users] Advice

2007-11-05 Thread Michael Short
On Nov 5, 2007 9:15 AM, Dep, Khushil (GE Money) <[EMAIL PROTECTED]> wrote:
> Thanks for the reply Michael. I take it that once pruned those volumes
> are released for use by other jobs?

After the volumes are pruned they are no longer referenced by the
catalog. To have bacula reuse them make sure the volumes are recycled
or deleted.

Sincerely,
-Michael



Re: [Bacula-users] DVD+RW "overwrites"

2007-11-05 Thread Wes Hardaker
> "HM" == Hydro Meteor <[EMAIL PROTECTED]> writes:

HM> For those in Bacula DVD userland who are using DVD+RWs, it strikes me as if
HM> 1,000 overwrites is really not all that bad.

I do use DVD+RW to back up important parts of my server (and it uses a
disk cache for anything "not on it").  I've learned a few things during
using this process.

For one thing, the default dvd-handler doesn't turn on the -dvd-compat
flag, which was causing me problems with a part not getting written and
then not being readable later. I'm not entirely sure what the flag does,
but I turned it on a week or two ago and suddenly my backups are much
more reliable.

(I've been meaning to post here with the experience, but I was waiting
to make sure it made a difference. I'm positive it has at this point,
though I don't feel comfortable yet saying it's completely solved.)

In dvd-handler, around line 112, add the flag to the list of default flags:

  self.growparams = " -dvd-compat -A 'Bacula Data' -input-charset=default " + \
                    "-iso-level 3 -pad -p 'dvd-handler / growisofs' " + \
                    "-sysid 'BACULADATA' -R"

-- 
"In the bathtub of history the truth is harder to hold than the soap,
 and much more difficult to find."  -- Terry Pratchett



Re: [Bacula-users] Advice

2007-11-05 Thread Dep, Khushil (GE Money)
Thanks for the reply Michael. I take it that once pruned those volumes
are released for use by other jobs? 

-Original Message-
From: Michael Short [mailto:[EMAIL PROTECTED] 
Sent: 05 November 2007 15:13
To: Dep, Khushil (GE Money); bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Advice

This is a limitation of Bacula: in order to free up space you must prune
your volumes. The Volume Retention setting controls how long the volumes
live in the database, but it is up to you to delete them. If you need
more space you can prune them manually from the console.

Pruning files will only erase them from the database, you must delete
the volume to remove the file from your storage space.

Hopefully at some point in the future there will be a way to expunge
files from volumes by internal differentials.

Sincerely,
-Michael



Re: [Bacula-users] Advice

2007-11-05 Thread Michael Short
This is a limitation of Bacula: in order to free up space you must
prune your volumes. The Volume Retention setting controls how long the
volumes live in the database, but it is up to you to delete them.
If you need more space you can just prune them manually from the
console.

Pruning files will only erase them from the database, you must delete
the volume to remove the file from your storage space.

Hopefully at some point in the future there will be a way to expunge
files from volumes by internal differentials.

Sincerely,
-Michael
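In bconsole, the manual pruning and deletion described above looks roughly like this (the volume name is illustrative; note that delete only removes the catalog record, so the volume file still has to be removed from disk by hand):

```
*prune volume=Full-0001 yes
*delete volume=Full-0001 yes
```

After that, something like rm /backup/Full-0001 (path illustrative) reclaims the disk space.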



Re: [Bacula-users] scsi problems

2007-11-05 Thread Michael Galloway
i'm going to just add this bit of data into the mix. dd onto and
off the tape device:

# dd if=/dev/zero of=/dev/nst0 bs=65536 count=100000
100000+0 records in
100000+0 records out
6553600000 bytes (6.6 GB) copied, 61.687 seconds, 106 MB/s
# mt -f /dev/nst0 rewind
# dd of=/dev/null if=/dev/nst0 bs=65536 count=100000
100000+0 records in
100000+0 records out
6553600000 bytes (6.6 GB) copied, 58.3182 seconds, 112 MB/s
# mt -f /dev/nst0 rewind
# dd if=/dev/zero of=/dev/nst0 bs=65536 count=300000
300000+0 records in
300000+0 records out
19660800000 bytes (20 GB) copied, 181.502 seconds, 108 MB/s
# mt -f /dev/nst0 rewind
# dd of=/dev/null if=/dev/nst0 bs=65536 count=300000
300000+0 records in
300000+0 records out
19660800000 bytes (20 GB) copied, 215.185 seconds, 91.4 MB/s

so at least it would seem that the drivers/adapter can move data
at an acceptable rate. 

i've done an strace of btape test and it hangs when the scsi bus
hangs, and would be glad to make the file available to anyone
who thinks they can help.

-- michael



On Fri, Nov 02, 2007 at 09:15:08PM +0100, Arno Lehmann wrote:
> Hi,
> 
> 02.11.2007 17:55, Michael Galloway wrote:
> > ok, i'd like to revisit this issue. i changed scsi cards and i still
> > get scsi crashes from btape test command. new card is adaptec 29320:
> 
> Bad.
> 
> > 03:06.0 SCSI storage controller: Adaptec ASC-29320A U320 (rev 10)
> > 
> > i spent the morning tar onto and off the LTO-4 drive:
> > 
> > [5:0:15:0]   tapeIBM  ULTRIUM-TD4  7950  /dev/st0
> > 
> > with no issues. then i erased the tape and started with btape again:
> > 
> > ./btape -c bacula-sd.conf /dev/nst0
> > Tape block granularity is 1024 bytes.
> > btape: butil.c:285 Using device: "/dev/nst0" for writing.
> > btape: btape.c:368 open device "LTO4" (/dev/nst0): OK
> > *test
> > 
> > === Write, rewind, and re-read test ===
> > 
> > I'm going to write 1000 records and an EOF
> > then write 1000 records and an EOF, then rewind,
> > and re-read the data to verify that it is correct.
> > 
> > This is an *essential* feature ...
> > 
> > btape: btape.c:827 Wrote 1000 blocks of 64412 bytes.
> > btape: btape.c:501 Wrote 1 EOF to "LTO4" (/dev/nst0)
> > btape: btape.c:843 Wrote 1000 blocks of 64412 bytes.
> > btape: btape.c:501 Wrote 1 EOF to "LTO4" (/dev/nst0)
> > btape: btape.c:852 Rewind OK.
> > 1000 blocks re-read correctly.
> > 
> > hangs there with this in the dmesg log:
> > 
> > dmesg
> ...snipped. I can't understand that stuff easily.
> ...
> > so, in the end i cannot make a successful btape test run with this LTO-4 
> > drive
> > with two different scsi cards. i guess my question is, is this a bacula 
> > btape 
> > issue or an LTO or spectralogic scsi issue?
> 
> I'm really not sure... I know that btape works correctly; at least it 
> did so every time I used it. 
> 
> LTO tapes, too, can be used without problems by Bacula, and btape 
> testing them works for me and many others, too.
> 
> Finally, I'm quite sure that Bacula is run on Spektralogic hardware 
> somewhere out there.
> 
> Currently, I can only recommend to ensure you've got the latest 
> firmware for HBA and tape device, a proven driver and kernel on your 
> system, and run a current version of btape.
> 
> If the problem persists (which I assume) you should file a bug report 
> at bugs.bacula.org and perhaps also contact the developers of the SCSI 
> driver you're running.
> 
> Apart from that I can only wish good luck...
> 
> Arno
> 
> -- 
> Arno Lehmann
> IT-Service Lehmann
> www.its-lehmann.de
> 



[Bacula-users] Bacula doesn't see tape in drive after label barcodes

2007-11-05 Thread Shon Stephens
I have an autochanger with 12 tapes. I issue the label barcodes command and
it cycles through the tapes in the changer and then does the following:

3307 Issuing autochanger "unload slot 11, drive 0" command.
3304 Issuing autochanger "load slot 12, drive 0" command.
3305 Autochanger "load slot 12, drive 0", status is OK.
3301 Issuing autochanger "loaded? drive 0" command.
3302 Autochanger "loaded? drive 0", result: nothing loaded.
3000 OK label. VolBytes=64512 DVD=0 Volume="A00012" Device="Ultrium-TD3"
(/dev/rmt/0cbn)
Catalog record for Volume "A00012", Slot 12  successfully created.

*release
The defined Storage resources are:
 1: Mentora_Files
 2: Exabyte_224
Select Storage resource (1-2): 2
3301 Issuing autochanger "loaded? drive 0" command.
3302 Autochanger "loaded? drive 0", result: nothing loaded.
3921 Device "Ultrium-TD3" (/dev/rmt/0cbn) already released.

This is where Bacula then "loses" the 12th tape. I can't get Bacula to even
recognize that its in the drive and needs to be unloaded. Does anyone know
what the issue might be?

-Shon


[Bacula-users] Advice

2007-11-05 Thread Dep, Khushil (GE Money)
Hey All,
 
So here's my setup -
 
I have DTD (disk-to-disk) backup set up so that all backups go to a SAN
partition which is 1TB in size. Last night it ran out of space for
further backups! Having looked at the files on the SAN and the jobs in
the Bacula DB, my question is this:
 
When and under what conditions will Bacula delete the files from jobs
that have been deleted from the DB or have expired?
 
-Khush.


Re: [Bacula-users] Mini project

2007-11-05 Thread Alan Brown
On Sun, 4 Nov 2007, Kern Sibbald wrote:

> Hello,
>
> Although I am working on a rather large project that I would like to explain a
> bit later when I have made some real progress (probably after the first of
> the year), I am thinking about doing a little mini-project to add a feature
> that has always bothered me, and that is the fact that Bacula can at times
> when there are failures or bottlenecks have a lot of Duplicate jobs running.
> So, I'm thinking about adding the following two directives, which may not
> provide all the functionality we will ever need, but would go a long way:
>
> These apply only to backup jobs.
>
> 1.  Allow Duplicate Jobs  = Yes | No | Higher   (Yes)

This can currently (partly) be achieved by defining "max concurrent jobs = 
1" in the Job specification.

> 2. Duplicate Job Interval =(0)


My biggest problem with jobs has come up a few times and revolves 
around "rerun failed levels".

If a full/differential backup takes longer than 24 hours to run, the next 
incremental first decides that the previous job has failed, so it upgrades 
itself to full, THEN realises that only 1 concurrent job is allowed, so it 
queues itself.

This order is wrong: such checks should only be made at the time the job 
actually starts running, not at the time it is queued.

>
> The defaults are in parenthesis and would produce the same behavior as today.
>
> If Allow Duplicate Jobs is set to No, then any job starting while a job of the
> same name is running will be canceled.
>
> If Allow Duplicate Jobs is set to Higher, then any job starting with the same
> or lower level will be canceled, but any job with a Higher level will start.
> The Levels are from High to Low:  Full, Differential, Incremental

This would solve the problem I (and others) are seeing.
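For illustration only, the proposed directives would presumably sit in the Job resource like this (hypothetical syntax - these directives are a proposal and do not exist in current releases; the last directive is the existing partial workaround mentioned above):

```
Job {
  Name = "client1-backup"           # illustrative
  # Proposed, not yet implemented:
  Allow Duplicate Jobs = Higher     # cancel duplicates of same or lower level
  Duplicate Job Interval = 6 hours  # illustrative value
  # Existing partial workaround:
  Maximum Concurrent Jobs = 1
}
```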




Re: [Bacula-users] python support removal

2007-11-05 Thread Arno Lehmann
Hi,

I'm sending this to the list - I suppose you wanted it to go there, too.

05.11.2007 10:58,, Rich wrote::
> On 2007.11.04. 23:02, Arno Lehmann wrote:
> ...
>> True. But chances might be better to find such a person if python were 
>> left in... if we had it in Bacula (as long as possible - Bacula core 
>> development might some day become incompatible with how python is 
>> embedded today) that might be a starting point for someone who 
>> knows python's insides well enough.
>>
>> Anyone of you active on the python developers mailing list? There 
>> might be someone around who would support python's embedding in other 
>> software, looking for a nice project (with not too many support issues 
>> currently...)
> 
> so i'm a bit puzzled here now.
> python vs labelformat - which will get booted ? :)

I don't know... there seem to be problems with either of those.

LabelFormat is hard to understand and inflexible (in comparison with a 
python script).

Python requires more maintenance.

What we'll end up with I have no idea. Kern knows, perhaps, so I cc'ed 
him.

> i'd really like to get this working on the test system, where i can 
> freely debug python, if needed.
 >
> i could roll it out with labelformat, but if that breaks in a year (and 
> python is kept), i'll have a bigger problem.

I guess that a final decision will not be made before version 3 is in 
the pipeline.

As far as I know, Kern is currently fixing things in the 2.2 versions, 
and thinking about major projects to put into version 3.

If, during the initial coding of version 3, problems with LabelFormat 
or python develop, a decision will be made. I'm not in a position to 
foresee what will happen, or even which option - LabelFormat or python 
- is more likely to be dropped, though.

Arno

-- 
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de



[Bacula-users] Examples for non-rewritable DVD?

2007-11-05 Thread Hydro Meteor
Hello all --

For those who are backing up to non-rewritable DVD optical media (such as
DVD-R and DVD+R), would anyone be willing to share an example Device
resource? I ask because, while I have a Device resource for DVD+RW working
fine, I have so far been unsuccessful in getting Bacula to back up to a
DVD+R disc. (I ran the growisofs and dvd-handler script on a blank DVD+R
disc dropped into my machine, and Bacula wrote a label - as a "part 1"
file - to the DVD+R, but nothing further.) I also discovered that DVD+R
discs should not be de-iced before writing to them with Bacula via
dvd-handler (at least not manually, as a separate step, the way DVD+RW
discs can be). I was hoping that the same Storage Daemon Device resource
used for DVD+RW could be used for DVD+R discs, but no such luck. In fact,
directives such as MaximumPartSize and WritePartCommand do not seem to
make sense for a non-rewritable DVD, since apparently DVD+R (and DVD-R)
are not multi-session capable. So how does one best use non-rewritable
optical media with Bacula? Is the best approach to use Volume Migration
(write the Volume to hard drive first and then move it in one fell swoop
over to DVD+R or DVD-R)?

Thank you,

-H


[Bacula-users] SAN Autoloader in a Fibre Channel

2007-11-05 Thread Jeronimo Zucco
Hi, list.

    Has anybody used bacula with a SAN Fibre Channel autoloader?

    How does it work?

    Is it supported by bacula? What model of autoloader are you using?

    Do clients plugged into the SAN send directly to the autoloader, or do 
they send first to the bacula server, which then writes to the autoloader?

    I have many questions about this issue; I would be glad if someone 
can help me.

Thank you.

-- 
Jeronimo Zucco
LPIC-1 Linux Professional Institute Certified
Núcleo de Processamento de Dados
Universidade de Caxias do Sul

http://jczucco.blogspot.com




Re: [Bacula-users] python support removal

2007-11-05 Thread Rich
you probably wanted to send this to list ;)

On 2007.11.05. 13:01, Marek Simon wrote:
> I think some scripting support is good for various hacks and unusual 
> solutions, but I think python is kind of an "exotic" language. Very few 
> users know it, very few projects are written in it, and at universities 
> it's taught very rarely. Maybe some other language would be more appropriate.

it's not that exotic, actually. you can even write openoffice.org macros 
in it :)

as for the language itself, i can't comment. i don't code, and if i can 
configure it for my liking once, that's all i want. oh, and i also want 
it not to break on upgrades =)

> Marek
...
-- 
  Rich



[Bacula-users] backing up open files on VISTA using VSS

2007-11-05 Thread yannick
hi guys,

i did some tests a while ago to try to back up open files using bacula on 
Vista. VSS wouldn't work.

I just read the current state of bacula on the web site, and it says that 
VSS is currently supported by bacula.

i checked the latest release notes and this is the only bit i found about 
Vista:

kes  Implement a first cut of Vista VSS, using Win2003 code.

Can someone confirm whether bacula supports Vista VSS? Has anybody been 
successful in backing up open files on Vista?

thanks, kind regards,
Yannick


[Bacula-users] Command for clearing database and empty the storage?

2007-11-05 Thread Fredrik Gidensköld
Hi all,

I have a bacula installation that I have used for testing. Now I want to 
clear the database (MySQL) and the storage; are there any commands/scripts 
for this?

Regards,
Fredrik Gidensköld






Re: [Bacula-users] Tandberg LTO2 (420LTO) good choice for bacula?

2007-11-05 Thread user100
I suppose it should work. We used an external Tandberg LTO2 drive in the 
past with bacula 1.3x for some time (but I'm not sure if it was exactly 
the 420LTO). We changed to an autoloader later, and the LTO2 drive is 
backing up Exchange now.

Greetings,
user100


Andreas Krummrich wrote:
> Hi all,
>
> I have a short question. I'm using bacula 2.2.4 on a Debian etch 
> machine. Now I want to extend the backup-to-disk setup to backup-to-tape.
>
> Is this tape drive (Tandberg LTO2 (420LTO)) a good choice, and is it 
> known to work? Any hints or experience reports are welcome!
>
> Kind Regards,
>  Andreas
>
>   




[Bacula-users] Tandberg LTO2 (420LTO) good choice for bacula?

2007-11-05 Thread Andreas Krummrich
Hi all,

I have a short question. I'm using bacula 2.2.4 on a Debian etch 
machine. Now I want to extend the backup-to-disk setup to backup-to-tape.

Is this tape drive (Tandberg LTO2 (420LTO)) a good choice, and is it 
known to work? Any hints or experience reports are welcome!

Kind Regards,
 Andreas
