[Bacula-users] No appendable volumes, issue.

2006-02-13 Thread Gert Burger
Morning

As of a week or so ago my bacula installation started giving errors as
follow:

13-Feb 09:12 apiary-sd: Job BackupCatalog.2006-02-13_01.10.00 waiting.
Cannot find any appendable volumes.
Please use the "label" command to create a new Volume for:
Storage:  FileStorage
Media type:   File
Pool: Default



I am backing up to a raid system, which is only at 60% capacity at the
moment.

Any idea what the issue could be?

Thanks

Gert Burger
Computer Science Department
University of Pretoria
South Africa

My pool is config'ed as follows:

Pool {
  Name = Default
  Pool Type = Backup
  Recycle = yes             # Bacula can automatically recycle Volumes
  AutoPrune = yes           # Prune expired volumes
  Volume Retention = 1 month
  Accept Any Volume = Yes   # write on any volume in the pool
  Maximum Volume Jobs = 1
  Maximum Volumes = 200
  #Recycle Oldest Volume = Yes
  #Use Volume Once = Yes
  Label Format = FileBackup-${Client}-job${JobId}-${Year}-${Month}-${Day}
}
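
For what it's worth, with Maximum Volume Jobs = 1 each volume is marked
Used after a single job, so a pool capped at Maximum Volumes = 200 can
run out of appendable volumes even with plenty of disk free. A quick way
to check the volume states from bconsole (a sketch; the pool name is
taken from the config above):

```
*list volumes pool=Default
*status dir
```

If all 200 volumes show Used or Full and none are past their retention
period, Bacula has nothing left to append to.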



---
This SF.net email is sponsored by: Splunk Inc. Do you grep through log files
for problems?  Stop!  Download the new AJAX search engine that makes
searching your log files as easy as surfing the  web.  DOWNLOAD SPLUNK!
http://sel.as-us.falkag.net/sel?cmd=lnkkid=103432bid=230486dat=121642
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Single Pool, multiple Storage

2006-02-13 Thread Geir Asle Borgen
On Fri, 2006-02-10 at 18:11 -0500, Matthew Butt wrote:
 I'm setting up a backup to disk system (USB external drives), basically
 running Full monthly, Diff weekly, Inc daily, very similar to the system
 here: http://www.bacula.org/rel-manual/Automated_Disk_Backup.html
 
 The difference is that I also need to implement off-site storage for
 disaster situations.  My idea is to have a drive that's always hooked up
 running the daily incrementals and a larger drive that I only bring in
 for the full and differentials.  That way I always have the latest
 incremental backup to hand and always have the full and diff offsite for
 safety.
 
 The obvious way to implement this is have two Device resources, one for
 each USB drive mounted on the backup server.  Whilst you can specify the
 Storage device per job, this doesn't help as my job uses multiple pools
 (Full, Inc, Diff) which is what I'm trying to split out.
 
 Is there a solution or workaround to this?
I have almost the same case; the solution lies in the Schedule:
Schedule {
  Name = WeeklyCycle
  Run = Full 1st sun at 1:05
  Run = Full Storage=File-Dracula Pool=Tommy-Full-Tape-Pool 1st sat at 1:05
  Run = Incremental tue-sat at 1:05
}

On the 1st Saturday of the month I override which Storage and Pool to use.

-- 
Phone Work: +47 69212321 / 2321  Phone Priv: +47 69809853 (o_  
+47 99521685 / 3137   //\
ICQ: 48948625   MSN: [EMAIL PROTECTED]   V_/_ i n u x
Registered Linux User # 171776 (http://counter.li.org)




Re: Date/Time poll'ing ? (WAS: Re: [Bacula-users] Simulate/Test Scheduler?)

2006-02-13 Thread Martin Simmons
 On Sun, 12 Feb 2006 16:18:01 -0500, Brian A. Seklecki [EMAIL 
 PROTECTED] said:

 On Thu, 2006-02-09 at 22:10 +0100, Wolfgang Denk wrote:
  In message [EMAIL PROTECTED] you wrote:

   In many cases (i.e. when, during the mv operation, the time stamp of
   the testfile inode was modified) /dir1/testfile will not be stored
   because it's not recognized as new - its time stamps are older than
   the last full backup.

  RGHHH It was exactly reasons like this one that made me
  switch from my home-grown tar scripts to a professional backup
 
 Not to be curt, dotty, or any of that, but if we could please move
 the discussion back to testing and simulation of the Scheduler, as
 well as:
 
 - Testing of Retention Times
 - Testing of Recycling/Pruning
 - Testing of automatic promotion of Incremental/Differential to Full...
 
 How often does the Director poll the system date/time?  How often does
 the Storage daemon? I don't see any options in the config to control
 this.  Specifically, for periodic maintenance like
 recycling/pruning/volume status flagging.

AFAIK, these maintenance operations are never run on their own.  They are
always triggered by other operations such as a job running or 'status dir'
when Bacula needs to decide which volume to use.

__Martin




[Bacula-users] job does not work anymore after upgrade to 1.38

2006-02-13 Thread Sebastian Stark


After upgrading from 1.36.2 to 1.38.3 everything worked perfectly, but
one job refuses to back up anything. An upgrade to 1.38.5 did not fix
this.


The FileSet definition did not change and there are definitely files  
that need to be backed up (new files, changed files..).


The job and fileset definitions are very simple and have not been
changed for a long time. From the 'list jobs' output I can see that
right after the upgrade to 1.38 this particular job has always had size
0. All other jobs are okay, even on the same client!


Can someone please give me a hint how to debug this problem?

-Sebastian


*estimate level=incremental listing job=Backup yangtse-clusterhome
Connecting to Client yangtse-fd at yangtse:9102
drwxr-xr-x  75 root kyb   1536 2006-02-13 11:10:34  /export/altai1/agbs/cluster


*estimate level=Full listing job=Backup yangtse-clusterhome
Connecting to Client yangtse-fd at yangtse:9102
drwxr-xr-x  75 root kyb   1536 2006-02-13 11:10:34  /export/altai1/agbs/cluster

2000 OK estimate files=1 bytes=0

yangtse ~ % df -h /export/altai1/agbs/cluster
Filesystem size   used  avail capacity  Mounted on
/dev/dsk/c3t1d1s6  1.4T   781G   621G56%/export/altai1

Job {
  Name = Backup yangtse-clusterhome
  JobDefs = yangtse-clusterhome
  Level = Full
  FileSet = clusterhome set
  Schedule = WeeklyCycle
  Write Bootstrap = /export/altai1/bacula/yangtse-clusterhome.bsr
  Priority = 5
}

FileSet {
  Name = clusterhome set
  Include {
    Options {
      signature = MD5
      exclude = yes
    }
    File = /export/altai1/agbs/cluster
  }
}





Re: JobSet, Batch, or Aggregate Concept / Max tape usage per (WAS: Re: [Bacula-users] Hit me w/ a Clue-by-Four (Amanda user))

2006-02-13 Thread Russell Howe
Brian A. Seklecki wrote:
 On Thu, 2006-02-09 at 10:37 +, Russell Howe wrote:
Bacula will do this. Check you don't have Maximum Volume Jobs = 1 in
 
 
 I do.  This is because I'm trying to enforce a specific threshold of a
 single-tape-per-day policy. 

Ah, but this is a single-tape-per-job policy.

 I can manage volume capacity myself.  I'd like to override Bacula's
 default behavior of auto-picking a tape based on status/capacity/recycle
 time.
 
 I'd essentially like to Micro-Manage it perhaps.
 
 I want my tapes to be written to by the scheduled jobs for their daily
 assignment, by all the jobs scheduled to run at that time, and then be
 marked Used.
 
 The trick is, if all the jobs run at the same time, don't mark the
 volume Used or Full until after they've all run.

When it comes to marking tapes as Used or Full, there seem to be two
ways to do it currently:

* Maximum Volume Jobs
* Volume Use Duration

So, you can either say "23 hours after the first job which wrote to this
tape {started,finished}, mark it as used" to get a new tape every day
(I'm not sure which of those two options applies, but I guess it's in
TFM somewhere)

Or, you can say "After n jobs have been written to this tape, close it
off/mark it used"

The latter is what I do. I run 10 jobs every night, most of them on the
same schedule, so they start at the same time, although they don't run
concurrently - I have Maximum Concurrent Jobs = 1 set for the storage
device, so that jobs get written sequentially.
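
In Pool-resource terms, the two approaches look roughly like this (a
sketch with invented names, not my actual config; you would normally
pick one directive or the other, they are shown together only for
illustration):

```
Pool {
  Name = DailyTapes
  Pool Type = Backup
  # Close the volume off after ten jobs have been written to it ...
  Maximum Volume Jobs = 10
  # ... or stop appending 23 hours after the volume is first used.
  Volume Use Duration = 23 hours
}
```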

After the last job completes (the Catalog backup), I get this:

11-Feb 21:43 bacula-dir: Start Backup JobId 26,
Job=BackupCatalog.2006-02-10_23.51.00
11-Feb 21:43 spanky-sd: Volume PoolA_Weekly_1 previously written,
moving to end of data.
11-Feb 21:47 spanky-sd: Ready to append to end of Volume
PoolA_Weekly_1 at file=95.
11-Feb 21:47 bacula-dir: Max Volume jobs exceeded. Marking Volume
PoolA_Weekly_1 as Used.

This does mean that if I change the number of jobs which get written, I
have to remember to update the catalog's record for the media, and
update the Pool definitions in the director's configuration file. I have
this comment in the configuration to remind anyone who might inherit
this setup:

If you're not seeing the above behaviour, I guess there could be some
kind of race condition whereby, when two jobs run concurrently, both
try to increment the "number of jobs" field in the catalog's media
table: both jobs update it to n+1 instead of the first updating it to
n+1 and the second adding one to that (making it n+2)... I find it
hard to believe that such a bug would go unnoticed, though... maybe
nobody uses Maximum Volume Jobs with concurrent backups?!

-- 
Russell Howe
[EMAIL PROTECTED]




Re: [Bacula-users] job does not work anymore after upgrade to 1.38

2006-02-13 Thread Michel Meyers

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Sebastian Stark wrote:


After upgrading from 1.36.2 to 1.38.3 everything worked perfect but one
job refuses to backup anything. An upgrade to 1.38.5 did not fix this.

The FileSet definition did not change and there are definitely files
that need to be backed up (new files, changed files..).

[...]

FileSet {
Name = clusterhome set
Include {
Options {
signature = MD5
exclude = yes


Not sure whether it's related, but this one intrigues me. Why do you
have exclude = yes there?


}
File = /export/altai1/agbs/cluster
}
}


Greetings,
   Michel
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.2 (MingW32) - GPGrelay v0.959

iD8DBQFD8GXb2Vs+MkscAyURAtzUAJ9pc1a3H4XZic28hVxSWvB+sReFjQCeLXIj
EmidSGvRcME18ThGniE7BlE=
=NxiV
-END PGP SIGNATURE-




Re: JobSet, Batch, or Aggregate Concept / Max tape usage per (WAS: Re: [Bacula-users] Hit me w/ a Clue-by-Four (Amanda user))

2006-02-13 Thread Russell Howe
Russell Howe wrote:
 I have this comment in the configuration to remind anyone who might
 inherit this setup:

# ** VERY IMPORTANT **
# In the database, the media entries have a MaxVolJobs setting. This is
# what determines that a tape is full.
#
# Current values are:
# Cases tapes: 10 (Cases, Cases2, Artemis, Zetafax, Thor, Zeus, Users &
# Depts, Bankside, SQL Server, Catalog)
#
# This is changed much like the above, i.e.
# UPDATE media SET MaxVolJobs = 1 WHERE VolumeName LIKE 'PoolA%';


-- 
Russell Howe
[EMAIL PROTECTED]




Re: Date/Time poll'ing ? (WAS: Re: [Bacula-users] Simulate/Test Scheduler?)

2006-02-13 Thread Russell Howe
Martin Simmons wrote:
On Sun, 12 Feb 2006 16:18:01 -0500, Brian A. Seklecki [EMAIL 
PROTECTED] said:

How often does the Director poll the system date/time?  How often does
the Storage daemon? I don't see any options in the config to control
this.  Specifically, for periodic maintenance like
recycling/pruning/volume status flagging.
 
 AFAIK, these maintenance operations are never run on their own.  They are
 always triggered by other operations such as a job running or 'status dir'
 when Bacula needs to decide which volume to use.

You can trigger them by running a job of type Admin.

-- 
Russell Howe
[EMAIL PROTECTED]




[Bacula-users] How to get mount requests before the job runs?

2006-02-13 Thread Chris Dennis

Hello Bacula users

I'm new to Bacula, and beginning to get the hang of it.

I can't find a way to get the operator notified of which tape to mount 
*before* the job runs.  For example, if the daily backup is due to run 
at 10pm, the operator needs to get the notification before they go home 
at 5pm.


Or to put it another way, if 'status dir' says

Scheduled Jobs:
Level  Type Pri  Scheduled  Name Volume
===
Full   Backup10  10-Feb-06 18:00KWTest1  KW001

I want the operator to be asked to mount KW001 (if it's not mounted 
already) as soon as the previous job has finished.


If this is possible, can anyone point me to the relevant bit of the 
documentation?


regards

Chris
--
Chris Dennis  [EMAIL PROTECTED]
Fordingbridge, Hampshire, UK




Re: [Bacula-users] How to get mount requests before the job runs?

2006-02-13 Thread Thomas Glatthor
Hi Chris,

you can use a cron job which mails the output of this script

 #!/bin/bash
 ./bconsole -c ./bconsole.conf <<END_OF_DATA
 status dir
 quit
 END_OF_DATA

to your mail account 60 minutes before you go home,

or you can use the

 RunAfterJob = /etc/bacula/mailmethenextrequiredtapename.sh

directive in the Job resource.
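
Such a RunAfterJob helper might look like this (a sketch: the paths,
address, and the exact 'status dir' column layout are assumptions, not
Bacula defaults; adjust to your installation):

```shell
#!/bin/sh
# Hypothetical helper: extract the Volume column of the first scheduled
# job from 'status dir' output so it can be mailed to the operator.

# Print the last field of the first non-empty line after the === separator.
next_volume() {
    awk '/^===/ { grab = 1; next } grab && NF { print $NF; exit }'
}

# In a live installation you would feed it real console output, e.g.:
#   printf 'status dir\nquit\n' | bconsole -c /etc/bacula/bconsole.conf \
#       | next_volume | mail -s "Next tape to mount" operator@example.com
# Demonstration with the 'status dir' excerpt from the original question:
VOLUME=$(printf '%s\n' \
  'Scheduled Jobs:' \
  'Level  Type Pri  Scheduled  Name Volume' \
  '===' \
  'Full   Backup10  10-Feb-06 18:00KWTest1  KW001' \
  | next_volume)
echo "Please mount volume $VOLUME"
```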


regards

Thomas



Chris Dennis schrieb:
 Hello Bacula users
 
 I'm new to Bacula, and beginning to get the hang of it.
 
 I can't find a way to get the operator notified of which tape to mount
 *before* the jobs runs.  For example, if the daily backup is due to run
 at 10pm, the operator needs to get the notification before they go home
 at 5pm.
 
 Or to put it another way, if 'status dir' says
 
 Scheduled Jobs:
 Level  Type Pri  Scheduled  Name Volume
 ===
 Full   Backup10  10-Feb-06 18:00KWTest1  KW001
 
 I want the operator to be asked to mount KW001 (if it's not mounted
 already) as soon as the previous job has finished.
 
 If this is possible, can anyone point me to the relevant bit of the
 documentation?
 
 regards
 
 Chris
 
 






Re: [Bacula-users] job does not work anymore after upgrade to 1.38

2006-02-13 Thread Sebastian Stark


On 13.02.2006, at 11:56, Michel Meyers wrote:


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Sebastian Stark wrote:


After upgrading from 1.36.2 to 1.38.3 everything worked perfect  
but one
job refuses to backup anything. An upgrade to 1.38.5 did not fix  
this.


The FileSet definition did not change and there are definitely files
that need to be backed up (new files, changed files..).

[...]

FileSet {
Name = clusterhome set
Include {
Options {
signature = MD5
exclude = yes


Not sure whether its related but this one is intriguing me. Why do you
have exclude = yes there?


This is a remnant from early stages. But this fileset worked with 1.36 
for months. Do you think this could be a problem?



-Sebastian




Re: [Bacula-users] job does not work anymore after upgrade to 1.38

2006-02-13 Thread Sebastian Stark


On 13.02.2006, at 12:24, Sebastian Stark wrote:



On 13.02.2006, at 11:56, Michel Meyers wrote:


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Sebastian Stark wrote:


After upgrading from 1.36.2 to 1.38.3 everything worked perfect  
but one
job refuses to backup anything. An upgrade to 1.38.5 did not fix  
this.


The FileSet definition did not change and there are definitely files
that need to be backed up (new files, changed files..).

[...]

FileSet {
Name = clusterhome set
Include {
Options {
signature = MD5
exclude = yes


Not sure whether its related but this one is intriguing me. Why do  
you

have exclude = yes there?


This is a remainder from early stages. But this fileset used to  
work with 1.36 for months now. Do you think this could be a problem?


Seems like removing exclude = yes solves the problem. At least 
estimate no longer returns immediately. Thanks for pointing that out.


Obviously there have been changes in 1.38 that somehow altered the 
way Bacula interprets this (admittedly ambiguous) fileset.
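
For reference, the same FileSet without the stray flag (only exclude =
yes removed; everything else as posted earlier):

```
FileSet {
  Name = "clusterhome set"
  Include {
    Options {
      signature = MD5
    }
    File = /export/altai1/agbs/cluster
  }
}
```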



-Sebastian




Re: [Bacula-users] job does not work anymore after upgrade to 1.38

2006-02-13 Thread Dan Langille
On 13 Feb 2006 at 13:10, Sebastian Stark wrote:

 
 On 13.02.2006, at 12:24, Sebastian Stark wrote:
 
  On 13.02.2006, at 11:56, Michel Meyers wrote:
 
  Sebastian Stark wrote:
 
  After upgrading from 1.36.2 to 1.38.3 everything worked perfect  
  but one
  job refuses to backup anything. An upgrade to 1.38.5 did not fix  
  this.
 
  The FileSet definition did not change and there are definitely files
  that need to be backed up (new files, changed files..).
  [...]
  FileSet {
  Name = clusterhome set
  Include {
  Options {
  signature = MD5
  exclude = yes
 
  Not sure whether its related but this one is intriguing me. Why do  
  you
  have exclude = yes there?
 
  This is a remainder from early stages. But this fileset used to  
  work with 1.36 for months now. Do you think this could be a problem?
 
 Seems like removing exclude = yes solves the problem. At least  
 estimate is not returning immediately. Thanks for pointing that out.
 
 Obviously there have been changes to 1.38 that somehow changed the  
 way bacula interprets this (admittedly ambiguous) fileset.

I was going to suggest reading the release notes for possible changes 
to FileSet directives.

-- 
Dan Langille : Software Developer looking for work
my resume: http://www.freebsddiary.org/dan_langille.php






Re: [Bacula-users] job does not work anymore after upgrade to 1.38

2006-02-13 Thread Sebastian Stark


On 13.02.2006, at 13:52, Dan Langille wrote:


On 13 Feb 2006 at 13:10, Sebastian Stark wrote:



On 13.02.2006, at 12:24, Sebastian Stark wrote:


On 13.02.2006, at 11:56, Michel Meyers wrote:


Sebastian Stark wrote:


After upgrading from 1.36.2 to 1.38.3 everything worked perfect
but one
job refuses to backup anything. An upgrade to 1.38.5 did not fix
this.

The FileSet definition did not change and there are definitely  
files

that need to be backed up (new files, changed files..).

[...]

FileSet {
Name = clusterhome set
Include {
Options {
signature = MD5
exclude = yes


Not sure whether its related but this one is intriguing me. Why do
you
have exclude = yes there?


This is a remainder from early stages. But this fileset used to
work with 1.36 for months now. Do you think this could be a problem?


Seems like removing exclude = yes solves the problem. At least
estimate is not returning immediately. Thanks for pointing that out.

Obviously there have been changes to 1.38 that somehow changed the
way bacula interprets this (admittedly ambiguous) fileset.


I was going to suggest reading the release notes for possible changes
to FileSet directives.


I read the ReleaseNotes and found nothing regarding this, but in the 
ChangeLog I found this innocent-looking comment:


Changes to 1.37.7:
[...]
- Remove old style Include/Excludes.
[...]

I'll read a diff of the code before the next upgrade! :)




Re: [Bacula-users] job does not work anymore after upgrade to 1.38

2006-02-13 Thread Michel Meyers

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Dan Langille wrote:

On 13 Feb 2006 at 13:10, Sebastian Stark wrote:


On 13.02.2006, at 12:24, Sebastian Stark wrote:

On 13.02.2006, at 11:56, Michel Meyers wrote:

Sebastian Stark wrote:

After upgrading from 1.36.2 to 1.38.3 everything worked perfect
but one
job refuses to backup anything. An upgrade to 1.38.5 did not fix
this.

The FileSet definition did not change and there are definitely files
that need to be backed up (new files, changed files..).

[...]

FileSet {
Name = clusterhome set
Include {
Options {
signature = MD5
exclude = yes

Not sure whether its related but this one is intriguing me. Why do
you
have exclude = yes there?

This is a remainder from early stages. But this fileset used to
work with 1.36 for months now. Do you think this could be a problem?

Seems like removing exclude = yes solves the problem. At least
estimate is not returning immediately. Thanks for pointing that out.

Obviously there have been changes to 1.38 that somehow changed the
way bacula interprets this (admittedly ambiguous) fileset.


I was going to suggest reading the release notes for possible changes
to FileSet directives.


Indeed, I had a quick glance at them, but confused 1.36.2 and 1.36.3
(thinking we were already talking about 1.36.3, which has the FileSet
changes from 1.38.0 backported).

Greetings,
   Michel



-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.2 (MingW32) - GPGrelay v0.959

iD8DBQFD8IPi2Vs+MkscAyURAhr3AJ43BoeOLPXq+/98Fsl7ej8ZSKNPQACgmLE8
CGaL0y28DDWi1QSQspy5nhc=
=nuDo
-END PGP SIGNATURE-




Re: [Bacula-users] Re: [Bacula-devel] Encryption Status

2006-02-13 Thread Dan Langille
On 12 Feb 2006 at 13:49, Landon Fuller wrote:

 Dan Langille wrote:
  On 5 Feb 2006 at 18:33, Landon Fuller wrote:
  
  
 In the spirit of status reports -- Bacula's File Daemon now has complete
 support for signing and encrypting data prior to sending it to the
 Storage Daemon, and decrypting said data upon receipt from the Storage
 Daemon.
  
  
  Now that's pretty cool!
 
 Thanks!
 
  How does it work?  Just simple public key encryption type thing?
 
 Right - a session key is randomly generated for each backup job, and 
 that session key is then encrypted using recipients' public keys. By 
 specifying multiple public keys, one can specify multiple recipients 
 that may decrypt the backup.

Multiple decryption keys is a nice feature.

 The downside to the current implementation is that a copy of the 
 encrypted session keys is saved with *every file*. The size of this data 
 structure isn't huge, but it does add up. If I recall correctly, with a 
 single 2048 bit key the on disk structure was ~280 bytes. For 100,000 
 files, that adds up to 26 megabytes.

100,000 files is a big backup anyway.  So another 26MB isn't much.  
I'm backing up to DLT 7000...  26MB is less than 1% of a tape.

 I'm not sure how much of an issue this is for potential users of FD-side 
 data encryption. The upside is that the backups are much more resilient 
 to tape/disk corruption.

I think it's pretty minimal.  The first time they get a bad spot on a 
tape, they'll be glad to have it.

 As far as generating the keys, I'm just using self-signed PEM-encoded
 x509 certificates and private keys. They can be generated with a couple
 of openssl(1) one-liners.
 
 One other issue worth raising -- The director can currently overwrite 
 any file on the FD, including the encryption keys or the FD 
 configuration file, thus exposing private data to the director.

Overwriting is not something I'd thought of.  Perhaps it's time to 
come up with some FD-side restrictions:

Protected {
NoAccess = /path/to/my/keys
}

This is a directory level directive.  The FD will neither read nor 
write anything under that directory.  This cannot be overridden by 
the Director.

 I thought that we could solve this by either:
 1) Provide a Allow Restore flag that allows one to disable 
 restoration until it's actually needed, or
 2) Provide a Restore Root directive that allows the specification 
 of a restoration root under which all restored files must live. I prefer 
 this option, but 1) is certainly easier.

I like #1.  It allows a user to have confidence that files cannot be 
updated without their knowledge.  #2 means that the FD specifies the 
Where parameter that we see in the restore command.  The DIR would 
not be able to override this parameter.

  I've been using the TLS feature for a few weeks now.  I'm pretty
  happy with it.  It's been working every time without fail.  No more
  stunnel for me.
 
 Glad to hear it. We've been using the TLS implementation ever since the 
   tail-end of the 1.37 release cycle, zero problems. One feature I'd 
 really like to see is the implementation of SSH-style public key 
 validation -- caching the public key on first connect and then 
 validating against the cache. It'd be a nice alternative to maintaining 
 x509 infrastructure. I don't know when I'll have time to tackle the 
 problem, but if anyone is interested in the project I'd be happy to 
 provide some pointers. It's not too hard to do ...

SSH keys would be a nice option, I agree.  Many people are familiar 
with SSH keys and it's one less thing for them to learn.
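
For anyone curious, one such openssl(1) one-liner might be (a sketch;
the key size, lifetime, and subject are arbitrary placeholders, and an
unencrypted key file should be protected accordingly):

```shell
# Generate an unencrypted 2048-bit RSA private key plus a matching
# self-signed x509 certificate valid for roughly ten years.
openssl req -new -x509 -newkey rsa:2048 -nodes -days 3650 \
    -subj "/CN=bacula-backup" -keyout master.key -out master.cert
```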


-- 
Dan Langille : Software Developer looking for work
my resume: http://www.freebsddiary.org/dan_langille.php






Re: [Bacula-users] Re: [Bacula-devel] Encryption Status

2006-02-13 Thread Dan Langille
On 12 Feb 2006 at 14:07, Landon Fuller wrote:

 Landon Fuller wrote:
  One other issue worth raising -- The director can currently overwrite 
  any file on the FD, including the encryption keys or the FD 
  configuration file, thus exposing private data to the director.
 
 Something else I forgot to mention; the file daemon also ensures data 
 integrity by signing each file. Currently, only file data is signed -- 
 permissions, ownership, et al are not.

Signing it is nice.  Does any signature verification occur?  e.g. the 
SD as it receives the data during backup?  Before SD sends during 
restore?  When the FD receives during restore?

 AFAIK, during a restore, the storage daemon will provide the stream
 data in the same order it was written by the file daemon. If that's
 true, the easiest way to add extra file attributes/streams to the
 signature is to checksum them as we send them to (and receive them
 from) the storage daemon. 

After checksum'ing them, what would you do with them?

 Kern, is it reasonable to assume that the Storage Daemon will always 
 provide per-file stream data in the order it was written by the File 
 Daemon? If not, I'd guess the alternative is to cache the file 
 attributes on restore and checksum them in the standard order.

What would happen if we received out-of-order packets?

-- 
Dan Langille : Software Developer looking for work
my resume: http://www.freebsddiary.org/dan_langille.php






[Bacula-users] Volume Status - Options: Differences

2006-02-13 Thread Christoff Buch

Hello there,

(on the bconsole) by doing: update ->
Volume parameters -> (choosing pool and entering media id), then choosing
Volume Status you can get to:

Possible Values are:
  1: Append
  2: Archive
  3: Disabled
  4: Full
  5: Used
  6: Cleaning
  7: Read-Only

Question is:

What are the exact differences between
options:

2: Archive, 3: Disabled and 7: Read-Only???

Maybe I am stupid, but I couldn't work
it out of the manual...
Thing is, we have to take a monthly
tape set out of the backup-cycle in order to store it in a safe (internal
company-made rules and stuff...), but can't decide on an option to set the
tapes to.
Would be very happy if anyone can help!
Thanks a lot!


Mit freundlichem Gruß

i. A. Christoff Buch

=
[EMAIL PROTECTED]
OneVision Software AG
Dr.-Leo-Ritter-Str. 9
93049 Regensburg
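[Editor's aside: whichever status fits, it can be applied from bconsole; a hedged sketch, since the one-line form below is what later consoles accept (the volume name is a placeholder) — otherwise the interactive update menu described above reaches the same setting:

```
*update volume=Monthly-0001 volstatus=Archive
```
]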

Re: [Bacula-users] Re: [Bacula-devel] Encryption Status

2006-02-13 Thread Phil Stracchino
Dan Langille wrote:
 On 12 Feb 2006 at 14:07, Landon Fuller wrote:
Kern, is it reasonable to assume that the Storage Daemon will always 
provide per-file stream data in the order it was written by the File 
Daemon? If not, I'd guess the alternative is to cache the file 
attributes on restore and checksum them in the standard order.
 
 
 What would happen if we received out-of-order packets?

One assumes this would be taken care of in the TCP stack, unless I'm
misunderstanding the question.

-- 
 Phil Stracchino   [EMAIL PROTECTED]
Renaissance Man, Unix generalist, Perl hacker
 Mobile: 603-216-7037 Landline: 603-886-3518




Re: [Bacula-users] Re: [Bacula-devel] Encryption Status

2006-02-13 Thread Dan Langille
On 13 Feb 2006 at 9:02, Phil Stracchino wrote:

 Dan Langille wrote:
  On 12 Feb 2006 at 14:07, Landon Fuller wrote:
 Kern, is it reasonable to assume that the Storage Daemon will always 
 provide per-file stream data in the order it was written by the File 
 Daemon? If not, I'd guess the alternative is to cache the file 
 attributes on restore and checksum them in the standard order.
  
  
  What would happen if we received out-of-order packets?
 
 One assumes this would be taken care of in the TCP stack, unless I'm
 misunderstanding the question.

That's the question I'm posing.  Does the TCP stack handle that, or 
does Bacula?

-- 
Dan Langille : Software Developer looking for work
my resume: http://www.freebsddiary.org/dan_langille.php






[Bacula-users] bacula stop

2006-02-13 Thread Enrique de la Torre Gordaliza

Hello,

I'm creating a shutdown script for my UPS software on the bacula server. First 
of all I stop director, storage and file daemons. Then I stop mysql daemon. 
But, what could happen if a job is running when I make director stop? Should 
I issue a cancel command to the console before? How can I cancel all the 
jobs, running and scheduled? 

Thanks in advance

Enrique  
-- 
Enrique de la Torre Gordaliza   
Departamento de Arquitectura de Computadores y Automática
Facultad de CC. Físicas, UCM
Av. Complutense s/n C.P:28040 

email: [EMAIL PROTECTED]
tlfn: 913944389




Re: [Bacula-users] bacula stop

2006-02-13 Thread Dan Langille
On 13 Feb 2006 at 15:06, Enrique de la Torre Gordaliza wrote:

 I'm creating a shutdown script for my UPS software on the bacula
 server. First of all I stop director, storage and file daemons. Then I
 stop mysql daemon. But, what could happen if a job is running when I
 make director stop? Should I issue a cancel command to the console
 before? How can I cancel all the jobs, running and scheduled? 

Why not let your normal system shutdown handle this issue?  Nothing 
special needs to be done as far as I know.


-- 
Dan Langille : Software Developer looking for work
my resume: http://www.freebsddiary.org/dan_langille.php






[Bacula-users] Strange info-message [Fwd: Bacula: *none* *none* of *none* *none*]

2006-02-13 Thread Dirk große Osterhues
Hi,

I experience one annoying thing in a backup-system maintained by me. I
can't get rid of the message quoted below with the forwarded strange
subject:

 13-Feb 07:30 digo-dir: Pruned 1 Job on Volume Daily-Tape_0001 from catalog.
 13-Feb 07:30 digo-dir: Pruned 3 Jobs on Volume Daily-Tape_0002 from catalog.
 13-Feb 07:30 digo-dir: Recycled volume Daily-Tape_0002

I tried all message resources, and it turned out that this message is
generated by the info resource. Is there a way to disable that
information message?

Thanks for any hints,
Dirk

-- 
Dirk große Osterhues [EMAIL PROTECTED]
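[Editor's aside: lines like these are emitted at the info level, so they can be filtered per destination in the Director's Messages resource rather than disabled globally. A hedged sketch — destination names and the address follow the stock sample config, not necessarily this site's:

```
Messages {
  Name = Standard
  # "!info" drops informational lines (pruning/recycling notices)
  # from mail while keeping warnings and errors.
  mail = root@localhost = all, !skipped, !info
  console = all, !skipped, !saved
}
```
]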





Re: Date/Time poll'ing ? (WAS: Re: [Bacula-users] Simulate/Test Scheduler?)

2006-02-13 Thread Brian A. Seklecki
 {Increm/Differ}ential to Full...
  
  How often does the Director poll the system date time?  How often does
  the Storage director? I don't see any options in the config to control
  this.  Specifically, for periodic maintenance like
  recycling/pruning/volume status flagging.
 
 AFAIK, these maintenance operations are never run on their own.  They are
 always triggered by other operations such as a job running or 'status dir'

It would be nice if there was some signal or other mechanism to trigger
it.   For example, if a volume is scheduled to recycle 12 hours after
the job run (or after X jobs), I might want to verify that tonight's
volume has been pruned/recycled, last night's is marked used, and
things look good for tonight's backups before I run out to the pub.

I guess I can script a non-interactive cron job that issues status dir
every few hours and bumps it off to me.

I'm used to Amanda's amstatus e-mailing me at 4:45 every day :}

Thanks for the help!

~lava

 when Bacula needs to decide which volume to use.
 
 __Martin
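[Editor's aside: a hedged sketch of the cron job floated above — schedule, paths and recipient are assumptions, and bconsole needs a readable console config to run non-interactively:

```
# /etc/cron.d/bacula-status -- mail the Director status every 4 hours,
# roughly what Amanda's amstatus report provides.
0 */4 * * * root echo "status dir" | bconsole | mail -s "bacula status" root
```
]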




[Bacula-users] Strange problem going to firewall

2006-02-13 Thread Ger Apeldoorn
Hi all,

I'm running the latest bacula, and everything works fine when I backup on the 
LAN.

However, I need to backup a server on the DMZ. This is what I did;

1) Install the file daemon on the dmzserver.
2) opened port 9101-9103 in the firewall for traffic going from DMZ, to the 
LAN.
3) configured both the FD and DIR.

When I start a job, it seems to run fine. After a while (about 580 MB) it 
loses the connection. Here's what I get in Console:

13-Feb 15:25 beagle-dir: dmzsvr.2006-02-13_15.20.49 Fatal error: Network error 
with FD during Backup: ERR=Connection reset by peer
13-Feb 15:26 beagle-dir: dmzsvr.2006-02-13_15.20.49 Fatal error: No Job status 
returned from FD.
13-Feb 15:26 beagle-dir: dmzsvr.2006-02-13_15.20.49 Error: Bacula 1.38.5 
(18Jan06): 13-Feb-2006 15:26:22
  JobId:                  182
  Job:                    dmzserver.2006-02-13_15.20.49
  Backup Level:           Full (upgraded from Differential)
  Client:                 dmzsvr-fd i686-redhat-linux-gnu,redhat,Enterprise 
3.0
  FileSet:                dmzsvr 2006-02-13 10:02:46
  Pool:                   NAS-Files
  Storage:                FileNAS
  Scheduled time:         13-Feb-2006 15:20:37
  Start time:             13-Feb-2006 15:20:51
  End time:               13-Feb-2006 15:26:22
  Priority:               10
  FD Files Written:       0
  SD Files Written:       0
  FD Bytes Written:       0
  SD Bytes Written:       0
  Rate:                   0.0 KB/s
  Software Compression:   None
  Volume name(s):         
  Volume Session Id:      6
  Volume Session Time:    1139503850
  Last Volume Bytes:      1,111,287,193
  Non-fatal FD errors:    0
  SD Errors:              0
  FD termination status:  Error
  SD termination status:  Running
  Termination:            *** Backup Error ***


Note that it says SD Bytes Written = 0. If I cancel the job somewhere, this is
> 0 (depending on how long I wait, several hundreds of MBs.)
This space also gets allocated on the storage, but only in the event of a 
timely cancellation.

It looks to me like it loses the connection when the job either completes or
reaches another threshold.

I've tried to configure Heartbeat Interval = 5 minutes on the FD, but to no
avail.

Any help very much appreciated!

Many thanks,
Ger Apeldoorn.




Re: [Bacula-users] Fileset help

2006-02-13 Thread Martin Simmons
 On Sat, 11 Feb 2006 22:32:24 -0800, Mike [EMAIL PROTECTED] said:
 
 I'm trying to create a FileSet config for one of our FreeBSD jail host
 machines, and I'm having a little trouble getting it just right.
 
 I have a directory (/u/jail) which has several instances of a FreeBSD jail
 (basically a virtual server), all under different directories (using their
 hostnames)
 
 So,
 
 /u/jail/hostname1/
 /u/jail/hostname2/
 
 etc,
 
 We add new hosts to these quite often, so I'd rather not have to specify
 each jail hostname by themselves, but I only want to backup certain
 directories under the jails.
 
 The goal is to back up:
 
 /u/jail/*/etc
 /u/jail/*/usr/local
 /u/jail/*/u
 
 but exclude
 
 /u/jail/*/u/logs
 
 I attempted to use this- not sure if it's right or not, but it looks like it
 was still trying to backup something outside of my config, so I'm guessing
 it's not.
 
 --snip--
 
 FileSet {
 Name = cust
 Include {
 Options {
 Signature = MD5;
 onefs = no;
 
 wilddir = /u/jail/*/etc
 wilddir = /u/jail/*/usr/local
 wilddir = /u/jail/*/u
 }
 Options {
 wilddir = /u/jail/*/u/logs
 Exclude = yes
 }
 
 File = /etc
 File = /usr/local
 File = /u/home
 
 File = /u/jail
 }
 }
 
 --snip--
 
 This is bacula-1.38.2 on FreeBSD 5.4
 
 Any help would be appreciated.

I think there are two problems:

1) Bacula stops looking for Options clauses as soon as one with a pattern
   matches, so /u/jail/*/u overrides /u/jail/*/u/logs and the logs will be
   included.

2) Filenames that match none of the patterns will match the implicit default
   Options clause and still be allowed (e.g. /u/jail/*/bin).

__Martin
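[Editor's aside: given those two problems, one hedged alternative (untested) is to sidestep wildcard fall-through for the jail trees entirely and build the include list at job time with a pipe, keeping only the logs exclusion as a pattern clause:

```
FileSet {
  Name = cust
  Include {
    # Most specific first: Bacula uses the first Options clause whose
    # pattern matches, so the exclusion must precede the catch-all.
    Options {
      wilddir = "/u/jail/*/u/logs"
      Exclude = yes
    }
    Options {
      Signature = MD5
      onefs = no
    }
    File = /etc
    File = /usr/local
    File = /u/home
    # Expand the per-jail directories when the job runs; new jails are
    # picked up automatically and nothing else under /u/jail is included.
    File = "|sh -c 'ls -d /u/jail/*/etc /u/jail/*/usr/local /u/jail/*/u'"
  }
}
```
]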




Re: [Bacula-users] Strange problem going to firewall

2006-02-13 Thread Brian A. Seklecki
Every firewall has a different timeout for TCP sockets.  I would
investigate what the maximum NAT/PAT time duration is on yours.  Enable
verbose debugging to see on what condition it's tearing down the state
table/translation.

Also, check the state table thresholds for timeouts/max durations/max
byte counts, etc.

"pfctl -vvs state" in OpenBSD PF, "show xlate" in Cisco PIX, "ipfstat
-l" in NetBSD IPFilter; I don't know about FreeBSD/Linux
iptables/Checkpoint/Sonicwall/Raptor or any of that, but it's all the
same concept.

~lava

On Mon, 2006-02-13 at 16:17 +0100, Ger Apeldoorn wrote:
 I'm running the latest bacula, and everything works fine when I backup
 on the 




[Bacula-users] I/O errors on gentoo between 1.36 client and 1.38 server // missing current ebuild

2006-02-13 Thread Frank Altpeter
Hi list,

First of all a little question to whom it may concern: Who is the
responsible person to update the gentoo portage tree with a current
bacula ebuild?
Currently it looks like there is only bacula-1.36.3-r2.ebuild while
1.38.5 should be available by now.

But the more important question: Is it possible that a client running
1.36.3-r2 can produce I/O errors while doing a full backup against a
1.38.4 server?

I'm having exactly this problem that a gentoo server does crash at
night when doing a full backup, and it requires a cold start
afterwards.

A sample of the errors:

EXT3-fs error: (device sda3): ext3_find_entry: reading directory
#0241205 offset 0
This happens some thousand times on the console and after that,
nothing works anymore, including a simple ls or reboot command.

I'm not sure, but currently I think that this error only occurs since
updating the server from 1.36 to 1.38, and if I remember correctly,
there have been massive changes in I/O handling...

So - can someone provide an ebuild for 1.38.x please ? :)


--
Le deagh dhùraghd,

Frank Altpeter

Two of the most famous products of Berkeley are LSD and Unix.
I don't think that this is a coincidence.
-- Anonymous




Re: [Bacula-users] Strange problem going to firewall

2006-02-13 Thread Ger Apeldoorn
Hi brian,

Thanks for your reply, please see below,

On Monday 13 February 2006 16:29, Brian A. Seklecki wrote:
 Every firewall has a different timeout for TCP sockets.  I would
 investigate what the maximum NAT/PAT time duration is on yours.  
I'm not using NAT or PAT, only 'real' hosts. The TCP connection inactivity 
timeout in the firewall-rule is defaulted to 60 minutes.

 Enable 
 verbose debugging to see on what condition it's tearing down the state
 table/translation.

Can you tell me how to do that, I've tried to put the client in debugging mode 
through the console, but I never got any messages.


 Also, check a state table thresholds for timeouts/max durations/max
 byte counts, etc.

I'm using a sonicwall, but I cannot find anything about a 'state table'.. 

Thanks,
Ger.




Re: JobSet, Batch, or Aggregate Concept / Max tape usage per (WAS: Re: [Bacula-users] Hit me w/ a Clue-by-Four (Amanda user))

2006-02-13 Thread Alan Brown

On Sun, 12 Feb 2006, Brian A. Seklecki wrote:


Bacula will do this. Check you don't have Maximum Volume Jobs = 1 in


I do.  This is because I'm trying to enforce a specific threshold of a
single-tape-per-day policy.


Wrong directive.

Try Volume Use Duration = 23 hours

in the pool definition.
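[Editor's aside: in context, a minimal sketch (pool name is a placeholder, untested):

```
Pool {
  Name = Daily
  Pool Type = Backup
  # Mark the volume Used ~23h after it is first written, so each day
  # gets a fresh tape without capping it at one job per volume.
  Volume Use Duration = 23 hours
}
```
]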





[Bacula-users] Possibility to choose catalog from job instead of client

2006-02-13 Thread Wanderson Berbert


   Is it possible to choose which catalog will be used from the job
instead of the client, or to use the job to override the definition in the
client?

   I'd like to separate catalogs for different jobs.

thanks

--
Wanderson Berbert
Gerente de Informática
Sermap Comércio Indústria e Serviços Ltda


--
This message was checked by the antivirus system and
is believed to be free of danger.




Re: JobSet, Batch, or Aggregate Concept / Max tape usage per (WAS: Re: [Bacula-users] Hit me w/ a Clue-by-Four (Amanda user))

2006-02-13 Thread Brian A. Seklecki
On Mon, 2006-02-13 at 10:42 +, Russell Howe wrote:
 When it comes to marking tapes as Used or Full, there seem to be two
 ways to do it currently:
 
 * Maximum Volume Jobs
 * Volume Use Duration
 
 So, you can either say 23 hours after the first job which wrote to
 this
 tape {started,finished}, mark it as used to get you a new tape every
 day (I'm not sure which of those two options applies, but I guess it's
 in TFM somewhere)
 
 Or, you can say After n jobs have been written to this tape, close
 it
 off/mark it used 


Ahhh, and this means after Max Retention passes, the tapes get flagged
recycle, and so there's always one tape marked recycle/append and the
rest are marked used?

This is definitely good material for the Backup Strategy chapter in
the documentation.

~lava 

 
 The latter is what I do. I run 10 jobs every night, most of them on the
 same schedule, so they start at the same time, although they don't run





Re: [Bacula-users] Simulate/Test Scheduler?

2006-02-13 Thread Brian A. Seklecki
On Wed, 2006-02-08 at 09:02 +0100, Arno Lehmann wrote:
 Hello,
 
 On 2/8/2006 6:54 AM, Brian A. Seklecki wrote:
  Is there any built-in mechanism to test Schedule {} planning?  I need to
  verify the behavior will work as expected.
 
 If you trust Bacula itself, use the show job=xxx command. Part of the 
 output here:

Also, here's an example of what I mean when I say Simulation, and I
suspect the developers might have a similar testing approach, it's just
not documented/adopted.

Overall approach:

*) Use File or Disk based meta-tapes (Volumes) to save
   auto-changer / manual intervention time
*) Designate your jobs to start at some even point in time in the
   future.
*) Manually label your volumes; make a copy of them with just the header
   written.
*) Flush the database, re-create it from scratch
*) PGDump and reload the database records of interest (media table)
*) Run the FD/SD/DIR in foreground mode with -f -d100
*) Move the clock forward in increments of 12 hours using a for [] loop
   in bourne shell

-

Thus, to simulate a new schedule:

1) Kill the DIR process
2) rm(1) the tape volume data files
3) sh drop_postgresql_tables ; sh drop_postgresql_database 
4) cp(1)/tar(1) the original tape volume data files back into place
5) sh make_postgresql_tables ; sh grant_postgresql_privileges
6) Create the DB entries to accompany the volumes:

INSERT INTO media VALUES (1, 'CFusionDaily0', 0, 1, 'File', 0, NULL,
NULL, '2006-02-12 20:30:23', 0, 0, 0, 0, 1, 0, 0, 0, 0, 'Append', 1,
518400, 82800, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0);

INSERT INTO media VALUES (2, 'CFusionDaily1', 0, 1, 'File', 0, NULL,
NULL, '2006-02-12 20:30:23', 0, 0, 0, 0, 1, 0, 0, 0, 0, 'Append', 1,
518400, 82800, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0);
[]

7) Set the date: 
   # date 200602131149.45
   Sun Feb 12 21:59:45 EST 2006
8) Set the schedule for noon
9) Start the dir with -f -d100
10) Simulate status update poll'ing with:
# while true; do echo -n status dir | bconsole; sleep 10; done
11) Move the time ahead in increments and test your schedule /
rotation policy




[Bacula-users] Backing up PostgreSQL using Bacula and a fifo

2006-02-13 Thread Marcel Gsteiger
Hi all

After a few hours of troubleshooting, I finally have my PostgreSQL database 
backup running.  My backup job creates separate schema/data backups for each 
database, along with a separate backup of global objects. This is much easier 
and safer to handle than all-in-one-file backups. Moreover, my scripts backup 
the data through pipes. So there is no need for additional disk space for large 
database backups. The script automatically determines which databases are on 
the server. My server is a CentOS 4.2 w/SELinux enabled running bacula-1.38.5-4 
with postgresql backend on a x86_64 dual xeon box with a ultrium-3 tape 
attached. The data is always spooled through a separate RAID 0 array (the tape 
is too fast for my other disks). My postgresql is 8.1.2, but my scripts should 
also work with versions >= 7.3 or perhaps 7.4.

I hope this is useful for somebody else, too.

regards
--Marcel

First, create the directory /var/lib/pgsql/data/dump and 
/var/lib/pgsql/data/dump/fifo , chown postgres:bacula, chmod 750.
Ensure that the database user postgres running on the local host has trust 
access to all databases (no passwords needed). This script also works for 
backup of remote databases, but ensure that access rights are set properly.

If you prefer to have a password, you can uncomment the lines
export PGPASSWORD=

in my scripts.

Create these files:

/etc/bacula/make_database_backup: owner root, group postgres, chmod g+x:
FILE
#!/bin/sh
exec > /dev/null
DUMPDIR=/var/lib/pgsql/data/dump
FIFODIR=$DUMPDIR/fifo
export PGUSER=postgres
#export PGPASSWORD=   # only when pg_hba.conf requires it
# hopefully never a big file, so no need for a fifo
/usr/bin/pg_dumpall -g > $DUMPDIR/globalobjects.dump
rm -f $FIFODIR/*.data.dump
for dbname in `psql -d template1 -q -t <<EOF
select datname from pg_database where not datname in ('bacula','template0')
order by datname;
EOF
`
do
 mkfifo $FIFODIR/$dbname.schema.dump
 # -s: schema only
 /usr/bin/pg_dump --format=c -s $dbname --file=$FIFODIR/$dbname.schema.dump 2>&1 > /dev/null &
 mkfifo $FIFODIR/$dbname.data.dump
 # -a: data only
 /usr/bin/pg_dump --format=c -a $dbname --file=$FIFODIR/$dbname.data.dump 2>&1 > /dev/null &
done
/FILE

/etc/bacula/delete_database_backup: owner root, group postgres, chmod g+x:
FILE
#!/bin/sh
DUMPDIR=/var/lib/pgsql/data/dump
FIFODIR=$DUMPDIR/fifo
for dbname in `psql -U postgres -d template1 -q -t <<EOF
select datname from pg_database where not datname in ('bacula','template0') 
order by datname;
EOF
` 
do
 rm -f $FIFODIR/$dbname.schema.dump
 rm -f $FIFODIR/$dbname.data.dump
done
rm -f $DUMPDIR/globalobjects.dump
/FILE

use this helper file to determine the backups needed: /etc/bacula/listdbdump
FILE
#!/bin/sh
FIFODIR=/var/lib/pgsql/data/dump/fifo
for dbname in `psql -d template1 -q -U postgres -h $1 -p $2 -t <<EOF
select datname from pg_database where not datname in ('bacula','template0') 
order by datname;
EOF
`
do
 echo $FIFODIR/$dbname.schema.dump
 echo $FIFODIR/$dbname.data.dump
done
/FILE

create these entries in bacula-dir.conf:

Job {
  Name = hymost-db
  JobDefs = DefaultJob
  Level = Full
  FileSet=myhost-db   
  Client = myhost-fd   
  Schedule = WeeklyCycleAfterBackup
  # This creates a backup of the databases with pg_dump to fifos
  Client Run Before Job = "su - postgres -c \"/etc/bacula/make_database_backup\""
  # This deletes the backup and fifo files
  Client Run After Job  = "su - postgres -c \"/etc/bacula/delete_database_backup\""
  Priority = 17   # run after main backup
}

FileSet {
  Name = myhost-db   
  Include {
 Options {
signature = MD5
readfifo = yes   
 }
   File = /var/lib/pgsql/data/dump/globalobjects.dump
   File = |/etc/bacula/listdbdump myhost-or-ip.mynet.lan 5432
  }
}
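[Editor's aside: the moving part all of the scripts above rely on is the named pipe — pg_dump blocks writing into the fifo until the reader (here Bacula's FD, via readfifo = yes) opens it, so the dump never touches disk. A minimal self-contained illustration, with cat standing in for the FD:

```shell
#!/bin/sh
# Writer and reader rendezvous on the fifo; nothing is stored on disk.
tmpdir=$(mktemp -d)
mkfifo "$tmpdir/demo.dump"
# Background "dump" writer, like pg_dump --file=<fifo> ... &
printf -- '-- pretend dump data\n' > "$tmpdir/demo.dump" &
# Foreground reader, like the FD consuming the stream.
result=$(cat "$tmpdir/demo.dump")
wait
rm -rf "$tmpdir"
echo "$result"
```
]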






[Bacula-users] Bacula dell PowerVault 110T LTO2 drive

2006-02-13 Thread romje
Hi all,
I'd like to buy an LTO 2 drive from Dell (110T); it seems to work fine
on Linux. I'd just want to check that it works with Bacula, running
Ubuntu Linux 5.10 (2.6-based kernel).

Any feedback would be greatly appreciated
Cheers
jerome


-- 
Auteur cahiers du programmeur J2EE - Eyrolles 2003





Re: [Bacula-users] Bacula dell PowerVault 110T LTO2 drive

2006-02-13 Thread Frank Sweetser
On Mon, Feb 13, 2006 at 07:11:40PM +0100, [EMAIL PROTECTED] wrote:
 Hi all,
 I'd like to buy a LTO 2 drive from Dell (110T) it seems to work fine
 on Linux  I'd just want to check that it works with Bacula ? running an
 Ubuntu Linux 5.010 (2.6 based kernel)

Our 132T dual LTO2 works quite happily under FC4.

-- 
Frank Sweetser fs at wpi.edu  |  For every problem, there is a solution that
WPI Network Engineer  |  is simple, elegant, and wrong. - HL Mencken
GPG fingerprint = 6174 1257 129E 0D21 D8D4  E8A3 8E39 29E3 E2E8 8CEC




Re: [Bacula-users] Bacula dell PowerVault 110T LTO2 drive

2006-02-13 Thread romje

On Mon, February 13, 2006 7:13 pm, Frank Sweetser said:
 On Mon, Feb 13, 2006 at 07:11:40PM +0100, [EMAIL PROTECTED] wrote:
 Hi all,
 I'd like to buy a LTO 2 drive from Dell (110T) it seems to work fine
 on Linux  I'd just want to check that it works with Bacula ? running an
 Ubuntu Linux 5.010 (2.6 based kernel)

 Our 132T dual LTO2 works quite happily under FC4.

Thanks Frank for this quick & positive answer.
cheers
jerome
PS:
never seen such a quick answer in 13 years on the web :)

-- 
Auteur cahiers du programmeur J2EE - Eyrolles 2003





RE: [Bacula-users] Backing up PostgreSQL using Bacula and a fifo

2006-02-13 Thread Magnus Hagander
 First, create the directory /var/lib/pgsql/data/dump and 
 /var/lib/pgsql/data/dump/fifo , chown postgres:bacula, chmod 750.
 Ensure that the database user postgres running on the local 
 host has trust access to all databases (no passwords 
 needed). This script also works for backup of remote 
 databases, but ensure that access rights are set properly.

Recommending trust can be a bit on the dangerous side. There are many
ways that are much better - using ident over unix sockets checks the
actual unix user, or using passwords. Only if every user on your machine
can be trusted (that includes your webserver, if you have one..) should
you use trust.


 If you prefer to have a password, you can uncomment the lines
 EXPORT PGPASSWORD=

This is a deprecated way of specifying the password. You should be using
~/.pgpass instead.
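
[Editor's aside: for reference, a ~/.pgpass sketch for the dumping user — values are placeholders, and the file must be mode 0600 or libpq ignores it:

```
# host:port:database:user:password -- "*" wildcards a field
localhost:5432:*:postgres:s3cret
```
]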

Also, do you really need to su to postgres when you run pg_dump? Can't
you just run them as the bacula user - especially if you're using
trust auth?


Other than that, the scripts do look very nice :-) I think i'll steal
them.


//Magnus




Re: [Bacula-users] Backing up PostgreSQL using Bacula and a fifo

2006-02-13 Thread Dan Langille
On 13 Feb 2006 at 18:03, Marcel Gsteiger wrote:

 Hi all
 
 After a few hours of troubleshooting, I finally have my PostgreSQL
 database backup running.  My backup job creates separate schema/data
 backups for each database, along with a separate backup of global
 objects. This is much easier and safer to handle than
 all-in-one-file backups. Moreover, my scripts backup the data
 through pipes. So there is no need for additional disk space for large
 database backups. The script automatically determines which databases
 are on the server. My server is a CentOS 4.2 w/SELinux enabled running
 bacula-1.38.5-4 with postgresql backend on a x86_64 dual xeon box with
 a ultrium-3 tape attached. The data is always spooled through a
 separate RAID 0 array (the tape is too fast for my other disks). My
 postgresql is 8.1.2, but my scripts should also work with versions =
 7.3 or perhaps 7.4 . 

You're using a FIFO?  You're the first I know of to use this (doesn't 
mean nobody else is...) so that's good to hear.

 I hope this is useful for somebody else, too.

I've added it to the wiki at:

  http://paramount.ind.wpi.edu/wiki/doku.php?id=faq

If I've messed something up there, you can fix it directly.  Thanks.

-- 
Dan Langille : Software Developer looking for work
my resume: http://www.freebsddiary.org/dan_langille.php






Re: [Bacula-users] bacula stop

2006-02-13 Thread Martin Simmons
 On Mon, 13 Feb 2006 15:06:15 +0100, Enrique de la Torre Gordaliza [EMAIL 
 PROTECTED] said:
 
 I'm creating a shutdown script for my UPS software on the bacula server.
 First of all I stop the director, storage and file daemons. Then I stop
 the mysql daemon. But what could happen if a job is running when I make
 the director stop?  Should

If you just kill the daemons, then I think the volume being written could be
left in an inconsistent state relative to the catalog, so it might not work
when you run another job.


 I issue a cancel command to the console before? How can I cancel all the 
 jobs, running and scheduled? 

I don't think there is any built-in way to cancel all running jobs, except by
interactive use of the console (the cancel command requires you to specify the
job/jobid).  You don't need to cancel scheduled jobs, because stopping the
Director will stop them.
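[Editor's note: a scripted workaround is possible by parsing the Director
status; the "Running Jobs:" section layout below is an assumption based on
1.38-era `status dir` output, and the bconsole invocation is illustrative.]

```shell
#!/bin/sh
# Sketch: extract the JobIds of running jobs from "status dir" output so
# each can be cancelled non-interactively with "cancel jobid=N".
parse_running_jobids() {
    # Print only the "Running Jobs:" section, then take the first field
    # (the JobId) of every line reporting a running job.
    sed -n '/^Running Jobs:/,/^====/p' | awk '/is running/ {print $1}'
}

# Real use (assumed bconsole path/config):
#   echo "status dir" | bconsole | parse_running_jobids |
#       while read -r id; do echo "cancel jobid=$id" | bconsole; done
```

Run this from the UPS shutdown script before stopping the daemons, so the
volumes are left in a consistent state.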

__Martin




Re: [Bacula-users] Volume Status - Options: Differences

2006-02-13 Thread Martin Simmons
 On Mon, 13 Feb 2006 14:43:15 +0100, Christoff Buch [EMAIL PROTECTED] 
 said:
 
 (on the bconsole) by doing: update -> Volume parameters -> (choosing pool
 and entering media id), then choosing Volume Status, you can get to:
 
 Possible Values are:
  1: Append
  2: Archive
  3: Disabled
  4: Full
  5: Used
  6: Cleaning
  7: Read-Only
 
 Question is:
 
 What are the exact differences between options:
 
 2: Archive, 3: Disabled and 7: Read-Only???
 
 Maybe I am stupid, but I couldn't work it out from the manual...
 Thing is, we have to take a monthly tape set out of the backup cycle in
 order to store it in a safe (internal company rules and such...),
 but we can't decide on an option to set the tapes to.
 Would be very happy if anyone can help!

AFAIK, there is no current difference between Archive, Disabled and Read-Only
because they refer to unimplemented features of Bacula.

However, bscan marks volumes as Archive to keep them from being re-written,
according to a comment in the code, so it is probably safe to use Archive for
your monthly tapes in the current version.
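[Editor's note: in later Bacula versions the interactive steps can be
collapsed into a single console command; the keyword syntax is an assumption
for 1.38-era bconsole, and the volume name is hypothetical.]

```
update volume=Monthly-0001 volstatus=Archive
```

If the one-line form is not accepted, the interactive `update` menu described
above reaches the same setting.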

__Martin




Re: [Bacula-users] How to get mount requests before the job runs?

2006-02-13 Thread Guy Zuercher
I did something like this, which works for me. Of course you should adapt
the job name and email address before trying it with copy/paste.


echo "Please insert tape: `echo 'status dir' | bconsole > tmp1 ; head \
-n \`grep -n 'Running Jobs:' tmp1 | cut -d: -f1\` tmp1 | grep -o \
'YOUR-BACKUP-JOB-NAME.*' | cut -d' ' -f2`" | mail -s 'Tape mount request' \
backup-operator.example.com ; rm tmp1


Unfortunately I don't have enough time for the multiline regexp; otherwise
I would write a small Python script...


Guy



Martin Simmons wrote:

On Fri, 10 Feb 2006 17:51:47 +, Chris Dennis [EMAIL PROTECTED] said:

I want the operator to be asked to mount KW001 (if it's not mounted
already) as soon as the previous job has finished.

If this is possible, can anyone point me to the relevant bit of the
documentation?


There is no neat way to do this, but you can make a cron job that runs
something like this:

echo 'status dir' | \
  bconsole -c bconsole.conf | \
  mail -s 'Please examine Scheduled Jobs' [EMAIL PROTECTED]

__Martin






Re: [Bacula-users] How to get mount requests before the job runs?

2006-02-13 Thread Chris Dennis

Thomas Glatthor wrote:

Hi Chris,

you can use a cron-job, which mails the output of this script



#!/bin/bash
./bconsole -c ./bconsole.conf <<END_OF_DATA
status dir
quit
END_OF_DATA


OK, thanks for that.

Has anyone written a script to parse the output of 'status dir' to
extract the (likely) next volume?  If not, I'll have a go myself.

Next question:  I can't find a way to ask "Which tape is currently in
the drive?"  Is that possible?  If it is, I could compare the answer
with the output from 'status dir' and send another urgent email if they
don't match.
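[Editor's note: the Storage daemon status does report the mounted volume, so
the comparison can be scripted. The quoted-line format below is an assumption
based on 1.38-era `status storage` output, and the storage name is
hypothetical.]

```shell
#!/bin/sh
# Sketch: extract the currently mounted volume name from "status storage"
# output.  Assumes a line of the form:
#   Device "/dev/nst0" is mounted with Volume "KW001"
current_volume() {
    sed -n 's/.*is mounted with Volume "\([^"]*\)".*/\1/p'
}

# Real use (assumed storage resource name):
#   echo "status storage=FileStorage" | bconsole | current_volume
```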

cheers

Chris





Re: [Bacula-users] How to get mount requests before the job runs?

2006-02-13 Thread Brian A. Seklecki
On Mon, 2006-02-13 at 23:57 +, Chris Dennis wrote:
 Thomas Glatthor wrote:
  Hi Chris,
  
  you can use a cron-job, which mails the output of this script
  
  
 #!/bin/bash
 ./bconsole -c ./bconsole.conf <<END_OF_DATA
 status dir
 quit
 END_OF_DATA
 
 OK, thanks for that.
 

echo -n "status dir" | bconsole -s

...also works.

# echo -n "status dir" | bconsole -s 2>&1 | egrep '^Incremental|^Full' | \
    awk '{print $1, $4, $5}'


Incremental 13-Feb-06 22:00
Incremental 13-Feb-06 22:00


further pipe to | sendmail [EMAIL PROTECTED]

~lava

 Has anyone written a script to parse the output of 'status dir' to
 extract the (likely) next volume?  If not, I'll have a go myself.
 
 Next question:  I can't find a way to ask Which tape is currently in
 the drive?  Is that possible?  If it is, I could compare the answer
 with the output from 'status dir' and send another urgent email if they
 don't match.
 
 cheers
 
 Chris
 
 
 




[Bacula-users] Disable upgrade of Incremental to Full?

2006-02-13 Thread Brian A. Seklecki
Is there any way to disable the auto-upgrade option of Incremental/
Differential to Full or stipulate that an Incremental / Differential job
failure condition occurs when no Full backup exists (instead of
promotion)?

~lava




Re: [Bacula-users] Disable upgrade of Incremental to Full?

2006-02-13 Thread Dan Langille
On 13 Feb 2006 at 19:41, Brian A. Seklecki wrote:

 Is there any way to disable the auto-upgrade option of Incremental/
 Differential to Full or stipulate that an Incremental / Differential job
 failure condition occurs when no Full backup exists (instead of
 promotion)?

Yes, I think I read something in the mailing list recently, but 
cannot recall the details.

-- 
Dan Langille : Software Developer looking for work
my resume: http://www.freebsddiary.org/dan_langille.php






Re: [Bacula-users] Disable upgrade of Incremental to Full?

2006-02-13 Thread Ryan Novosielski

If I remember correctly, it's right in the manual.

Dan Langille wrote:

 On 13 Feb 2006 at 19:41, Brian A. Seklecki wrote:

  Is there any way to disable the auto-upgrade option of Incremental/
  Differential to Full or stipulate that an Incremental / Differential job
  failure condition occurs when no Full backup exists (instead of
  promotion)?

 Yes, I think I read something in the mailing list recently, but
 cannot recall the details.





[Bacula-users] Re: Disable upgrade of Incremental to Full?

2006-02-13 Thread Jens Boettge
Brian A. Seklecki wrote:
 Is there any way to disable the auto-upgrade option of Incremental/
 Differential to Full or stipulate that an Incremental / Differential job
 failure condition occurs when no Full backup exists (instead of
 promotion)?

Why do you want to do that?
If you don't have a base full backup, the incremental will contain all
files, so you will get an incremental which is in reality a full.
I don't understand what you intend by that.

Regards,
Jens





Re: [Bacula-users] Re: Disable upgrade of Incremental to Full?

2006-02-13 Thread Gregory Orange

Jens Boettge wrote:

Brian A. Seklecki wrote:

Is there any way to disable the auto-upgrade option of Incremental/
Differential to Full or stipulate that an Incremental / Differential job
failure condition occurs when no Full backup exists (instead of
promotion)?


Why do you want to do that?
If you don't have a base full backup, the incremental will contain all
files, so you will get an incremental which is in reality a full.
I don't understand what you intend by that.


It would seem he doesn't want it to write anything when it can't find a 
full to base the incremental upon: he wants it to fail and give an 
appropriate error message.


Cheers,
Greg.




Re: [Bacula-users] Disable upgrade of Incremental to Full?

2006-02-13 Thread Brian A. Seklecki
I think we might be talking about:

Rerun Failed Levels = yes|no

If this directive is set to yes (default
no), and Bacula detects that a previous job at a higher level (i.e.
Full or Differential) has failed, the current job level will be upgraded
to the higher level. This is particularly useful for Laptops where they
may often be unreachable, and if a prior Full save has failed, you wish
the very next backup to be a Full save rather than whatever level it is
started as.

Obviously there's some ambiguity here, because the default behavior
seems to be yes, program-wide.

Actually, that would be true if it read: "If Bacula detects that a
previous job at a higher level has not occurred, or has occurred but is
on a tape that was pruned/purged."

What option were you guys thinking?

There are 30 instances of the word Upgrade in the manual, but only 6
are in relation to Backup Levels.  Perhaps we need to use the word
Promotion or another synonym?
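[Editor's note: for reference, the directive under discussion is set per Job
resource. A minimal fragment with hypothetical resource names; Client,
FileSet, Schedule, Storage and Pool directives are omitted for brevity.]

```
Job {
  Name = "laptop-backup"
  Type = Backup
  Level = Incremental
  Rerun Failed Levels = yes
  # Client, FileSet, Schedule, Storage and Pool directives omitted
}
```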

~lava

On Mon, 2006-02-13 at 22:26, Ryan Novosielski wrote:
 If I remember correctly, it's right in the manual.
 
 Dan Langille wrote:
 
 On 13 Feb 2006 at 19:41, Brian A. Seklecki wrote:

  Is there any way to disable the auto-upgrade option of Incremental/
  Differential to Full or stipulate that an Incremental / Differential job
  failure condition occurs when no Full backup exists (instead of
  promotion)?

 Yes, I think I read something in the mailing list recently, but
 cannot recall the details.
 


