[Bacula-users] Using another storage to make a restore.

2011-01-06 Thread Cedric Devillers
Hello list,

I would like to tell you about a problem we have been having since
upgrading Bacula from 2.2.6 to 5.0.3.

In our setup we have the main Bacula server (director and storage) with
an LTO3 tape drive. All jobs are done on this server using its local
storage (storage1). There is also another storage daemon running on
another server and configured in the director (storage2), but it is not
used for backup jobs.

The second storage (storage2) is only used for restores, or in case of
problems with the main Bacula server.

Both storage daemons have the same media type (LTO3).

So to make a restore with 2.2.6, we would mount the tape on the second
storage (storage2), then connect to bconsole and launch the restore
wizard. At the end of the wizard we chose storage2 as the storage
resource for the job, and everything was fine.

But now, when the restore job is launched, it always uses the first
storage (storage1) and fails if the tape is not mounted on that drive.

It looks like a bug to me, so I browsed the bug database and found some
contradictory information:
Here http://bugs.bacula.org/view.php?id=1618 , kerns says that Bacula
will always restore from the same storage daemon where the backup was
made. If that is true now, it was not the case before (because it worked
in our setup).
But here http://bugs.bacula.org/view.php?id=1579 , ebollengier seems to
think that it should work (even if it is maybe not recommended).

So my questions are: is this a bug in 5.0.x? Is using one storage daemon
for backup and another for restore supported at all with Bacula?
If I use a different media type for each storage daemon, will I be able
to use one for backup and the other for restore?
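For illustration, distinct media types per storage daemon would be
declared in the director configuration along these lines. This is only a
sketch: the names, address and password are invented, and I have not
verified that this changes the restore behaviour.

```
# bacula-dir.conf sketch (hypothetical names/addresses/password).
# With a distinct Media Type per storage daemon, a volume written
# through one SD should never be requested from the other.
Storage {
  Name = storage1
  Address = mainserver.example.com     # hypothetical
  SDPort = 9103
  Password = "secret"                  # placeholder
  Device = LTO3-Drive
  Media Type = LTO3-1
}

Storage {
  Name = storage2
  Address = secondserver.example.com   # hypothetical
  SDPort = 9103
  Password = "secret"                  # placeholder
  Device = LTO3-Drive
  Media Type = LTO3-2
}
```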

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Fibre Channel drives keep switching

2008-11-20 Thread Cedric Devillers
Alan Brown wrote:
 On Wed, 19 Nov 2008, Robert LeBlanc wrote:
 
 You can write a udev rule to lock the drives down. I wrote one awhile
 back to keep the changer device the same.

 dev6 ~ # cat /etc/udev/rules.d/55-bacula.rules
 SUBSYSTEM=="scsi", ATTRS{vendor}=="EXABYTE*", ATTRS{type}=="8",
 SYMLINK+="autochanger1 changer"
 
 For the drives you're better off creating udev rules to create something
 like /dev/scsi/ntape/{drive-WWID} and /dev/scsi/generics/{WWID}
 
 These won't change no matter where on the scsi/fabric the drive is.
 
 I've had a request ticket in with Redhat to implement this on RHEL4/5 for
 a long time, ditto with the /dev/sg/ devices for the same reason.
 
 (Not to mention a strong wish for multipath support for tapes and
 generics...)
 

Well, with the latest Red Hat/CentOS 5.2 (I believe it was backported to 4
too) you can use /dev/tape/by-id/scsi-XX-nst; with the device's specific
id in place of XX, that name should stay fixed.

The only problem is that mt still uses /dev/tape by default...
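For the record, a rule along the lines Alan describes might look like
this. It is a sketch only: the match keys, the ID_SERIAL variable and the
link path are illustrative and untested.

```
# /etc/udev/rules.d/60-tape-wwid.rules -- illustrative sketch.
# Create /dev/scsi/ntape/<id> links for no-rewind tape nodes, keyed on
# the device identifier so the name survives moves on the SCSI fabric.
KERNEL=="nst*", SUBSYSTEMS=="scsi", ENV{ID_SERIAL}=="?*", \
    SYMLINK+="scsi/ntape/$env{ID_SERIAL}"
```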



Re: [Bacula-users] Fibre Channel drives keep switching

2008-11-20 Thread Cedric Devillers
Alan Brown wrote:
 On Thu, 20 Nov 2008, Cedric Devillers wrote:
 
 too) you can use /dev/tape/by-id/scsi-XX-nst that should be fixed by
 using specific id (instead of XX).
 
 Looking at that directory it's only been created for the first tape drive.
 
 The others have not been picked up.
 

Well, seems like we need another bug report for Redhat :)


 The only problem is that mt still use /dev/tape by default...
 
 Mt isn't normally used for day-to-day operations...
 

I agree, but I had some scripts using mt (which was probably a bad idea)
that broke because of this.



Re: [Bacula-users] Backup catalog not running.

2007-11-20 Thread Cedric Devillers
Hello,

This is for the records of the mailing-list.

I've found that the problem was indeed with Max Wait Time or Max
Start Delay. I've set them to 4 and 8 hours respectively, and the
BackupCatalog job is working fine.
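For the record, the two directives now read roughly as follows in the Job
resource (a sketch; only the two timeout lines are the point here):

```
Job {
  Name = BackupCatalog
  # ... rest of the job definition unchanged ...
  Max Wait Time = 4 hours    # give up if the job waits on resources longer than this
  Max Start Delay = 8 hours  # give up if the job has not started within 8h of schedule
}
```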

I suppose the strange part (no logging at all, even in the trace file) may
be considered a bug, but I've only seen it on an old 1.38 release.

Have a good day.


Cedric Devillers wrote:
 Cedric Devillers wrote:
 Arno Lehmann wrote:
 Hi,

 13.11.2007 12:54, Cedric Devillers wrote:
 Hello,

 I have a little problem with one of our bacula installation.

 Let me explain the setup first.

 There is two server, the first has the data and the storage daemon
 (meia). The second is the director/DB server (lucita). There is alsa two
 client only servers (hr-accentv2 a windows client and darla).
 Ok. The catalog database is on lucita, right?

 That's right.


 All the jobs are running fine, except the Catalog Backup. The strange
 thing here is that i have nothing in logs about it. If i run it
 manually, it is fine.

 Director ans storage version : 1.38.11 (can't upgrade right now).
 You should plan for that, though :-)

 It is planned yes, i have to backport the packages :)


 I suppose i have messed something with scheduling or concurency, but i
 can't find what.
 I hope I can...

 ...
 Here is the relevant part of my config :

 ### Jobs definitions :

 JobDefs {
   Name = DefaultJob
   Type = Backup
   Level = Incremental
   Client = lucita-fd
   FileSet = Full Set
   Schedule = WeeklyCycle
   Storage = meia-sd
   Messages = Standard
   Pool = Default
   Priority = 10
 }

 JobDefs {
   Name = Daily
   Type = Backup
   Level = Differential
   Client = meia-fd
   FileSet = Full Set
   Schedule = DailyCycle
   Storage = meia-sd
   Messages = Standard
   Pool = Default#overwrited by schedule config, but needed to start
 bacula
   Max Wait Time = 1 hours
   Max Start Delay = 4 hours
   RunBeforeJob = etc/bacula/before.sh
   Priority = 10
 }

 JobDefs {
   Name = Weekly
   Type = Backup
   Level = Full
   Client = meia-fd
   FileSet = Full Set
   Schedule = WeeklyCycle
   Storage = meia-sd
   Messages = Standard
   Pool = Default#overwrited by schedule config, but needed to
 start bacula
   Max Wait Time = 1 hours
   Max Start Delay = 4 hours
   RunBeforeJob = etc/bacula/before.sh
   Priority = 10
 }



 Job {
   Name = Daily-meia
   JobDefs = Daily
   Write Bootstrap = /var/bacula/incremental.bsr
 }

 Job {
   Name = Weekly-meia
   JobDefs = Weekly
   Write Bootstrap = /var/bacula/full.bsr
 }

 Job {
   Name = DARLABackup
   JobDefs = Weekly
   Client = darla-fd
   FileSet=DARLA
   Schedule = DARLACycle
   Max Wait Time = 1 hours
   Max Start Delay = 4 hours
   RunBeforeJob = /etc/bacula/before.sh
   Write Bootstrap = /var/bacula/darla.bsr
 }


 Job {
   Name = HRBackup
   Client = hr-accentv2-fd
   JobDefs = Daily
   Level = Full
   FileSet = HRSet
   Schedule = HRSchedule
   Max Wait Time = 1 hours
   Max Start Delay = 4 hours
   Write Bootstrap = /var/bacula/hraccent.bsr
   Priority = 11   # run after main backup
 }

 #
 # Backup the catalog database (after the nightly save)
 Job {
   Name = BackupCatalog
   JobDefs = Weekly
   Level = Full
   FileSet=Catalog
   Client = lucita-fd
 Ok. This is looking right.

   Schedule = WeeklyCycleAfterBackup
   # This creates an ASCII copy of the catalog
   RunBeforeJob = /etc/bacula/scripts/make_catalog_backup bacula bacula
 Ya2AhGho
   RunBeforeJob = /etc/bacula/before.sh
   # This deletes the copy of the catalog
   RunAfterJob  = /etc/bacula/scripts/delete_catalog_backup
   RunAfterJob = /etc/bacula/after.sh
   RunAfterJob = ssh -i /etc/bacula/Bacula_key [EMAIL PROTECTED]
 I *believe* that 1.38 could only handle one Run After Job and Run
 Before Job option per job. See below how to verify this.

   Write Bootstrap = /var/lib/bacula/BackupCatalog.bsr
   Priority = 11   # run after main backup
 }
 In bconsole, use the command show jobs=BackupCatalog. Search for the
 lines with the Run before/after Job commands.

 If you only see one each, you'll have to put the commands you need to
 execute into one script, and then reference that script.
 You make the point here, multiple runbefore and runafter directive are
 not supported on this version.

 I hope it is supported on 2.2.5, because all my other setup use this :)
 (i checked of course, it is supported).

 I've made the changes and see if the planned backup of tonight run fine.

 But one thing i don't understand is why i don't have anything about this
 job in my logs. And also the fact that manually running the job is
 working fine. Of course, the different runbefore and runafter scripts
 where not running, but the job was executed without issuing any errors.

 I'm wondering if there is not a problem with my Max Wait Time and Max
 Start Delay directive. But as far i undrestand thems, they should be good.

 
 
 Ok, the runbefore runafter scripts are working fine now

Re: [Bacula-users] Backup catalog not running.

2007-11-15 Thread Cedric Devillers
Cedric Devillers wrote:
 Arno Lehmann wrote:
 Hi,

 13.11.2007 12:54, Cedric Devillers wrote:
 Hello,

 I have a little problem with one of our bacula installation.

 Let me explain the setup first.

 There is two server, the first has the data and the storage daemon
 (meia). The second is the director/DB server (lucita). There is alsa two
 client only servers (hr-accentv2 a windows client and darla).
 Ok. The catalog database is on lucita, right?
 
 
 That's right.
 
 
 All the jobs are running fine, except the Catalog Backup. The strange
 thing here is that i have nothing in logs about it. If i run it
 manually, it is fine.

 Director ans storage version : 1.38.11 (can't upgrade right now).
 You should plan for that, though :-)
 
 
 It is planned yes, i have to backport the packages :)
 
 
 I suppose i have messed something with scheduling or concurency, but i
 can't find what.
 I hope I can...

 ...
 Here is the relevant part of my config :

 [... JobDefs and other Job definitions snipped; quoted unchanged from earlier in the thread ...]

 #
 # Backup the catalog database (after the nightly save)
 Job {
   Name = BackupCatalog
   JobDefs = Weekly
   Level = Full
   FileSet=Catalog
   Client = lucita-fd
 Ok. This is looking right.

   Schedule = WeeklyCycleAfterBackup
   # This creates an ASCII copy of the catalog
   RunBeforeJob = /etc/bacula/scripts/make_catalog_backup bacula bacula
 Ya2AhGho
   RunBeforeJob = /etc/bacula/before.sh
   # This deletes the copy of the catalog
   RunAfterJob  = /etc/bacula/scripts/delete_catalog_backup
   RunAfterJob = /etc/bacula/after.sh
   RunAfterJob = ssh -i /etc/bacula/Bacula_key [EMAIL PROTECTED]
 I *believe* that 1.38 could only handle one Run After Job and Run
 Before Job option per job. See below how to verify this.

   Write Bootstrap = /var/lib/bacula/BackupCatalog.bsr
   Priority = 11   # run after main backup
 }
 In bconsole, use the command show jobs=BackupCatalog. Search for the
 lines with the Run before/after Job commands.

 If you only see one each, you'll have to put the commands you need to
 execute into one script, and then reference that script.
 
 You make the point here, multiple runbefore and runafter directive are
 not supported on this version.
 
 I hope it is supported on 2.2.5, because all my other setup use this :)
 (i checked of course, it is supported).
 
 I've made the changes and see if the planned backup of tonight run fine.
 
 But one thing i don't understand is why i don't have anything about this
 job in my logs. And also the fact that manually running the job is
 working fine. Of course, the different runbefore and runafter scripts
 where not running, but the job was executed without issuing any errors.
 
 I'm wondering if there is not a problem with my Max Wait Time and Max
 Start Delay directive. But as far i undrestand thems, they should be good.
 


Ok, the RunBefore/RunAfter scripts are working fine now.

But I still have the exact same problem as before. The job shows as
canceled in bconsole, but there is absolutely nothing in the logs about it.

I have turned tracing on and set debug level 200 (maybe a little high?)
and I'll see if I can catch some information.


 By the way: If you posted the real password to the catalog above
 you'll want to change that soon :-)
 
 
 I've noticed

[Bacula-users] Backup catalog not running.

2007-11-13 Thread Cedric Devillers
Hello,

I have a little problem with one of our bacula installation.

Let me explain the setup first.

There are two servers: the first has the data and the storage daemon
(meia). The second is the director/DB server (lucita). There are also two
client-only servers (hr-accentv2, a Windows client, and darla).

All the jobs are running fine, except the catalog backup. The strange
thing here is that I have nothing in the logs about it. If I run it
manually, it is fine.

Director and storage version: 1.38.11 (can't upgrade right now).

I suppose I have messed something up with scheduling or concurrency, but
I can't find what.


The bconsole output i got is this :

   230  Diff 29,702  4,628,541,420 OK   12-nov-07 20:14 Daily-meia
   231  Full 30  2,229,039,740 OK   12-nov-07 22:10 HRBackup
   232  Full   1,690,498 36,178,232,833 OK   13-nov-07 02:25 DARLABackup
   233  Full  0  0 Cancel   13-nov-07 02:25
BackupCatalog






and the last log lines are :


12-nov 22:55 lucita-dir: RunBefore: Connecting to Director
lucita.int.ccib.be:9101
12-nov 22:55 lucita-dir: RunBefore: 1000 OK: lucita-dir Version: 1.38.11
(28 June 2006)
12-nov 22:55 lucita-dir: RunBefore: Enter a period to cancel a command.
12-nov 22:55 lucita-dir: RunBefore: mount storage=meia-sd
12-nov 22:55 lucita-dir: RunBefore: Using default Catalog name=MyCatalog
DB=bacula
12-nov 22:55 lucita-dir: RunBefore: 3001 Mounted Volume: Lundi
12-nov 22:55 lucita-dir: RunBefore: 3001 Device Ultrium (/dev/nst0) is
already mounted with Volume Lundi
12-nov 22:55 lucita-dir: RunBefore: You have messages.
12-nov 22:55 lucita-dir: Start Backup JobId 232,
Job=DARLABackup.2007-11-12_22.55.00
12-nov 22:53 meia-sd: Volume Lundi previously written, moving to end
of data.
12-nov 22:54 meia-sd: Ready to append to end of Volume Lundi at file=8.
13-nov 02:25 lucita-dir: Bacula 1.38.11 (28Jun06): 13-nov-2007 02:25:52
  JobId:  232
  Job:DARLABackup.2007-11-12_22.55.00
  Backup Level:   Full
  Client: darla-fd i486-pc-linux-gnu,debian,4.0
  FileSet:DARLA 2007-08-20 19:14:46
  Pool:   Lundi
  Storage:meia-sd
  Scheduled time: 12-nov-2007 22:55:00
  Start time: 12-nov-2007 22:55:10
  End time:   13-nov-2007 02:25:52
  Elapsed time:   3 hours 30 mins 42 secs
  Priority:   10
  FD Files Written:   1,690,498
  SD Files Written:   1,690,498
  FD Bytes Written:   36,178,232,833 (36.17 GB)
  SD Bytes Written:   36,405,455,971 (36.40 GB)
  Rate:   2861,7 KB/s
  Software Compression:   None
  Volume name(s): Lundi
  Volume Session Id:  9
  Volume Session Time:1194515418
  Last Volume Bytes:  43,358,284,951 (43.35 GB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Backup OK

13-nov 02:25 lucita-dir: Begin pruning Jobs.
13-nov 02:25 lucita-dir: No Jobs found to prune.
13-nov 02:25 lucita-dir: Begin pruning Files.
13-nov 02:25 lucita-dir: Pruned Files from 1 Jobs for client darla-fd
from catalog.
13-nov 02:25 lucita-dir: End auto prune.







Here is the relevant part of my config :

### Jobs definitions :

JobDefs {
  Name = DefaultJob
  Type = Backup
  Level = Incremental
  Client = lucita-fd
  FileSet = Full Set
  Schedule = WeeklyCycle
  Storage = meia-sd
  Messages = Standard
  Pool = Default
  Priority = 10
}

JobDefs {
  Name = Daily
  Type = Backup
  Level = Differential
  Client = meia-fd
  FileSet = Full Set
  Schedule = DailyCycle
  Storage = meia-sd
  Messages = Standard
  Pool = Default          # overridden by schedule config, but needed to start bacula
  Max Wait Time = 1 hours
  Max Start Delay = 4 hours
  RunBeforeJob = etc/bacula/before.sh
  Priority = 10
}

JobDefs {
  Name = Weekly
  Type = Backup
  Level = Full
  Client = meia-fd
  FileSet = Full Set
  Schedule = WeeklyCycle
  Storage = meia-sd
  Messages = Standard
  Pool = Default          # overridden by schedule config, but needed to start bacula
  Max Wait Time = 1 hours
  Max Start Delay = 4 hours
  RunBeforeJob = etc/bacula/before.sh
  Priority = 10
}



Job {
  Name = Daily-meia
  JobDefs = Daily
  Write Bootstrap = /var/bacula/incremental.bsr
}

Job {
  Name = Weekly-meia
  JobDefs = Weekly
  Write Bootstrap = /var/bacula/full.bsr
}

Job {
  Name = DARLABackup
  JobDefs = Weekly
  Client = darla-fd
  FileSet=DARLA
  Schedule = DARLACycle
  Max Wait Time = 1 hours
  Max Start Delay = 4 hours
  RunBeforeJob = /etc/bacula/before.sh
  Write Bootstrap = /var/bacula/darla.bsr
}


Job {
  Name = HRBackup
  Client = hr-accentv2-fd
  JobDefs = Daily
  Level = Full
  FileSet = HRSet
  Schedule = HRSchedule
  Max Wait Time = 1 hours
  Max Start Delay = 4 hours
  Write Bootstrap = /var/bacula/hraccent.bsr
  Priority = 11   # run after main backup
}

#
# Backup the catalog database 

Re: [Bacula-users] Backup catalog not running.

2007-11-13 Thread Cedric Devillers
Arno Lehmann wrote:
 Hi,

 13.11.2007 12:54, Cedric Devillers wrote:
 Hello,

 I have a little problem with one of our bacula installation.

 Let me explain the setup first.

 There is two server, the first has the data and the storage daemon
 (meia). The second is the director/DB server (lucita). There is alsa two
 client only servers (hr-accentv2 a windows client and darla).

 Ok. The catalog database is on lucita, right?


That's right.


 All the jobs are running fine, except the Catalog Backup. The strange
 thing here is that i have nothing in logs about it. If i run it
 manually, it is fine.

 Director ans storage version : 1.38.11 (can't upgrade right now).

 You should plan for that, though :-)


It is planned yes, i have to backport the packages :)


 I suppose i have messed something with scheduling or concurency, but i
 can't find what.

 I hope I can...

 ...
 Here is the relevant part of my config :

 [... JobDefs and other Job definitions snipped; quoted unchanged from the original post ...]

 #
 # Backup the catalog database (after the nightly save)
 Job {
   Name = BackupCatalog
   JobDefs = Weekly
   Level = Full
   FileSet=Catalog
   Client = lucita-fd

 Ok. This is looking right.

   Schedule = WeeklyCycleAfterBackup
   # This creates an ASCII copy of the catalog
   RunBeforeJob = /etc/bacula/scripts/make_catalog_backup bacula bacula
 Ya2AhGho
   RunBeforeJob = /etc/bacula/before.sh
   # This deletes the copy of the catalog
   RunAfterJob  = /etc/bacula/scripts/delete_catalog_backup
   RunAfterJob = /etc/bacula/after.sh
   RunAfterJob = ssh -i /etc/bacula/Bacula_key [EMAIL PROTECTED]

 I *believe* that 1.38 could only handle one Run After Job and Run
 Before Job option per job. See below how to verify this.

   Write Bootstrap = /var/lib/bacula/BackupCatalog.bsr
   Priority = 11   # run after main backup
 }

 In bconsole, use the command show jobs=BackupCatalog. Search for the
 lines with the Run before/after Job commands.

 If you only see one each, you'll have to put the commands you need to
 execute into one script, and then reference that script.

You are right: multiple RunBefore/RunAfter directives are not supported
in this version.

I hope this is supported in 2.2.5, because all my other setups use it :)
(I checked, of course; it is supported.)

I've made the changes and will see if tonight's scheduled backup runs fine.

But one thing I don't understand is why there is nothing about this job
in my logs, and also why running the job manually works fine. Of course,
the extra RunBefore and RunAfter scripts were not running, but the job
executed without issuing any errors.

I'm wondering if there is a problem with my Max Wait Time and Max
Start Delay directives, but as far as I understand them, they should be good.


 By the way: If you posted the real password to the catalog above
 you'll want to change that soon :-)


I've noticed that right after I compulsively clicked the send button :)

 Hope that helps,

It is helping a lot, thanks for your time.

 Arno
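For the archives, the single-script workaround Arno describes might look
like this. The path and contents are hypothetical, and the catalog
password argument is deliberately omitted:

```
#!/bin/sh
# Hypothetical /etc/bacula/scripts/catalog_before.sh: Bacula 1.38 honors
# only one RunBeforeJob per job, so chain the commands in one script and
# point the job's RunBeforeJob directive at this script instead.
set -e
/etc/bacula/scripts/make_catalog_backup bacula bacula
/etc/bacula/before.sh
```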






Re: [Bacula-users] dbcheck slowness

2007-09-19 Thread Cedric Devillers
Martin Simmons wrote:
 On Wed, 19 Sep 2007 11:54:37 +0200, Cousin Marc said:
 I think the problem is linked to the fact dbcheck works more or less row by 
 row.

 If I understand correctly, the problem is that you have duplicates in the 
 path 
 table as the error comes from 
 SELECT PathId FROM Path WHERE Path='%s' returning more than one row

 You could try this query, it would probably be much faster :

 delete from path
 where pathid not in (
  select min(pathid) from path
  where path in
  (select path from path group by path having count(*) > 1)
  group by path)
 and path in (
  select path from path group by path having count(*) > 1);

 I've just done it very quickly and haven't had time to doublecheck, so make 
 a 
 backup before if you want to try it... :)
 Or at least do it in a transaction so you can rollback if anything goes 
 wrong.
 
 Deleting from path like that could leave the catalog in a worse state than
 before, with dangling references in the File table.  The dbcheck routine
 updates the File table to replace references to deleted pathids.
 
 Moreover, if deleting duplicate pathids is slow (i.e. there are many of them),
 then the catalog could be badly corrupted, so I don't see how you can be sure
 that the File records are accurate.  It might be better to wipe the catalog
 and start again, or at least prune all of the file records before running
 dbcheck.
 
 __Martin

I think the approach Marc suggested may not be that bad in my case.

Taking a closer look at the duplicate paths shows that it is not a case
of conflicting PathIds, but of two rows with the same PathId.

Here is an example :

restorebacula=# SELECT PathId FROM Path WHERE
Path='/home/tbeverdam/Maildir/';
 pathid
--------
  12251
  12251
(2 rows)


So I suppose that deleting one of these entries should not leave the
catalog in a corrupted state (correct me if I'm wrong).

I'll try this and run some tests on my testing catalog.
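Since the duplicates are identical rows rather than distinct PathIds, one
way to drop them, sketched here and untested, is to rebuild the table
from a DISTINCT copy inside a transaction (TRUNCATE is transactional in
PostgreSQL, so this can still be rolled back if the counts look wrong):

```
-- Sketch only: collapse identical duplicate rows in Path.
BEGIN;
CREATE TEMP TABLE path_dedup AS SELECT DISTINCT * FROM path;
TRUNCATE path;                      -- needs no Bacula jobs running
INSERT INTO path SELECT * FROM path_dedup;
-- compare SELECT count(*) FROM path against expectations, then:
COMMIT;
```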




[Bacula-users] dbcheck slowness

2007-09-17 Thread Cedric Devillers
Hello,

I must run dbcheck on my Bacula database (PostgreSQL backend) because of
a "sql_create.c:767 More than one Path!" warning on every file backed up
(I don't know where it comes from, actually).

The operation is extremely slow: it ran for the whole weekend and only
deleted 2,000 of the 51,000 duplicate records.

I had already added the suggested indexes, but it made no difference.
Counting the duplicate records is not slow at all (one or two minutes);
only deleting is.

I know the topic has been discussed before, but I can't find any
solution in the archives. I must say I'm not a database expert (at all),
so the PostgreSQL setup is almost the default one. I had also read that
dbcheck is probably not optimized for big databases, but what else can I
use to get rid of those "More than one Path!" warnings?

Any help would be appreciated :)

Have a nice day.




Re: [Bacula-users] Translations of the 2.2.0 press release

2007-08-31 Thread Cedric Devillers
Michel Meyers wrote:
 Aitor wrote:
 Hi,
 In Spanish: Español, Catalán, Inglés
 In Catalan: Espanyol, Català, Anglés.
 
 In French: espagnol, catalan, anglais

Maybe you should add for French : espagnol (castillan), catalan, anglais  ?




