Re: [Bacula-users] Bat on Windows: Jobs Run pane is empty!

2010-09-29 Thread Damian
Niccolo Rigacci wrote:
> Hi, List!
>
> A new detail: the Jobs Run remains empty only if I run bat as the
> Administrator. Running it as another user, it displays correctly.
> So not a dll problem.
>
> Is it a bug or a feature?
>
> --
> Niccolo Rigacci
> Firenze - Italy

I've had the same problem...

Try deleting bat's settings.

Windows:
in regedit, delete the key
HKCU\Software\"director name"


Unix:

delete the config file
$HOME/.config/"director name"
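As a minimal sketch of the above (the director name "bacula-dir" is a placeholder; substitute the Name from your bat.conf):

```shell
# Remove bat's cached settings so it regenerates them on next start.
# "bacula-dir" is a hypothetical director name - use your own.
rm -f "$HOME/.config/bacula-dir"

# Windows equivalent (run in cmd.exe), also with a placeholder key name:
#   reg delete "HKCU\Software\bacula-dir" /f
```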


or

I've found out that changing the Director Name (to anything else) also works:

bat.conf

Director {
   Name = whatever
   DIRport = 9101
   address = bacula
   Password = "password"
}


-- 
=
Damian


--
Start uncovering the many advantages of virtual appliances
and start using them to simplify application deployment and
accelerate your shift to cloud computing.
http://p.sf.net/sfu/novell-sfdev2dev
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] LTO-2 Tape Drive stores only about 18-24 GB

2007-03-08 Thread Damian Lubosch
Hello everybody!

I searched in the list for similar problems with backups to tape.
What I want to do is migrate jobs to tape with a Tandberg LTO-2
SCSI drive.

First of all I use bacula 2.0.2 on NetBSD 3.0.1 on a standard PC.

At the beginning, I ran the btape program and it suggested adding these
lines to my bacula-sd.conf:

* Hardware End of Medium = No
* Fast Forward Space File = No
* BSF at EOM = yes

So the tape drive entry now looks like this:

Device {
  Name = TapeDev
  Device Type = Tape
  Media Type = LTO2
  Archive Device = /dev/rst0
  LabelMedia = yes;
  Random Access = No;
#  Block Positioning = no;
  AutomaticMount = yes;
  AlwaysOpen = yes;
  RemovableMedia = yes;
  Hardware End of Medium = No
  Fast Forward Space File = No
  BSF at EOM = yes
#  Requires Mount = yes;
}

(BTW do I have to use the semicolons? They seem optional.)


Anyway, when I run btape's test again, everything seems all right: no
more errors or warnings.

The fill command (tested with two tapes) also runs without any problems.
It writes about 190 GB to tape at ca. 19 MB/s.


Now, when I run a migration job it finds the appropriate JobIds and
starts migrating, but it stops after about 17-25 GB and requests a new
tape. Then it writes another 20 GB, and so on.

My backup jobs are first stored as volumes of 100 MB to 170 GB on two
500 GB hard drives, one volume file per job and machine.

My tapes are brand-new and the drive is new too, so it shouldn't be a
hardware problem.

Here is an excerpt of the most important config data concerning my
problem (if I left something out, please ask for it). Maybe you can
help me find the error?

# bacula-dir.conf:
#
# Backup Job to HD
JobDefs {
  Name = "JobDef-BSD"
  Type = Backup
  Level = Full
  FileSet = "rootdir_bsd_home"
#  Schedule = "Platte1_SchedBSD"
  Messages = Standard
  Pool = Platte1BSD
  Storage = "BackupStorageBSD"
  Priority = 10
}
Job {
  Name = "amelie-platte1"
  Client = amelie-fd
  JobDefs = "JobDef-BSD"
  Schedule = "Platte1_Amelie"
  Pool = Platte1-Montag
  Write Bootstrap = "/backup/spool/bacula/amelie.bsr"
}
# Backup Job to Tape
JobDefs {
  Name = "JobDef-BSD-Migrate"
  Type = Migrate
  Level = Full
  Client = hugo-fd # Client with Tape-Drive
  FileSet = "rootdir_bsd_home"
  Selection Type = Volume
  Storage = "BackupStorageBSD"   # Storage with Harddisk Backup Volumes
  Schedule = "Tape"
  Messages = Standard
  Maximum Concurrent Jobs = 4
  Priority = 10
}
Job {
  Name = "montag-tape"
  JobDefs = "JobDef-BSD-Migrate"
  Pool = Platte1-Montag
  Selection Type = Volume
  Selection Pattern = "Platte1-Montag"
}

Pool {
  Name = Platte1-Montag
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 7 days
  Label Format = "Platte1-Montag"
  NextPool = Tape-Montag
#  Storage = "BackupStorageBSD"  # Now in Backup-Jobs
  UseVolumeOnce = yes
}

#Migration Job on Tape
Pool {
  Name = Tape-Montag
  Pool Type = Backup
  Storage = "BackupStorageTape"
  Recycle = yes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 7 days
  Label Format = "Montag-"
  UseVolumeOnce = no
}
Storage {
  Name = BackupStorageTape
  Address = hugo.intra
  SDPort = 9103
  Password = "StPass"
  Device = TapeDev
  Media Type = LTO2
}


Another strange phenomenon I would like to ask about: what happens to a
job that has only been partially migrated (i.e. it stopped with an
error or was aborted by me)? As far as I can tell, Bacula won't try to
migrate it again. Is the job lost then?

Thank you very much for any hints solving my problem!

Have a nice day,
Damian




Re: [Bacula-users] Problems with VSS enabled windows backup

2007-03-08 Thread Damian Lubosch
Frank Altpeter wrote:
> Hi list,

Hi Frank!

> I was just hitting a little confusing problem in backing up a Windows
> 2003 Server with bacula (both client and server have version 2.0.2
> running).
>
> The Server is configured to backup C: and D:, the FileSet has "Enable
> VSS = yes" defined. The client has been installed with winbacula.exe
> like the other windows hosts I'm having. The VSS service is up and
> running on the client.
>
> This is what I'm getting as output from the backup job in my bconsole gui:
> [snip]
> Does anyone have an idea what I'm missing here? It's quite confusing to
> have a full backup with 0 bytes written...
Please post your configuration data. Maybe your FileSet is wrong.

Bye,
Damian




Re: [Bacula-users] LTO-2 Tape Drive stores only about 18-24 GB

2007-03-14 Thread Damian Lubosch
Damian Lubosch wrote:
> Hello everybody!
>
> I searched in the list for similar problems with backups to tape.
> What I want to do is migration of jobs to tape with a Tandberg LTO-2 
> SCSI drive.
>   
Ok, I think that I had thermal problems. Keep your Tape Drives cool ;-)

But now I've noticed another problem: during a migration job my drive
stops quite often. The migration job finished with this message:

14-Mar 13:57 hugo-dir: The following 1 JobIds will be migrated: 3
14-Mar 13:57 hugo-dir: Migration using JobId=3 Job=zoii.2007-03-13_21.21.09
14-Mar 13:57 hugo-dir: Bootstrap records written to 
/var/spool/bacula/hugo-dir.restore.1.bsr
14-Mar 13:57 hugo-dir: 
14-Mar 13:57 hugo-dir: The job will require the following
   Volume(s)             Storage(s)           SD Device(s)
   =======================================================
14-Mar 13:57 hugo-dir:
14-Mar 13:57 hugo-dir:    Platte1-Freitag0001    BackupStorageWin    BackupDev1
14-Mar 13:57 hugo-dir: 
14-Mar 13:57 hugo-dir: Start Migration JobId 10, 
Job=freitag-tape.2007-03-14_13.57.54
14-Mar 13:58 hugo-sd: Ready to read from volume "Platte1-Freitag0001" on device 
"BackupDev1" (/backup/bacula-backups).
14-Mar 13:58 hugo-sd: Wrote label to prelabeled Volume "Freitag" on device 
"TapeDev" (/dev/rst0)
14-Mar 13:58 hugo-sd: Forward spacing Volume "Platte1-Freitag0001" to 
file:block 0:209.
14-Mar 18:21 hugo-sd: End of Volume at file 39 on device "BackupDev1" 
(/backup/bacula-backups), Volume "Platte1-Freitag0001"
14-Mar 18:21 hugo-sd: End of all volumes.
14-Mar 18:21 hugo-dir: Bacula 2.0.2 (28Jan07): 14-Mar-2007 18:21:14
  Prev Backup JobId:  3
  New Backup JobId:   11
  Migration JobId:10
  Migration Job:  freitag-tape.2007-03-14_13.57.54
  Backup Level:   Full
  Client: hugo-fd
  FileSet:"rootdir_bsd_home" 2007-03-13 21:20:21
  Read Pool:  "Platte1-Freitag" (From Job resource)
  Read Storage:   "BackupStorageWin" (From user selection)
  Write Pool: "Tape-Freitag" (From Job Pool's NextPool resource)
  Write Storage:  "BackupStorageTape" (From Storage from Pool's 
NextPool resource)
  Start time: 14-Mar-2007 13:58:00
  End time:   14-Mar-2007 18:21:14
  Elapsed time:   4 hours 23 mins 14 secs
  Priority:   10
  SD Files Written:   2,159,801
  SD Bytes Written:   168,088,680,253 (168.0 GB)
  Rate:   10642.6 KB/s
  Volume name(s): Freitag
  Volume Session Id:  1
  Volume Session Time:1173877051
  Last Volume Bytes:  168,285,164,544 (168.2 GB)
  SD Errors:  0
  SD termination status:  OK
  Termination:Migration OK

Sometimes the drive ran for 10 minutes or so without stopping;
sometimes it stopped and rewound every ten seconds.

The transfer rate should also be 20 MB/s or more, since compression is
enabled, but because of the stopping it dropped to 10 MB/s.

I am using a standard PC with 2 recent WesternDigital IDE-ATA harddrives, my 
Tapedrive (Tandberg LTO2) is connected to a SCSI Controller, the computer has 
512 MB RAM, and a 2.8GHz Celeron CPU. During the migration no other jobs are 
running on the machine. 

Sometimes I noticed high database (PostgreSQL) activity. Could it have to do 
with the 2.1 million files for backup?

Do you experience similar behavior? I am afraid that this could shorten
my drive's lifespan dramatically.

Are there any settings to speed up the migration?
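For anyone searching later: for backup jobs, the usual cure for this stop-and-rewind ("shoe-shining") pattern is data spooling, so the SD writes the tape in large sequential bursts. I am not certain it applies to migration jobs in 2.0; the fragment below is only a sketch, and the spool path and size are made-up values:

```
# bacula-dir.conf (Job resource) - sketch
Job {
  ...
  Spool Data = yes
}

# bacula-sd.conf (Device resource) - sketch; path and size are placeholders
Device {
  ...
  Spool Directory = /var/spool/bacula
  Maximum Spool Size = 50g
}
```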

Thank you very much in advance,
Damian Lubosch




[Bacula-users] Tapedrive stops and rewinds too often

2007-03-16 Thread Damian Lubosch
Hello dear Bacula users!

I have a problem: during a migration job my drive stops
quite often. The migration job finished with that message:

14-Mar 13:57 hugo-dir: The following 1 JobIds will be migrated: 3
14-Mar 13:57 hugo-dir: Migration using JobId=3 Job=zoii.2007-03-13_21.21.09
14-Mar 13:57 hugo-dir: Bootstrap records written to
/var/spool/bacula/hugo-dir.restore.1.bsr
14-Mar 13:57 hugo-dir:
14-Mar 13:57 hugo-dir: The job will require the following
   Volume(s)             Storage(s)           SD Device(s)
   =======================================================
14-Mar 13:57 hugo-dir:
14-Mar 13:57 hugo-dir:    Platte1-Freitag0001    BackupStorageWin    BackupDev1
14-Mar 13:57 hugo-dir:
14-Mar 13:57 hugo-dir: Start Migration JobId 10,
Job=freitag-tape.2007-03-14_13.57.54
14-Mar 13:58 hugo-sd: Ready to read from volume "Platte1-Freitag0001" on
device "BackupDev1" (/backup/bacula-backups).
14-Mar 13:58 hugo-sd: Wrote label to prelabeled Volume "Freitag" on
device "TapeDev" (/dev/rst0)
14-Mar 13:58 hugo-sd: Forward spacing Volume "Platte1-Freitag0001" to
file:block 0:209.
14-Mar 18:21 hugo-sd: End of Volume at file 39 on device "BackupDev1"
(/backup/bacula-backups), Volume "Platte1-Freitag0001"
14-Mar 18:21 hugo-sd: End of all volumes.
14-Mar 18:21 hugo-dir: Bacula 2.0.2 (28Jan07): 14-Mar-2007 18:21:14
  Prev Backup JobId:  3
  New Backup JobId:   11
  Migration JobId:10
  Migration Job:  freitag-tape.2007-03-14_13.57.54
  Backup Level:   Full
  Client: hugo-fd
  FileSet:"rootdir_bsd_home" 2007-03-13 21:20:21
  Read Pool:  "Platte1-Freitag" (From Job resource)
  Read Storage:   "BackupStorageWin" (From user selection)
  Write Pool: "Tape-Freitag" (From Job Pool's NextPool resource)
  Write Storage:  "BackupStorageTape" (From Storage from Pool's
NextPool resource)
  Start time: 14-Mar-2007 13:58:00
  End time:   14-Mar-2007 18:21:14
  Elapsed time:   4 hours 23 mins 14 secs
  Priority:   10
  SD Files Written:   2,159,801
  SD Bytes Written:   168,088,680,253 (168.0 GB)
  Rate:   10642.6 KB/s
  Volume name(s): Freitag
  Volume Session Id:  1
  Volume Session Time:1173877051
  Last Volume Bytes:  168,285,164,544 (168.2 GB)
  SD Errors:  0
  SD termination status:  OK
  Termination:Migration OK

Sometimes the drive ran for 10 minutes or so without stopping;
sometimes it stopped and rewound every ten seconds.

The transfer rate should also be 20 MB/s or more, since compression is
enabled, but because of the stopping it dropped to 10 MB/s.

I ran several more Migration jobs with about 150GB and the same happened.

I am using a standard PC with 2 recent WesternDigital IDE-ATA
harddrives, my Tapedrive (Tandberg LTO2) is connected to a SCSI
controller, the computer has 512 MB RAM, and a 2.8GHz Celeron CPU, 1GBit
Intel connected to a 3com GBit switch. During the migration no other
jobs are running on the machine.

Sometimes I noticed high database (PostgreSQL) activity. The database is
for Bacula only. Could it have to do with the 2.1 million files for backup?

Do you experience similar behavior? I am afraid that this could shorten
my drive's lifespan enormously.

Are there any settings to speed up the tape drive or the migration?

Thank you very much in advance,
Damian Lubosch






[Bacula-users] High database load when migrating many small files

2007-04-23 Thread Damian Lubosch
Hello!

I am using Bacula 2.0.3 with MySQL 5 on Debian Etch, with a 2.8 GHz
Celeron, 1.5 GB RAM, an LTO-2 drive, and 2x 500 GB disks for backup
plus 1x 80 GB for system/DB.

I have a machine to back up with about 1-2 million small files (~1 KB
each). When I run a migration job over about 4 GB of such data,
performance drops: the tape rewinds very often and overall throughput
is about 3 MB/s. I found out (with top) that MySQL takes all the
processing power (together with bacula-sd) when migrating many files.
On the other hand, when migrating only a few large files, the backup
runs fine at 20-30 MB/s.

How can I improve the performance? Are there any tricks I overlooked?
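A commonly suggested check for this kind of catalog load, sketched here for MySQL (the index name is made up, and the JobId column is from the standard Bacula catalog schema; verify against your own schema before changing anything):

```sql
-- Show the indexes that already exist on the File table.
SHOW INDEX FROM File;

-- If JobId is not indexed, lookups during migration scan the whole table;
-- adding an index is the usual remedy (name is illustrative):
CREATE INDEX file_jobid_idx ON File (JobId);
```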

Thanks for any help,
Damian



[Bacula-users] After a manual recycle a 1 job limit for a volume

2006-05-09 Thread Damian Sobieralski
Hi all,

 I just found the list, but I have been using Bacula for about 6 months
now. Great backup software!! I am having a problem that is frustrating
me, and it seemed to begin after I upgraded.

 When I try to manually purge and recycle a volume I run into a
problem.  When I first label a tape I am able to back up several jobs
to it, until the tape becomes Used.  I then try to "purge" and then
"update" the volume status to Recycle (this worked before).  All seems
to go well. Then I run a manual job. After that one job completes, the
volume immediately goes into a "Used" state; it used to stay in an
"Append" state. This is very frustrating.  If I re-label the tape I am
able to save to it again.
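For reference, the manual sequence described above as bconsole commands (a sketch; the volume name is illustrative, and "update" prompts interactively for the new status):

```
*purge volume=Incr-02
*update volume=Incr-02
   (then pick "Volume Status" and set it to Recycle)
```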

 Any ideas?

FreeBSD 5.4
Bacula 1.38.9

- Damian





RE: [Bacula-users] After a manual recycle a 1 job limit for a volume

2006-05-09 Thread Damian Sobieralski
No go. The same problem. :(

MediaId | VolumeName | VolStatus | VolBytes      | VolFiles | VolRetention | Recycle | Slot | InChanger | MediaType | LastWritten
--------+------------+-----------+---------------+----------+--------------+---------+------+-----------+-----------+--------------------
      5 | Incr-02    | Used      | 4,304,146,440 |        5 |    1,123,200 |       1 |    0 |         1 | DDS-4     | 2006-05-09 16:20:03

 It runs ONE job and then toggles the VolStatus to "Used".

Relevant parts of bacula-dir configuration file:

Pool
{
  Name = "Daily Incremental Pool"
  Pool Type = Backup
#  Number Of Volumes = 3
  Maximum Volume Jobs = 12   # allow all 3 servers (3 jobs each night)
                             # to fit on a single volume for Tue-Fri (4*3 = 12)
  Volume Use Duration = 13 days
  Volume Retention = 13 days
  AutoPrune = yes
  Recycle = yes
  Recycle Oldest Volume = yes


}
Job
{
  Name = "Incremental Daily Job"
  Type = Backup
  Level = Incremental
  Client = srv1-fd
  Schedule = "Daily Incremental Schedule"
  Pool = "Daily Incremental Pool"
  Messages = Standard
  Storage = DLT
  Fileset = "srvr1 Full Set"


}
> -Original Message-
> From: Ryan Novosielski [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, May 09, 2006 3:07 PM
> To: Damian Sobieralski
> Cc: bacula-users@lists.sourceforge.net
> Subject: Re: [Bacula-users] After a manual recycle a 1 job limit for a
> volume
> 
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
> 
> Do an update pools from config and an update volumes from pool and see
> what changes. Everything will likely start acting the same way and you
> can work from there. You can do llist media too, and possibly llist
pool
> to check on the status of the configuration.
> 
>   _  _ _  _ ___  _  _  _
>  |Y#| |  | |\/| |  \ |\ |  | |Ryan Novosielski - User Support Spec.
III
>  |$&| |__| |  | |__/ | \| _| |[EMAIL PROTECTED] - 973/972.0922
(2-0922)
>  \__/ Univ. of Med. and Dent.|IST/AST - NJMS Medical Science Bldg -
C630
> 
> 
> Damian Sobieralski wrote:
> > Hi all,
> >
> >  I just found the list but have been using Bacula for about 6 months
> > now. Great backup software!! I am having a problem that is
frustrating
> > me. This seemed to begin after I upgraded.
> >
> >  When I try to manually purge and recycle a volume I am running into
a
> > problem.  When I first label a tape I am able to back up several
jobs to
> > it. Then the tape becomes used.  I then try to "purge" and then
"update"
> > the volume status to recycle (this worked before).  All seems to go
> > well. Then I run a manually job. After that 1 job completes the
volume
> > immediately goes into a "Used" state.  It used to sit in an "append"
> > state. This is very frustrating.  If I re-label the tape I am able
to
> > save to it again.
> >
> >  Any ideas?
> >
> > FreeBSD 5.4
> > Bacula 1.38.9
> >
> > - Damian
> >
> >
> >
> > ---
> > Using Tomcat but need to do more? Need to support web services,
> security?
> > Get stuff done quickly with pre-integrated technology to make your
job
> easier
> > Download IBM WebSphere Application Server v.1.0.1 based on Apache
> Geronimo
> > http://sel.as-us.falkag.net/sel?cmd=k&kid0709&bid&3057&dat1642
> > ___
> > Bacula-users mailing list
> > Bacula-users@lists.sourceforge.net
> > https://lists.sourceforge.net/lists/listinfo/bacula-users
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v1.4.2.2 (MingW32)
> 
> iD8DBQFEYRKKmb+gadEcsb4RAmWRAKCpjIjG3orYRfCy6SUPkFhc3uVp+QCfUX72
> cjs6L74YiXNz6C2KkmHQLV8=
> =Y924
> -END PGP SIGNATURE-





[Bacula-users] After a manual recycle a 1 job limit for a volume

2006-05-12 Thread Damian Sobieralski
Anyone have any advice on this?

---
No go. The same problem. :(

MediaId | VolumeName | VolStatus | VolBytes      | VolFiles | VolRetention | Recycle | Slot | InChanger | MediaType | LastWritten
--------+------------+-----------+---------------+----------+--------------+---------+------+-----------+-----------+--------------------
      5 | Incr-02    | Used      | 4,304,146,440 |        5 |    1,123,200 |       1 |    0 |         1 | DDS-4     | 2006-05-09 16:20:03

 It runs ONE job and then toggles the VolStatus to "Used".

Relevant parts of bacula-dir configuration file:

Pool
{
  Name = "Daily Incremental Pool"
  Pool Type = Backup
#  Number Of Volumes = 3
  Maximum Volume Jobs = 12   # allow all 3 servers (3 jobs each night)
                             # to fit on a single volume for Tue-Fri (4*3 = 12)
  Volume Use Duration = 13 days
  Volume Retention = 13 days
  AutoPrune = yes
  Recycle = yes
  Recycle Oldest Volume = yes


}
Job
{
  Name = "Incremental Daily Job"
  Type = Backup
  Level = Incremental
  Client = srv1-fd
  Schedule = "Daily Incremental Schedule"
  Pool = "Daily Incremental Pool"
  Messages = Standard
  Storage = DLT
  Fileset = "srvr1 Full Set"


}
> -Original Message-
> From: Ryan Novosielski [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, May 09, 2006 3:07 PM
> To: Damian Sobieralski
> Cc: bacula-users@lists.sourceforge.net
> Subject: Re: [Bacula-users] After a manual recycle a 1 job limit for a

> volume
> 
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
> 
> Do an update pools from config and an update volumes from pool and see

> what changes. Everything will likely start acting the same way and you

> can work from there. You can do llist media too, and possibly llist 
> pool to check on the status of the configuration.
> 
>   _  _ _  _ ___  _  _  _
>  |Y#| |  | |\/| |  \ |\ |  | |Ryan Novosielski - User Support Spec. 
> III  |$&| |__| |  | |__/ | \| _| |[EMAIL PROTECTED] - 973/972.0922 
> (2-0922)  \__/ Univ. of Med. and Dent.|IST/AST - NJMS Medical Science 
> Bldg - C630
> 
> 
> Damian Sobieralski wrote:
> > Hi all,
> >
> >  I just found the list but have been using Bacula for about 6 months

> > now. Great backup software!! I am having a problem that is 
> > frustrating me. This seemed to begin after I upgraded.
> >
> >  When I try to manually purge and recycle a volume I am running into

> > a problem.  When I first label a tape I am able to back up several 
> > jobs to it. Then the tape becomes used.  I then try to "purge" and
then "update"
> > the volume status to recycle (this worked before).  All seems to go 
> > well. Then I run a manually job. After that 1 job completes the 
> > volume immediately goes into a "Used" state.  It used to sit in an
"append"
> > state. This is very frustrating.  If I re-label the tape I am able 
> > to save to it again.
> >
> >  Any ideas?
> >
> > FreeBSD 5.4
> > Bacula 1.38.9
> >
> > - Damian
> >
> >
> >
> > ---
> > Using Tomcat but need to do more? Need to support web services,
> security?
> > Get stuff done quickly with pre-integrated technology to make your 
> > job
> easier
> > Download IBM WebSphere Application Server v.1.0.1 based on Apache
> Geronimo
> > http://sel.as-us.falkag.net/sel?cmd=k&kid0709&bid&3057&dat1642
> > ___
> > Bacula-users mailing list
> > Bacula-users@lists.sourceforge.net
> > https://lists.sourceforge.net/lists/listinfo/bacula-users
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v1.4.2.2 (MingW32)
> 
> iD8DBQFEYRKKmb+gadEcsb4RAmWRAKCpjIjG3orYRfCy6SUPkFhc3uVp+QCfUX72
> cjs6L74YiXNz6C2KkmHQLV8=
> =Y924
> -END PGP SIGNATURE-





Re: [Bacula-users] One tape, multiple volumes

2007-05-06 Thread Damian Lubosch
Ralf Gross wrote:
> Leonardo Batista schrieb:
>> i need to make backup every day with 1 week retention but using just 1 DLT
>> tape. What the better way to do this with a different volume to each backup?
>>
>> if a restore is necessary, how mount the tape with a prev. volume?
> 
> In bacula, if you are using tapes, one volume is equivalent to one
> tape. Maybe I missing something, but if you really have only one tape,
> it will get hard to do a reasonable backup.
> 
> 
> Ralf
> 

You can use one tape in one pool only. In that pool it is possible to
create multiple volumes for both full and diff backups. I think a
volume is something like a file on a tape, and you can certainly have
multiple files on a tape.

If a restore is necessary, Bacula will read the full-backup volume
(file) first and then wind the tape forward to the diff volume it needs.

Just try it out ;-)

Hope it helps,
Damian




[Bacula-users] Schedules and different pools

2007-10-05 Thread Damian Lubosch
Hello !

I need some advice concerning schedules and definitions of pools.

My idea is to do as follows:

I want to have 4 tapes for Monday-Thursday differential backups, and I
need 10 tapes for full backups on Fridays (2 tapes per weekend, up to 5
Fridays). I need four weeks of full backups (one per weekend) plus the
diff backups from each workday, kept for up to 4 days.

I want the differential backups to be relative only to the last
Friday's full, but I am stuck with the definition of the FullPool
statement. It becomes more complicated when I start to set "1st Monday"
etc., since it differs every month.

Maybe my thoughts are too complicated, or I missed an option field?


What I did so far is: (sorry for the line break)

Schedule {
  Name = "Cycle"
  Run = Level=Full Pool=Freitag1 1st Friday at 20:00 SpoolData=yes
  Run = Level=Full Pool=Freitag2 2nd Friday at 20:00 SpoolData=yes
  Run = Level=Full Pool=Freitag3 3rd Friday at 20:00 SpoolData=yes
  Run = Level=Full Pool=Freitag4 4th Friday at 20:00 SpoolData=yes
  Run = Level=Full Pool=Freitag5 5th Friday at 20:00 SpoolData=yes
  Run = Level=Differential Pool=Montag     FullPool=Freitag1 Monday at 20:00    SpoolData=yes
  Run = Level=Differential Pool=Dienstag   FullPool=Freitag1 Tuesday at 20:00   SpoolData=yes
  Run = Level=Differential Pool=Mittwoch   FullPool=Freitag1 Wednesday at 20:00 SpoolData=yes
  Run = Level=Differential Pool=Donnerstag FullPool=Freitag1 Thursday at 20:00  SpoolData=yes
}

I would really like to keep each day on its own tape, as the tapes have
to be kept outside the office.

I would appreciate any ideas :-)

Have a nice weekend,
Damian




[Bacula-users] Job hangs at end of run - Fatal error: job.c

2007-11-29 Thread Damian Brasher
Hi List

I am using Bacula 2.2.5 and have a problem where a job, the fourth out
of five, grinds to a halt right after the last file. The job never
reaches completion; it stalls. This job had been fine for months, but
after upgrading to Bacula 2.2.5 the problem started. The job sometimes
completes and sometimes does not, so the error is intermittent. I have
had new tapes in for a couple of days, so I can rule out worn tapes,
and the drive heads have been cleaned thoroughly. As mentioned, the
error only occurs on this job, the fourth out of six. The data transfer
speed slowly declines over a number of hours from about 11 MB/s to
300 KB/s, by which time the job should really have completed and the
final job started and finished. There are no cron jobs scheduled during
the backup period, nor any other obvious underlying system problems.
Restarting the Bacula services sometimes solves the problem temporarily
and sometimes does not. Network connections between the Bacula server
and client are stable and not through a firewall: a 100 Mb/s TCP LAN
with no other heavy network load during the job's time frame.

Here is the job message after I manually cancelled it, on a brand new
tape:

29-Nov 01:20 backup-dir JobId 122: Start Backup JobId 122, 
Job=holly.2007-11-28_23.05.18
29-Nov 01:20 backup-dir JobId 122: Using Device "LTO-2"
29-Nov 01:20 backup-sd JobId 122: Volume "Wednesday1" previously 
written, moving to end of data.
29-Nov 01:20 backup-sd JobId 122: Ready to append to end of Volume 
"Wednesday1" at file=88.
29-Nov 09:39 backup-sd JobId 122: Job write elapsed time = 08:18:42, 
Transfer rate = 384.8 K bytes/second
29-Nov 09:39 holly-fd: holly.2007-11-28_23.05.18 Fatal error: job.c:1594 
Comm error with SD. bad response to Append Data. ERR=Interrupted system call
29-Nov 09:39 backup-sd JobId 122: Job holly.2007-11-28_23.05.18 marked 
to be canceled.
29-Nov 09:39 backup-sd JobId 122: Job holly.2007-11-28_23.05.18 marked 
to be canceled.
29-Nov 09:39 backup-dir JobId 122: Bacula backup-dir 2.2.5 (09Oct07): 29-Nov-2007 09:39:42
Build OS: i686-pc-linux-gnu redhat Enterprise release
JobId: 122
Job: holly.2007-11-28_23.05.18
Backup Level: Full
Client: "holly" i686-pc-linux-gnu,redhat,9
FileSet: "holly" 2007-11-14 14:20:00
Pool: "Wednesday" (From Run pool override)
Storage: "LTO-2" (From Job resource)
Scheduled time: 28-Nov-2007 23:05:00
Start time: 29-Nov-2007 01:20:28
End time: 29-Nov-2007 09:39:42
Elapsed time: 8 hours 19 mins 14 secs
Priority: 7
FD Files Written: 47,403
SD Files Written: 0
FD Bytes Written: 11,509,487,341 (11.50 GB)
SD Bytes Written: 0 (0 B)
Rate: 384.2 KB/s
Software Compression: None
VSS: no
Encryption: no
Volume name(s): Wednesday1
Volume Session Id: 40
Volume Session Time: 1195660131
Last Volume Bytes: 96,794,449,920 (96.79 GB)
Non-fatal FD errors: 0
SD Errors:  0
FD termination status: Canceled
SD termination status: Error
Termination: Backup Canceled

I have not upgraded the client software; as I said, the other jobs have
caused no problems at all with the same client/server version combination.

Here is the director, job and pool definition:-

Director {
Name = backup-dir
DIRport = 9101
QueryFile = "/etc/bacula/query.sql"
WorkingDirectory = "/var/bacula/working"
PidDirectory = "/var/run"
Maximum Concurrent Jobs = 1
Password = "**"
Messages = Daemon
}

Job {
Name = "holly"
Type = Backup
Level = Full 
Client = holly
FileSet = "holly"
Storage = LTO-2
Pool = Default
RunBeforeJob = "/etc/bacula/scripts/runbefore.sh"
Write Bootstrap = "/var/lib/bacula/holly.bsr"
Schedule = "WeeklyCycle"
Messages = Standard
Priority = 7
Max Start Delay = 22h
Max Run Time = 40m
} 

Pool {
Name = Wednesday
Pool Type = Backup
Recycle = yes  
AutoPrune = yes
Volume Retention = 6 days
}

Any help will be gratefully received,

Damian

-- 
Damian Brasher
Systems Admin/Prog
OMII-UK Southampton
Southampton University




Re: [Bacula-users] Job hangs at end of run - Fatal error: job.c

2007-11-29 Thread Damian Brasher
Arno Lehmann wrote:
>>
>> Job {
>> Name = "holly"
>> Type = Backup
>> Level = Full 
>> Client = holly
>> FileSet = "holly"
>> Storage = LTO-2
>> Pool = Default
>> RunBeforeJob = "/etc/bacula/scripts/runbefore.sh"
>> 
>
> Just a guess, but could you could try redirecting stdout and stderr of 
> this script to /dev/null. With Run After Job scripts, file handles 
> kept open can sometimes cause such a behaviour.
>
> You could do the redirection in this script, like "exec >/dev/null" 
> and "exec 2>&1" right at the top of it.
>
> Arno
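Arno's suggested redirection would sit at the very top of the script; a 
minimal sketch, assuming a POSIX shell:

```shell
#!/bin/sh
# Redirect stdout to /dev/null, then point stderr at the same place,
# so the script leaves no open file handles for the daemon to wait on.
exec >/dev/null
exec 2>&1

# ... the actual pre-job work would go here; its output is discarded:
echo "starting pre-job tasks"
```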
The same directive:

RunBeforeJob = "/etc/bacula/scripts/runbefore.sh"

is present in all four previous jobs, and they do not hang the way this 
job does, so I'm not sure I will try this just yet...

Thanks
Damian

-- 
Damian Brasher
Systems Admin/Prog
OMII-UK
Southampton University




Re: [Bacula-users] Job hangs at end of run - Fatal error: job.c

2007-11-29 Thread Damian Brasher
Martin Simmons wrote:
> Is it still transferring any data or does it just stop dead?  If the data
> transfer rate you see is the average then it could be the latter.
>   
Looks like no data is transferred after the last file; however, the 
reported rate slowly decreases over the span of a few hours.
> If it stops dead then you need to find out what it is waiting for.  It might
> be an external resource or some kind of deadlock bug.
>
> What do status director, status storage and storage client report?
>   
Will let you know tomorrow...
> Also, you could attach gdb to each daemon and run the gdb command
>   
Can you explain in a little more detail how I would use gdb in this 
case? I use RHEL5, so I run the services with "#service 
bacula-dir/sd/fd restart" etc. Can I simply revert to the manual 
method...
> thread apply all bt
>
> to try to find out what all the threads are doing.
>   
Is this part of the gdb command?

Damian





Re: [Bacula-users] Job hangs at end of run - Fatal error: job.c

2007-11-29 Thread Damian Brasher
Damian Brasher wrote:
> Martin Simmons wrote:
>> Is it still transferring any data or does it just stop dead?  If the data
>> transfer rate you see is the average then it could be the latter.
>>   
> Looks like no data is transferred after the last file, however the 
> rate slowly reduces over
> the span of a few hours.
>> If it stops dead then you need to find out what it is waiting for.  
>> It might
>> be an external resource or some kind of deadlock bug.
>>
>> What do status director, status storage and storage client report?
>>   
> Will let you know tomorrow...

Actually, I will delay the job for an hour or two and try to get the gdb 
output and the dir/sd/fd status to you, if you are able to respond within 
the next couple of hours,

Many Thanks

Damian

-- 
Damian Brasher
Systems Admin/Prog
OMII-UK
Southampton University





Re: [Bacula-users] Job hangs at end of run - Fatal error: job.c

2007-11-29 Thread Damian Brasher
Martin Simmons wrote:

> Also, you could attach gdb to each daemon and run the gdb command
>
> thread apply all bt
>
>   

I have attached /sbin/bacula-dir, /sbin/bacula-fd and /sbin/bacula-sd to 
gdb, run the commands, and will now wait until the error condition repeats.

Will post the output of [(gdb) info file] and [(gdb) thread apply all bt] 
as soon as I have the error condition, as well as the dir/fd and sd status.

Cheers Damian

-- 
Damian Brasher 
Systems Admin/Prog
OMII-UK
Southampton University




Re: [Bacula-users] Job hangs at end of run - Fatal error: job.c

2007-11-30 Thread Damian Brasher
Martin Simmons wrote:

>
> I would do it by running gdb and then issuing the attach command with the pid
> of the bacula-dir/sd/fd that was started by service.
>
>   
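For reference, the attach sequence Martin describes would look roughly 
like this (a sketch; the pid placeholder and daemon name will differ per 
system):

```
# find the pid of the daemon started by the init script
pidof bacula-sd

# attach gdb to that pid
gdb -p <pid>

(gdb) thread apply all bt    # back-trace every thread
(gdb) detach                 # let the daemon continue running
(gdb) quit
```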
I'll let the system run as set out in my previous post; last night's 
run was without error...

--- previous post ---
Have attached /sbin/bacula-dir, /sbin/bacula-fd and /sbin/bacula-sd to 
gdb, run the commands and will now wait until the error condition 
repeats. Will post the output of [(gdb) info file] and [(gdb) thread 
apply all bt] as soon as I have the error condition, as well as the 
dir/fd and sd status.
--- previous post ---

I'll see what errors I can glean when the job fails; if that does not 
help, I can recompile with debug switches and attach gdb.

Damian

-- 
Damian Brasher
Systems Admin/Prog
OMII-UK ECS
Southampton University




Re: [Bacula-users] Recycle, full backup and tape append problem

2007-12-05 Thread Damian Brasher
Dan Langille wrote:
>
> Run the "update volume" command from bconsole. Set the number of files 
> to be 201.
>
> background: the Catalog thinks there are 200 files on the Volume. The 
> Volume actually has 201 files. You will be correcting this 
> inconsistency with the above command.
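Dan's correction would be applied from bconsole roughly as follows 
(illustrative only; the exact prompts and menu wording vary between 
Bacula versions):

```
*update volume=Tuesday1
Parameters to modify:
   ...
Select parameter to modify: Volume Files
New Volume Files: 201
```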
Thanks, this has started off last night's missed jobs; however, I get 
this message and still do not fully understand why the tape moves to the 
end of the last backup instead of starting from the beginning of the tape...

05-Dec 13:40 backup-dir JobId 151: Start Backup JobId 151, 
Job=holly.2007-12-05_13.40.11
05-Dec 13:40 backup-dir JobId 151: Using Device "LTO-2"
05-Dec 13:40 backup-sd JobId 151: Volume "Tuesday1" previously written, 
moving to end of data.
*
*
05-Dec 13:42 backup-sd JobId 151: Ready to append to end of Volume 
"Tuesday1" at file=201.

Damian

-- 
Damian Brasher
Systems Admin/Prog
OMII-UK ECS
Southampton University




Re: [Bacula-users] Recycle, full backup and tape append problem

2007-12-05 Thread Damian Brasher
Flak Magnet wrote:
> On Wednesday 05 December 2007 8:46:24 am Damian Brasher wrote:
>
>   
>> Thanks, this has started off last night missed jobs, however I have this
>> message and still do not fully understand why the tape moves to the end
>> of the last backup instead of starting from the beginning of the tape...
>> 
>
> I think it's because the volume is appendable, and bacula generally tries to 
> avoid purging volumes as long as it can avoid doing so.  That's a part of the 
> design philosophy even though it's counter-intuitive.  All of the retention 
> settings tell bacula when it MAY recycle volumes that have had all jobs 
> purged from them, not when it MUST.  By holding off on recycling volumes 
> bacula keeps your data in the volumes as long as possible, providing 
> more "fall-back positions" in case of "Oh $excrement" situations.
>   

:) The design philosophy makes a great deal of sense. I have used a 
Volume Use Duration directive to set tapes to 'used' status after use.

Many thanks, Damian

-- 
Damian Brasher
Systems Admin/Prog
OMII-UK ECS
Southampton University




[Bacula-users] Recycle, full backup and tape append problem

2007-12-05 Thread Damian Brasher
Hi List

After a 2.2.5 rebuild I have a scenario where I am recycling tapes using 
a weekly cycle, with a pool for each day and more than one tape in the 
weekend pool. The problem I'm experiencing is that even with Volume 
Retention = 6 days in the pool definition my tapes are filling up; the 
tapes are always in status Append after a job finishes. I have set the 
jobs to run a FULL backup in all cases for the moment, until I have this 
configuration working. This is my first full build, as I have only been 
administering bacula until now. I may have inherited some configuration 
errors; I have also been through the manual in some detail.

* Should the tapes not be set to recycle and if so how do I ensure this?
* Why, even with the recycle option set, do the tapes try to append to 
the previous run rather than start fresh, at the beginning of the tape, 
each time the tape is used?
* Why does bacula want to relabel a tape when I already have a mounted 
labelled tape in the drive? **
* I feel I have missed a crucial point regarding full backups, 
recycling, retention periods and how this relates to the catalogue.

**see error below

Here is an extract from my bacula-dir.conf to help explain (the FileSet 
works, so I have not included its definition)...

##
Director {
Name = backup-dir
DIRport = 9101
QueryFile = "/etc/bacula/query.sql"
WorkingDirectory = "/var/bacula/working"
PidDirectory = "/var/run"
Maximum Concurrent Jobs = 1
Password = "**"
Messages = Daemon
}
##
Job {
Name = "holly"
Type = Backup
Level = Full
Client = holly
FileSet = "holly"
Storage = LTO-2
Pool = Default
RunBeforeJob = "/etc/bacula/scripts/runbefore.sh"
Write Bootstrap = "/var/lib/bacula/holly.bsr"
Schedule = "WeeklyCycle"
Messages = Standard
Priority = 7
Max Start Delay = 22h
Max Run Time = 40m
}
##
Pool {
Name = Wednesday
Pool Type = Backup
Recycle = yes 
AutoPrune = yes   
Volume Retention = 6 days
}

I have another issue on this list which is still unresolved, as I am 
waiting for gdb to pick up a re-occurrence of that separate error. That 
error has caused the problem above to be exacerbated: bacula 
occasionally did not finish the last file of a job, which has caused 
bacula not to accept a tape for append, because the last run of a job 
mismatches the catalogue record, i.e. 201 files recorded in the 
catalogue and 200 on the tape.

I have this error message:-
-

05-Dec 10:13 backup-sd JobId 150: Error: Bacula cannot write on tape Volume
"Tuesday1" because:
The number of files mismatch! Volume=201 Catalog=200
05-Dec 10:13 backup-sd JobId 150: Marking Volume "Tuesday1" in Error in
Catalog.
05-Dec 10:13 backup-sd JobId 150: Job pleiades.2007-12-05_10.12.07 waiting.
Cannot find any appendable volumes.
Please use the "label"  command to create a new Volume for:
Storage:  "LTO-2" (/dev/nst0)
Pool: Tuesday
Media type:   LTO-2
--

I really need to fix this new issue before I continue with the old one, 
as I don't mind if the job starts over on an existing tape in the pool 
with a full backup from the beginning of the tape.

Any help will be gratefully received,

Damian Brasher

-- 
Damian Brasher
Systems Admin/Prog
OMII-UK ECS
Southampton University




Re: [Bacula-users] Recycle, full backup and tape append problem

2007-12-06 Thread Damian Brasher
Damian Brasher wrote:
> Flak Magnet wrote:
>   
>> On Wednesday 05 December 2007 8:46:24 am Damian Brasher wrote:
>>
>>   
>> 
>>> Thanks, this has started off last night missed jobs, however I have this
>>> message and still do not fully understand why the tape moves to the end
>>> of the last backup instead of starting from the beginning of the tape...
>>> 
>>>   
>> I think it's because the volume is appendable, and bacula generally tries to 
>> avoid purging volumes as long as it can avoid doing so.  That's a part of 
>> the 
>> design philosophy even though it's counter-intuitive.  All of the retention 
>> settings tell bacula when it MAY recycle volumes that have had all jobs 
>> purged from them, not when it MUST.  By holding off on recycling volumes 
>> bacula keeps your data in the volumes as long as possible, providing 
>> more "fall-back positions" in case of "Oh $excrement" situations.
>>   
>> 
>
> :) The design philosphy makes a great deal of sense. Have used a Use 
> Duration directive to set tapes
> to 'used' status after use.
>
>   
I had a failure when setting the tape status to Used. If I were to use 
Maximum Volume Jobs I believe I would get the same error, as the manual 
states the tape can then no longer be used for appending data (like 
setting UseVolumeOnce = yes and the status Full; the 'Recycle' and 
'Append' statuses do allow the tape to be recycled). I have used the 
pool below and the jobs have started:

Pool {
  Name = Thursday
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Use Duration = 22h  
  Volume Retention = 5 days  
}

I think setting Volume Use Duration is safer in this case; there is only 
one tape in the pool, and this leaves the tape ready to be recycled. The 
directive marks the tape status as Used, but the catalogue is updated 
_only_ when the next job that uses the tape runs. So for the rest of the 
week the catalogue status of the volume remains 'Append' and the job is 
able to start next week; as the job starts, bacula changes the status to 
'Used', so the tape is written from the start rather than from the end 
of the previous job/run, and the cycle continues - that is my 
interpretation.

Volume Retention ensures that the records in the catalogue are pruned 
next week, so I have an accurate record of the files available - I think...

Damian

-- 
Damian Brasher 
Systems Admin/Prog
OMII-UK ECS
Southampton University




Re: [Bacula-users] Job hangs at end of run - Fatal error: job.c

2007-12-07 Thread Damian Brasher
n (argc=, argv=0x0) at 
stored.c:265
#0  0x00694402 in __kernel_vsyscall ()

3) bacula-fd

(gdb) thread apply all bt

Thread 4 (Thread -1211143280 (LWP 7725)):
#0  0x00d7f402 in __kernel_vsyscall ()
#1  0x003e64dc in pthread_cond_timedwait@@GLIBC_2.3.2 () from 
/lib/libpthread.so.0
#2  0x0807c18f in watchdog_thread (arg=0x0) at watchdog.c:307
#3  0x003e245b in start_thread () from /lib/libpthread.so.0
#4  0x0033a24e in clone () from /lib/libc.so.6

Thread 3 (Thread -1232122992 (LWP 7728)):
#0  0x00d7f402 in __kernel_vsyscall ()
#1  0x003e8e1b in read () from /lib/libpthread.so.0
#2  0x080633ad in read_nbytes (bsock=0x958bff0, ptr=0xb68f42f8 
"��X\t�6\b\b\001", nbytes=4)
 at bnet.c:82
#3  0x08065b76 in BSOCK::recv (this=0x958bff0) at bsock.c:381
#4  0x0805285b in handle_client_request (dirp=0x958bff0) at job.c:229
#5  0x0807c7fc in workq_server (arg=0x808d340) at workq.c:357
#6  0x003e245b in start_thread () from /lib/libpthread.so.0
#7  0x0033a24e in clone () from /lib/libc.so.6

Thread 2 (Thread -1221633136 (LWP 7941)):
#0  0x00d7f402 in __kernel_vsyscall ()
#1  0x003e8e1b in read () from /lib/libpthread.so.0
#2  0x080633ad in read_nbytes (bsock=0x958af88, ptr=0xb72f52f8 
"��X\t�6\b\b\001", nbytes=4)
 at bnet.c:82
#3  0x08065b76 in BSOCK::recv (this=0x958af88) at bsock.c:381
#4  0x0805285b in handle_client_request (dirp=0x958af88) at job.c:229
#5  0x0807c7fc in workq_server (arg=0x808d340) at workq.c:357
#6  0x003e245b in start_thread () from /lib/libpthread.so.0
#7  0x0033a24e in clone () from /lib/libc.so.6

Thread 1 (Thread -1209042720 (LWP 7724)):
#0  0x00d7f402 in __kernel_vsyscall ()
#1  0x00333051 in select () from /lib/libc.so.6
#2  0x0806428f in bnet_thread_server (addrs=0x958a4c8, max_clients=20, 
client_wq=0x808d340,
 handle_client_request=0x80526e0 ) at 
bnet_server.c:161
#3  0x0804b413 in main (argc=0, argv=0x0) at filed.c:227
#0  0x00d7f402 in __kernel_vsyscall ()


Can anyone shed some light?

TIA Damian

-- 
Damian Brasher
Systems Admin/Prog
OMII-UK
Southampton University





Re: [Bacula-users] Job hangs at end of run - Fatal error: job.c

2007-12-08 Thread Damian Brasher
.c:265
#0  0x00694402 in __kernel_vsyscall ()

3) bacula-fd

(gdb) thread apply all bt

Thread 4 (Thread -1211143280 (LWP 7725)):
#0  0x00d7f402 in __kernel_vsyscall ()
#1  0x003e64dc in pthread_cond_timedwait@@GLIBC_2.3.2 () from
/lib/libpthread.so.0
#2  0x0807c18f in watchdog_thread (arg=0x0) at watchdog.c:307
#3  0x003e245b in start_thread () from /lib/libpthread.so.0
#4  0x0033a24e in clone () from /lib/libc.so.6

Thread 3 (Thread -1232122992 (LWP 7728)):
#0  0x00d7f402 in __kernel_vsyscall ()
#1  0x003e8e1b in read () from /lib/libpthread.so.0
#2  0x080633ad in read_nbytes (bsock=0x958bff0, ptr=0xb68f42f8
"��X\t�6\b\b\001", nbytes=4)
 at bnet.c:82
#3  0x08065b76 in BSOCK::recv (this=0x958bff0) at bsock.c:381
#4  0x0805285b in handle_client_request (dirp=0x958bff0) at job.c:229
#5  0x0807c7fc in workq_server (arg=0x808d340) at workq.c:357
#6  0x003e245b in start_thread () from /lib/libpthread.so.0
#7  0x0033a24e in clone () from /lib/libc.so.6

Thread 2 (Thread -1221633136 (LWP 7941)):
#0  0x00d7f402 in __kernel_vsyscall ()
#1  0x003e8e1b in read () from /lib/libpthread.so.0
#2  0x080633ad in read_nbytes (bsock=0x958af88, ptr=0xb72f52f8
"��X\t�6\b\b\001", nbytes=4)
 at bnet.c:82
#3  0x08065b76 in BSOCK::recv (this=0x958af88) at bsock.c:381
#4  0x0805285b in handle_client_request (dirp=0x958af88) at job.c:229
#5  0x0807c7fc in workq_server (arg=0x808d340) at workq.c:357
#6  0x003e245b in start_thread () from /lib/libpthread.so.0
#7  0x0033a24e in clone () from /lib/libc.so.6

Thread 1 (Thread -1209042720 (LWP 7724)):
#0  0x00d7f402 in __kernel_vsyscall ()
#1  0x00333051 in select () from /lib/libc.so.6
#2  0x0806428f in bnet_thread_server (addrs=0x958a4c8, max_clients=20,
client_wq=0x808d340,
 handle_client_request=0x80526e0 ) at
bnet_server.c:161
#3  0x0804b413 in main (argc=0, argv=0x0) at filed.c:227
#0  0x00d7f402 in __kernel_vsyscall ()


Can anyone shed some light?

TIA Damian

-- 
Damian Brasher
Systems Admin/Prog
OMII-UK
Southampton University






Re: [Bacula-users] BACULA/VMWARE crashes entire system [RESURRECTED]

2007-12-09 Thread Damian Lubosch
Chris Howells wrote:
> Scott Ruckh wrote:
>
>   
>> I am now running bacula 2.2.6 built from source RPMs.  Now I had a crash
>> with no VMware running.  I did not even have an Xsession running.  This is
>> two times in two weeks where the system crashes while bacula is running.
>>
>> The crash completely shuts the machine off.  It is not just in a hung state.
>> 
>
> Sounds like broken hardware. Start by running memtest86.
>
>   
Or maybe some of your kernel drivers are incompatible with your 
hardware. Try another kernel? Sometimes faulty I/O can cause a system halt.



Re: [Bacula-users] Job hangs at end of run - Fatal error: job.c

2007-12-11 Thread Damian Brasher
Martin Simmons wrote:

>>>>> On Fri, 07 Dec 2007 10:02:58 +, Damian Brasher said:
>>>>>   
> > 
> > Martin Simmons wrote:
> > 
>   
>> > > Also, you could attach gdb to each daemon and run the gdb command
>> > >
>> > > thread apply all bt
>> > >
>> 
> > 
> > 11th Nov 07---
> > Have attached /sbin/bacual-dir /sbin/bacula-fd and /sbin/bacula-sd to
> > gdb, run the commands
> > and will now wait until the error condition repeats.
> > 
> > Will post the output to [(gdb)info file] and [(gdb)thread apply all bt]
> > as soon as I have the error condition
> > as well as the dir/fd and sd status.
> > --
> > 
> > The error has occurred again. I decided to start bacula with the init 
> > scripts and attached gdb to the running process.
> > 
> > All the details descibing this problem are at the beginning of the thread.
> > 
> > As the job halted this was the output from the bconsole, about 200 lines 
> > of roughly the same as below:-
> > 
> > ...Orphaned buffer:  backup-dir  8 bytes buf=9e1f010 allocated at 
> > workq.c:167
> > Orphaned buffer:  backup-dir 16 bytes buf=9e1eee0 allocated at jcr.c:247
> > Orphaned buffer:  backup-dir528 bytes buf=9e1f038 allocated at jcr.c:255
> > Orphaned buffer:  backup-dir528 bytes buf=9e23ab8 allocated at job.c:953
> > Orphaned buffer:  backup-dir528 bytes buf=9e23ce8 allocated at 
> > job.c:1130
> > Orphaned buffer:  backup-dir  6 bytes buf=9e23f50 allocated at 
> > ua_server.c:105
> > Orphaned buffer:  backup-dir316 bytes buf=9e23f78 allocated at 
> > ua_server.c:192
> > Orphaned buffer:  backup-dir804 bytes buf=9e24338 allocated at 
> > bsock.c:429
> > Orphaned buffer:  backup-dir707 bytes buf=9e24c40 allocated at 
> > mem_pool.c:198
> > Orphaned buffer:  backup-dir707 bytes buf=9e24680 allocated at 
> > mem_pool.c:198
> > Orphaned buffer:  backup-dir 24 bytes buf=9e1f268 allocated at 
> > job.c:1153
> > Orphaned buffer:  backup-dir 40 bytes buf=9e1f2a0 allocated at 
> > alist.c:53...
>   

>That is very unexpected.  The only time I've seen 'Orphaned buffer' messages 
>is after killing the Director.  Could that have happened?  Were there any 
>other messages in the log when the job halted?

The install was compiled with --smartalloc. No, the director was not killed. 
There was nothing else in the logs.

> > 
> >  From command: status all the only unusual output is:
> > 
> > Running Jobs:
> > JobId 166 Job holly.2007-12-06_23.25.09 is running.
> >  Backup Job started: 07-Dec-07 01:41
> >  Files=50,030 Bytes=12,066,088,479 Bytes/sec=407,211
> >  Files Examined=66,825
> >  Processing file: /etc/httpd/conf/httpd.conf
> >  SDReadSeqNo=6 fd=7
> > Director connected at: 07-Dec-07 09:55
> > 
> > The sd status is:
> > 
> > backup-sd Version: 2.2.5 (09 October 2007) i686-pc-linux-gnu redhat 
> > Enterprise release
> > Daemon started 06-Dec-07 11:42, 4 Jobs run since started.
> >   Heap: heap=217,088 smbytes=160,745 max_bytes=161,943 bufs=124 max_bufs=133
> > Sizes: boffset_t=8 size_t=4 int32_t=4 int64_t=8
> > 
> > Running Jobs:
> > Writing: Full Backup job holly JobId=166 Volume="Thursday1"
> >  pool="Thursday" device="LTO-2" (/dev/nst0)
> >  spooling=0 despooling=0 despool_wait=0
> >  Files=50,030 Bytes=12,073,480,104 Bytes/sec=404,607
> >  FDReadSeqNo=646,525 in_msg=552497 out_msg=6 fd=8
>   

>What did the client status show?

I need to capture the error again.

>From the SD backtraces, it looks like the SD is waiting for the FD to confirm 
>that the job has finished.

ok

>Was the gdb attached to the FD while the job was running or did you attach it 
>after it started hanging?  If the latter, are you 100% sure that the bacula-fd 
>process was not restarted somehow?

I am 100% sure the process was not restarted. I attached gdb after the error.

>Do netstat or lsof show any socket connections between the SD and the FD when 
>the job has reached this hanging point?

Will wait for next hang.
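To check for lingering sockets as Martin suggests, something like this 
on the SD host should show them (a sketch; 9102/9103 are the default 
FD/SD ports used in the configs in this thread):

```
# sockets held by the storage daemon process
lsof -nP -i TCP -a -c bacula-sd

# or, any established connection on the Bacula ports
netstat -tnp | grep -E ':(9102|9103)'
```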

>I think you might have to run the SD (and possibly the FD) at debug level 200 
>to collect info about what happens at the end of the job.

Ok, I upgraded to 2.2.6 yesterday, as I really need to be up and 
running. I will send in another bug report with the extra information 
you have requested if the system hangs again.
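Running a daemon in the foreground at debug level 200, as Martin 
suggests, would look something like this (a sketch; binary and config 
paths are the defaults used elsewhere in this thread):

```
# stop the service copy, then run the SD in the foreground with debug output
service bacula-sd stop
/sbin/bacula-sd -f -d 200 -c /etc/bacula/bacula-sd.conf
```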

thanks so far,

Damian


-- 
Damian Brasher
Systems Admin/Prog
OMII-UK ECS
Southampton University




Re: [Bacula-users] Job hangs at end of run - Fatal error: job.c

2007-12-12 Thread Damian Brasher
Martin Simmons wrote:

>>>>> On Fri, 07 Dec 2007 10:02:58 +, Damian Brasher said:

>I think you might have to run the SD (and possibly the FD) at debug level 200 
>to collect info about what happens at the end of the job.

 >Ok, I have upgraded upgraded to 2.2.6 yesterday as I really need to be 
up and running, I will send in another bug report with the extra 
information you have requested if the system hangs again.

Our tape drive has just given up the ghost: it no longer accepts tapes, 
and the orange error LED flashes after a short whine on power-on. It 
looks like the fault was hardware related after all :-/

The drive has just crept past the three-year lifespan mark; replacement 
time.

Thanks for the debugging support, it has been very helpful.

Damian


-- 
Damian Brasher
Systems Admin/Prog
OMII-UK ECS
Southampton University





[Bacula-users] Missed tape previous day requested

2008-01-22 Thread Damian Brasher
Hi List

When a tape is missed for one of our weekday sets, the next evening the 
previous day's missed tape is still requested, even though the job 
should have expired. I have used Volume Use Duration = 22h; see below.

Here is an extract of my bacula-dir.conf

Director { # define myself
  Name = backup-dir
  DIRport = 9101 # where we listen for UA connections
  QueryFile = "/etc/bacula/query.sql"
  WorkingDirectory = "/var/bacula/working"
  PidDirectory = "/var/run"
  Maximum Concurrent Jobs = 1
  Password = "" # Console password
  Messages = Daemon
}

Job {
  Name= "webserver"
  Type= Backup
  Level   = Full  # default
  Client  = webserver
  FileSet = "webserver"
  Storage = LTO-2 # Set here instead of pool defs
  Pool= Default
  RunBeforeJob= "/etc/bacula/scripts/runbefore.sh"
  Write Bootstrap = "/var/lib/bacula/webserver.bsr"
  Schedule= "WeeklyCycle"
  Messages= Standard
  Priority= 7
  Max Start Delay = 22h # Time to cancel job after scheduled start time
  Max Run Time = 1h 30m # Max length of time for a job to run
}

FileSet {
  Name = "webserver"
  Include {
  Options { signature = MD5 }
  File = /var/www
  File = /var/lib/mysql
  }
  Exclude {
File = /var/www/test
  }
}

Schedule {
  Name = "WeeklyCycle"
  Run = Level=Full Pool=Monday Monday at 23:05
  Run = Level=Full Pool=Tuesday Tuesday at 23:05
  Run = Level=Full Pool=Wednesday Wednesday at 23:05
  Run = Level=Full Pool=Thursday Thursday at 23:05
  #Run = Level=Full Pool=Thursday Thursday at 18:28
  Run = Level=Full Pool=Weekend Friday at 23:05
}

Client {
  Name = webserver
  Address = webserver.omii.ac.uk
  FDPort = 9102
  Catalog = MyCatalog
  Password = "" # password for FileDaemon
  File Retention = 30 days# 30 days
  Job Retention = 6 months# six months
  AutoPrune = yes # Prune expired Jobs/Files
} 

Storage {
  Name = LTO-2
# Do not use "localhost" here   
  Address =  backup.host.ac.uk # N.B. Use a fully qualified name here
  SDPort = 9103
  Password = ""
  Device = LTO-2
  Media Type = LTO-2
}

Pool {
  Name = Monday
  Pool Type = Backup
  Recycle = yes   # Bacula can automatically recycle 
Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Use Duration = 22h   # Max time to use volume from 1st 
write before 'used'
  Volume Retention = 5 days   #
}
-- end of extract --

I tested this at the end of last week: I missed Thursday's tape and did 
not restart the bacula daemons, then loaded the next day's tape. Bacula 
could not write to the tape and gave this error 
message:

Device status:
Device "FileStorage" (/tmp) is not open.
Device "LTO-2" (/dev/nst0) is not open.
Device is BLOCKED waiting for mount of volume "Thursday1",
Pool: Thursday
Media type: LTO-2


In Use Volume status:
Thursday1 on device "LTO-2" (/dev/nst0)
Reader=0 writers=0 reserved=1

Can anyone advise?

-- 
Damian Brasher
Systems Admin/Prog
OMII-UK ECS
Southampton University




Re: [Bacula-users] Missed tape previous day requested

2008-01-23 Thread Damian Brasher
Ryan Novosielski wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> Damian Brasher wrote:
>   
>> Ryan Novosielski wrote:
>> 
>>> -BEGIN PGP SIGNED MESSAGE-
>>> Hash: SHA1
>>>
>>> Damian Brasher wrote:
>>>  
>>>   
>>>> Hi List
>>>>
>>>> When a tape is missed for one of our weekday sets the next evening
>>>> the previous days, missed, tape is still requested even though the
>>>> job should have expired. I have used Volume Use Duration = 22h see
>>>> below.
>>>> 
>>>> 
>>> The clock starts ticking on the first tape write.
>>>
>>>   
>>>   
>> Job {Max Start Delay = 22h} should prevent manual intervention then?
>> Damian
>> 
>
> Keep replies on the list -- you'll get more assistance that way.
>   

Sure - seems like my reply list was incorrect this morning - will check.

> That is possible. Seems to me what's actually goofing you up here is
> that the backup is hung waiting for a tape, since it wasn't there when
> the backup started. The next backup will then wait for that one. A
> cancellation should allow the next backup to start anew. It will
> theoretically want the wrong tape, but as long as your tape is ABLE to
> be written, AFAIK, it will be written to when it is presented with that
> tape.
>
>   
A cancellation cleared the job queue. It looks as though some intervention 
is unavoidable when a tape is missed; I have been working through this 
problem for a while now. However, some intervention is required anyway 
(the tape needs to be changed!), so cancelling a job is not a huge amount 
of extra work. It would be nice if inserting the next tape was all an 
untrained user needed to do.
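As an aside, the "clock starts on the first write" behaviour of Volume Use
Duration discussed above can be sketched like this (illustrative Python only,
not how Bacula implements it):

```python
from datetime import datetime, timedelta

def volume_marked_used(first_write, use_duration_hours=22, now=None):
    """Illustrative: a volume becomes 'Used' once Volume Use Duration
    has elapsed since its FIRST write, regardless of later activity."""
    if now is None:
        now = datetime.now()
    return now - first_write >= timedelta(hours=use_duration_hours)
```

So a volume first written on Monday evening is already marked Used by
Tuesday evening, well before a later job could append to it.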

Damian

-- 
Damian Brasher
Systems Admin/Prog
OMII-UK ECS
Southampton University




[Bacula-users] Tape status mismatches cause job fails

2008-02-28 Thread Damian Brasher
Hi List

I have had some fairly regular job failures that I have pinpointed to 
tape status mismatches: the tape status does not match that of the 
catalogue, in a number of ways.

The first occurs when the catalogue reports the volume status as Error, 
Used or Full and I manually change the status of the volume using 
*update -> Volume parameters -> Volume status -> Pool -> PoolName -> Append

How can I be sure that the volume/tape has been updated as well as the 
catalogue?

How can I read the status of a volume/tape without referring to the 
catalogue, to see the true tape status?

The second mismatch is the catalogue reporting a different file count 
than the volume actually contains. This is a typical error:

27-Feb 23:05 backup-sd JobId 514: Error: Bacula cannot write on tape 
Volume "Wednesday1" because:
The number of files mismatch! Volume=103 Catalog=0

How can I avoid an error like this halting a job?

The third mismatch is reported in the storage daemon's status after user 
intervention. Often when I manually unmount a volume, the storage daemon 
reports that the device is BLOCKED due to user intervention. I'm not 100% 
sure, but this error then causes a job to fail, even when I use a script 
to automatically unmount and then mount just before the job starts.

Is there a way to eliminate old BLOCKED messages so they do not prevent 
a legitimate job?

Here is a cut down copy of my bacula-dir.conf

Director {
Name = backup-dir
DIRport = 9101
QueryFile = "/etc/bacula/query.sql"
WorkingDirectory = "/var/bacula/working"
PidDirectory = "/var/run"
Maximum Concurrent Jobs = 1
Password = "**"
Messages = Daemon
}

Job {
Name = "holly"
Type = Backup
Level = Full
Client = holly
FileSet = "holly"
Storage = LTO-2
Pool = Default
RunBeforeJob = "/etc/bacula/scripts/runbefore.sh"
Write Bootstrap = "/var/lib/bacula/holly.bsr"
Schedule = "WeeklyCycle"
Messages = Standard
Priority = 7
Max Start Delay = 22h
Max Run Time = 30m
}

FileSet {
Name = "holly"
Include {
Options { signature = MD5 }
File = /export/home/dir
}
Exclude {
}
}

Client {
Name = holly
Address = holly
FDPort = 9102
Catalog = MyCatalog
Password = "*" # password for FileDaemon
File Retention = 30 days    # 30 days
Job Retention = 6 months    # six months
AutoPrune = yes             # Prune expired Jobs/Files
}

Pool {
Name = Wednesday
Pool Type = Backup
Recycle = yes
AutoPrune = yes
Volume Retention = 6 days
}

Regards DB

-- 
Damian Brasher 
Systems Admin/Prog
OMII-UK ECS
Southampton University




Re: [Bacula-users] Tape status mismatches cause job fails

2008-02-29 Thread Damian Brasher
Arno Lehmann wrote:

Hi,

28.02.2008 12:49, Damian Brasher wrote:

> > Hi List
> > 
> > I have had some fairly regular job failures that I have pinpointed to be 
> > caused by tape status type error mismatches. The most common is that the 
> > tape status does not match that of the catalogue in a number of ways.
> > 

Thanks for your response which has helped clarify things,

> If you see this regularly, and with a file count of 0, chances are that
> either your tapes are damaged, your tape drive's firmware or the driver
> is broken, or you've got a serious bug in your Bacula environment.
>
> You need to fix the underlying problem. The first step to do so is to
> run the btape tests and tweak the configuration until they all run
> without failure.

I have seen occasional errors - I have decided to replace my SCSI card 
and cable. All the evidence I have collected suggests that there is a 
SCSI hardware fault.

Regards

DB

-- 
Damian Brasher
Systems Admin/Prog
OMII-UK ECS
Southampton University




Re: [Bacula-users] Tape status mismatches cause job fails

2008-02-29 Thread Damian Brasher
Arno Lehmann wrote:

Hi,

28.02.2008 12:49, Damian Brasher wrote:


>> You need to fix the underlying problem. The first step to do so is to
>> run the btape tests and tweak the configuration until they all run
>> without failure.

> I have seen occasional errors - I have decided to replace my SCSI card
> and cable. All the evidence I have collected suggests that there is a
> SCSI hardware fault.

Also - I inherited this system but rebuilt Bacula on top of the existing 
hardware - managed to get hold of a working drive after a mechanical 
drive failure.

Basically, Bacula is holding the whole rig together. :)

Regards

DB

-- 
Damian Brasher
Systems Admin/Prog
OMII-UK ECS
Southampton University





[Bacula-users] Copy Job to Tape problem

2009-12-16 Thread Damian Lubosch
Hello!

Currently my backup system backs up to disk in the evenings, and in the
mornings a scheduled copy job copies the jobs to tape (LTO-2).
This has worked fine so far, but now I have a new machine and have to do a
full backup of ~330 GB. (The other machines are <200 GB.)

Bacula can copy the files of the big full backup over multiple tapes
successfully, but afterwards it tries to copy the same job to the
tape drive again (and again...).
Could this be because the maximum tape capacity is 200 GB uncompressed
(in reality about 250 GB with compression) and the copy job needs to
write >300 GB?


On the list jobs list it looks like:
|   475 | imac-platte1 | 2009-12-14 13:05:32 | B | F | 79,850 | 335,126,587,104 | T |
|   477 | imac-platte1 | 2009-12-14 13:05:32 | C | F | 79,850 | 335,139,703,242 | T |
|   489 | imac-platte1 | 2009-12-14 13:05:32 | C | F | 79,850 | 335,139,703,242 | T |


In my mail-logs:

16-Dec 12:30 zoii-dir JobId 476: Bacula zoii-dir 3.0.3 (18Oct09): 16-Dec-2009 
12:30:29
  Build OS:   x86_64--netbsd netbsd 5.0.1
  Prev Backup JobId:  475
  Prev Backup Job:imac-platte1.2009-12-14_13.05.29_13
  New Backup JobId:   477
  Current JobId:  476
  Current Job:CopyDiskToTapeJob2.2009-12-14_16.32.02_17
  Backup Level:   Full
  Client: zoii-fd
  FileSet:"rootdir_bsd" 2009-11-22 18:29:14
  Read Pool:  "HDDPool2" (From Job resource)
  Read Storage:   "HDD2" (From Pool resource)
  Write Pool: "TapePool" (From Job Pool's NextPool resource)
  Write Storage:  "BackupStorageTape" (From Storage from Pool's 
NextPool resource)
  Catalog:"MyCatalog" (From Client resource)
  Start time: 14-Dec-2009 16:32:05
  End time:   16-Dec-2009 12:30:29
  Elapsed time:   1 day 19 hours 58 mins 24 secs
  Priority:   13
  SD Files Written:   79,850
  SD Bytes Written:   335,139,703,242 (335.1 GB)
  Rate:   2117.1 KB/s
  Volume name(s): Volume03|Volume04|Volume05
  Volume Session Id:  115
  Volume Session Time:1259835631
  Last Volume Bytes:  58,262,206,464 (58.26 GB)
  SD Errors:  0
  SD termination status:  OK
  Termination:Copying OK


16-Dec 19:38 zoii-dir JobId 488: Bacula zoii-dir 3.0.3 (18Oct09): 16-Dec-2009 
19:38:30

  Build OS:   x86_64--netbsd netbsd 5.0.1
  Prev Backup JobId:  475
  Prev Backup Job:imac-platte1.2009-12-14_13.05.29_13
  New Backup JobId:   489
  Current JobId:  488
  Current Job:CopyDiskToTapeJob2.2009-12-15_04.00.00_31
  Backup Level:   Full
  Client: zoii-fd
  FileSet:"rootdir_bsd" 2009-11-22 18:29:14
  Read Pool:  "HDDPool2" (From Job resource)
  Read Storage:   "HDD2" (From Pool resource)
  Write Pool: "TapePool" (From Job Pool's NextPool resource)
  Write Storage:  "BackupStorageTape" (From Storage from Pool's 
NextPool resource)
  Catalog:"MyCatalog" (From Client resource)
  Start time: 16-Dec-2009 14:28:19
  End time:   16-Dec-2009 19:38:30
  Elapsed time:   5 hours 10 mins 11 secs
  Priority:   13
  SD Files Written:   79,850
  SD Bytes Written:   335,139,703,242 (335.1 GB)
  Rate:   18007.6 KB/s
  Volume name(s): Volume05|Volume06
  Volume Session Id:  128
  Volume Session Time:1259835631
  Last Volume Bytes:  132,350,819,328 (132.3 GB)
  SD Errors:  0
  SD termination status:  OK
  Termination:Copying OK



And now it seems to be still running (it wants another tape, so I
cancelled it):

Running Jobs:
Console connected at 16-Dec-09 22:12
 JobId Level   Name   Status
==
   500 FullCopyDiskToTapeJob2.2009-12-16_04.00.00_44 is waiting for
a mount request
   501 Fullimac-platte1.2009-12-16_04.00.00_45 is running



I am using Bacula 3.0.3 on NetBSD 5 and a single LTO2 drive.

I like very much the copy feature because that way I have real fast
access to the current backups as they are on the disks, and I have the
security to be able to take them home on the tapes. Thus, I'd love to
continue using it ;-)

Can somebody check please if this is a bug in Bacula?

If you need further information, don't hesitate to ask. :-)

Best regards,
Damian



Re: [Bacula-users] [Bacula-devel] Copy Job to Tape problem

2009-12-18 Thread Damian Lubosch
On Fri, December 18, 2009 15:50, John Drescher wrote:

> I did not answer when you originally posted because I am confused at
> what the problem is.

The problem is that when the copy job doesn't fit on a single tape but
still finishes successfully (spanning multiple tapes, see my output logs),
it seems not to be marked as successfully copied. Thus, Bacula wants to
copy the job again and again until I cancel it.

Thank you
Damian


--
This SF.Net email is sponsored by the Verizon Developer Community
Take advantage of Verizon's best-in-class app development support
A streamlined, 14 day to market process makes app distribution fast and easy
Join now and get one step closer to millions of Verizon customers
http://p.sf.net/sfu/verizon-dev2dev 
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Connecting to a non-standard PGSQL port

2010-03-30 Thread Damian Lubosch
Hello!

I need to run a secondary PostgreSQL database instance for Bacula on a 
different port, e.g. 5433, because the primary database needs to run 
separately. How do I tell Bacula to use the database connection on the 
different port?

I read about the libdbi approach, but it does not compile properly on my 
machine. Before I try further, I'd like to know whether there is a 
"native" way to change the DB port.

I am using Bacula 5.0.1 with PostgreSQL 8.4 on NetBSD 5.0.1.
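If it helps: as far as I know, the Catalog resource in Bacula 5.x accepts
DB Address and DB Port directives, so something like this in bacula-dir.conf
might avoid libdbi entirely (untested sketch; names and password are
placeholders):

Catalog {
  Name = MyCatalog
  dbname = bacula
  user = bacula
  password = ""
  DB Address = localhost
  DB Port = 5433        # secondary PostgreSQL instance
}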

Thank you!
Damian



--
Download Intel® Parallel Studio Eval
Try the new software tools for yourself. Speed compiling, find bugs
proactively, and fine-tune applications for parallel performance.
See why Intel Parallel Studio got high marks during beta.
http://p.sf.net/sfu/intel-sw-dev
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Problem connecting to fd on client server

2010-10-13 Thread Damian Gębicki
Albin Vega wrote:
> Hello!
> 
> I am trying to set up a backup-job on a win 2008 server over internet. I 
> have done this successfully on two other servers, but I am having 
> trouble with this one.  Here’s the bacula-dir.config file on the 
> clientserver (that is to be backed up). The fd service is running on 
> both backup-client/server.
> 
This looks like a client/server firewall or DNS problem.

You didn't show the Client information from the server (bacula-fd.conf 
and DNS/hosts entries).
Did you try "telnet client_IP 9102" from the server?
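If telnet isn't available on the server, a tiny Python check does the same
thing (hypothetical helper; host and port are whatever your client uses):

```python
import socket

def fd_reachable(host, port=9102, timeout=5):
    """Return True if a TCP connection to the Bacula FD port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or DNS failure
        return False
```

A False result for the client's address points at a firewall or DNS issue
rather than a Bacula configuration problem.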


-- 
=
Damian



Re: [Bacula-users] Bacula mysql cleanup

2010-10-18 Thread Damian Gębicki
Holikar, Sachin (ext) wrote:
> Hello,
>  
> We have a Bacula  Version: 2.2.7 installed on SuSE 9 Linux. The database 
> is mysql  Ver 14.7.
>  
> Bacula has been running since couple of years now. We noticed that the 
> mysql partition where "Write Bootstrap" files (*.bsr) are stored is 
> increased alot.
> Particularly "bacula.sql" file is grown to 6 GB now. Which is the 
> *backup of the catalog file.*
> ** 
> Now,
>  
> 1>Is there anyway we can move this (and other .bsr) files to 
> someother location having large space without affecting Bacula 
> functionality?
> 2>Can we somehow reduce the size of this file? Compression ?
> 
> Please let me know if you require any more information.
>  

Did you look into the .bsr file?
It looks like the scheduler only does an Incremental backup of the 
Catalog database. Just try renaming this file; new records will be 
written to a new one after the next CatalogDB backup.

-- 
=
Damian

--
Download new Adobe(R) Flash(R) Builder(TM) 4
The new Adobe(R) Flex(R) 4 and Flash(R) Builder(TM) 4 (formerly 
Flex(R) Builder(TM)) enable the development of rich applications that run
across multiple browsers and platforms. Download your free trials today!
http://p.sf.net/sfu/adobe-dev2dev
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula mysql cleanup

2010-10-18 Thread Damian Gębicki
Holikar, Sachin (ext) wrote:
 > Hello,
 >
 > I agree if I rename just the file name in bacula-dir.conf , it will 
start to write in a new file.
 > But the point is the file which has grown in size is "bacula.sql".
 > So this is the actual file in question. Can we do something about 
this file?
 >

I understand - I thought you were asking about the .bsr file.

 >> 1>Is there anyway we can move this (and other .bsr) files to
 >> someother location having large space without affecting Bacula
 >> functionality?
 >> 2>Can we somehow reduce the size of this file? Compression ?


The bacula.sql file, if I remember correctly, is your main Catalog DB (SQLite).

The best method is migration to PostgreSQL or MySQL.
My database size dropped from 400 MB to 150 MB when I did that.
If you don't want to do that, just try reducing your job retention.

There is another way: an export/import procedure.

# sqlite bacula.sql .dump > dump.sql  # dump
# sqlite new_bacula.sql < dump.sql    # restore

Sometimes this reduces the DB size, but there is no guarantee.






-- 
=
Damian



Re: [Bacula-users] Bacula mysql cleanup

2010-10-18 Thread Damian Gębicki
Alan Brown wrote:
> Damian Gębicki wrote:
>>
>> The bacula.sql, if I good remember, it's your main CatalgoDB - sqlite.
> 
> The original poster is using mysql, not sqlite.
> 
> bacula.sql is his database dump - it's a plain ascii text file.
> 
> It can be compressed with gzip, bzip2, etc etc and you should keep 
> multiple copies around (use logrotate or a similar program to maintain it)
> 
> 6Gb is a small database dump, as far as Bacula use goes. Mine is in 
> excess of 40Gb.
> 
> 
> sqlite is only included for testing purposes and should not be used in a 
> production environment.
> 
> 
Right, I forgot.
A long time ago I started using the dbpipe plugin, and I back up my 
catalog DB to tape directly, so I don't need disk space for the dump 
(bacula.sql).
Maybe this is a solution for you?
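For the compression route mentioned earlier in the thread, a minimal
logrotate sketch could look like this (untested; the path is a placeholder
for wherever your catalog dump lands):

/var/lib/bacula/bacula.sql {
    weekly
    rotate 4
    compress
    missingok
}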

-- 
=
Damian



Re: [Bacula-users] Bacula mysql cleanup

2010-10-18 Thread Damian Gębicki
Phil Stracchino wrote:
> On 10/18/10 07:21, Holikar, Sachin (ext) wrote:
> 
> Well, without seeing the script, we're somewhat guessing. Personally, I
> keep my catalog dumps around because if something crashes my database,
> it's faster to just reload the last database than to do a bscan.
> However, I'm currently working on moving to a snapshot-based backup
> instead, in which I won't have a dump file at all, and will in fact
> simply take filesystem snapshots for my incremental database backups and
> keep the most recent snapshot around until the next night's backup has
> been completed.
> 
> 
Yes, but the only 100% reliable way to recover from a disaster (losing 
the backup server) is a Catalog copy on dedicated media - tape.


-- 
=
Damian



Re: [Bacula-users] Problem to compile Bacula Admin Tools

2010-10-20 Thread Damian Gębicki
Stéphane Cesbron wrote:
>   Thanks for your reply.
> 
> I know that I don't need BAT on the server that runs bacula. 
> Nevertheless it will be easier as I am really new to bacula.
> Tonight I retried to install qt4 which was already installed.
> I think that I found what causes the trouble.
> It has to come with the settings of environment variables.
> In fact, I've got two different installations of qt on my box
> - qt-3.3 installed in /usr/lib64/qt-3.3
> - qt4 installed in /usr/lib64/qt4
> 

I wonder why you don't use the depkgs from bacula.org 
(depkgs-qt-28Jul09.tar.gz)? They work fine.


-- 
=
Damian
