[Bacula-users] volume recycling

2011-08-31 Thread Alexandre Chapellon

  
  
Hello,
  
  I am using bacula 5.0.2 as delivered by Debian packaging system.
  This morning I had jobs hang due to "no appendable volume" in the
  pool.
  The job and file retention for the jobs are both 1 month. The
  volume retention for all volumes in the pool is 23 days.
  All volumes in the pool are marked as Used. However, there is one
  volume that was last written more than 2 months ago (2011-06-19
  15:30:31).
  
  I have queried the catalog and found that no job records are
  associated with that volume anymore (bconsole / query / 14 /
  volname).
  
I have tried pruning the volume by hand using
  the `prune` command; it shows the correct volume retention
  but prunes nothing.
Here is the volume and pool description as
  shown by `list volume` and `show pool`:
  
  |   8 | hosting_incremental_0002 | Used  |   1 | 426,895,388 |    0 |    1,987,200 |   1 |    0 |    0 | File  | 2011-06-19 15:30:31 |


Pool: name=hosting_Incremental PoolType=Backup
  use_cat=1 use_once=0 cat_files=1
  max_vols=5 auto_prune=1 VolRetention=23 days
  VolUse=7 days recycle=1
  LabelFormat=${Pool:l}_${NumVols:p/4/0/r}
  CleaningPrefix=*None* LabelType=0
  RecyleOldest=1 PurgeOldest=0 ActionOnPurge=0
  MaxVolJobs=0 MaxVolFiles=0 MaxVolBytes=0
  MigTime=0 secs MigHiBytes=0 MigLoBytes=0
  JobRetention=680 years 11 months 21 days 8 hours 21 mins 20 secs
  FileRetention=5 years 1 month 11 days 17 hours 26 mins 8 secs
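For reference, Bacula stores retention periods internally as seconds, which makes the raw numbers above easy to check (in Python, purely for illustration) - the 1,987,200 in the `list volume` row is exactly the 23-day VolRetention:

```python
# Bacula stores retention periods internally as seconds.
DAY = 86400  # seconds per day

vol_retention = 23 * DAY      # Pool VolRetention = 23 days
client_retention = 30 * DAY   # Client File/Job Retention = 30 days

print(vol_retention)    # 1987200 -- the value shown in the `list volume` row
print(client_retention) # 2592000
```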
  
  Note the huge JobRetention in the pool. The job retention is the
  same in the job description shown by `show job`. I have
  double-checked my config files and found the following:
  Client {
    Name = jimbojones-fd
    Address = fqdn-addr
    Catalog = MyCatalog
    Password ="secret"
    FileRetention = 30 days
    JobRetention = 30 days
  }
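  
  For completeness, the manual pruning attempt looks roughly like this
  in bconsole (a sketch; note that `purge`, unlike `prune`, ignores
  retention entirely, so it should be used with care):
  
```
*list volume pool=hosting_Incremental
*prune volume=hosting_incremental_0002 yes
*purge volume=hosting_incremental_0002
```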
  
  Does anybody know why I can't recycle that volume, and why I have
  this strange JobRetention value?
  
  Regards.
  
  
  

-- 
Alexandre Chapellon
Open-source systems and network engineering.
Follow me on twitter: @alxgomz
--
Special Offer -- Download ArcSight Logger for FREE!
Finally, a world-class log management solution at an even better
price - free! And you'll get a free "Love Thy Logs" t-shirt when you
download Logger. Secure your free ArcSight Logger TODAY!
http://p.sf.net/sfu/arcsisghtdev2dev
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Tape drive/changer : Please help - going insane ...........

2011-08-31 Thread Dan Langille
On Aug 31, 2011, at 12:23 PM, morgan_cox wrote:

> Thank you for your feedback everybody
> 
> I have changed the config and retried - it failed again...
> 
> However it seemed to go a bit further - it failed at a slightly different
> point (this time I believe it changed the tape!)

I have no idea as to the cause of the error.  However, mtx-changer can require
customization.  There are many things to consider,
as there are many moving parts here.  Permissions can be tricky to get right.

These URLs document what I did.  Don't rush through them.  Take your time.  
Document what you are trying.

This URL shows how I first used mtx-changer and the tests I did: 
http://www.freebsddiary.org/tape-library.php

This URL shows a lot of testing of mtx-changer from the command line: 
http://www.freebsddiary.org/tape-labeling.php

Take it slow.  Understand what you are testing and why.  Ask questions if you 
do not fully understand.
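
For example, with the arguments in the order Bacula passes them via the Changer Command (%c %o %S %a %d: changer device, command, slot, archive device, drive index), mtx-changer can be exercised by hand roughly like this (a sketch; the device paths are taken from the config quoted below and must match your setup):

```
# list slots, and ask which slot (if any) is loaded in drive 0
/usr/libexec/bacula/mtx-changer /dev/sg1 slots
/usr/libexec/bacula/mtx-changer /dev/sg1 loaded 0 /dev/nst0 0

# load slot 1 into drive 0, then unload it again
/usr/libexec/bacula/mtx-changer /dev/sg1 load 1 /dev/nst0 0
/usr/libexec/bacula/mtx-changer /dev/sg1 unload 1 /dev/nst0 0
```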

> 
> Here is the error:-
> 
> 
> btape: btape.c:2736 End of tape 814:0. Volume Bytes=812,948,032,512. Write 
> rate = 54.28 MB/s
> 31-Aug 16:51 btape JobId 0: End of medium on Volume "TestVolume1" 
> Bytes=812,948,032,512 Blocks=12,601,501 at 31-Aug-2011 16:51.
> 31-Aug 16:51 btape JobId 0: 3307 Issuing autochanger "unload slot 1, drive 0" 
> command.
> 31-Aug 16:52 btape JobId 0: 3301 Issuing autochanger "loaded? drive 0" 
> command.
> 31-Aug 16:52 btape JobId 0: 3302 Autochanger "loaded? drive 0", result: 
> nothing loaded.
> 31-Aug 16:52 btape JobId 0: 3304 Issuing autochanger "load slot 2, drive 0" 
> command.
> 
> 31-Aug 16:53 btape JobId 0: 3305 Autochanger "load slot 2, drive 0", status 
> is OK.
> Wrote Volume label for volume "TestVolume2".
> 31-Aug 16:53 btape JobId 0: Wrote label to prelabeled Volume "TestVolume2" on 
> device "LTO-4" (/dev/nst0)
> 31-Aug 16:53 btape JobId 0: New volume "TestVolume2" mounted on device 
> "LTO-4" (/dev/nst0) at 31-Aug-2011 16:53.
> btape: btape.c:2311 Wrote 1000 blocks on second tape. Done.
> Done writing 0 records ...
> Wrote End of Session label.
> btape: btape.c:2380 Wrote state file last_block_num1=15500 
> last_block_num2=1001
> btape: btape.c:2398
> 
> 16:53:45 Done filling tapes at 0:1003. Now beginning re-read of first tape ...
> btape: btape.c:2476 Enter do_unfill
> 31-Aug 16:53 btape JobId 0: 3307 Issuing autochanger "unload slot 2, drive 0" 
> command.
> 31-Aug 16:54 btape JobId 0: 3304 Issuing autochanger "load slot 1, drive 0" 
> command.
> 31-Aug 16:55 btape JobId 0: 3305 Autochanger "load slot 1, drive 0", status 
> is OK.
> 31-Aug 16:55 btape JobId 0: Ready to read from volume "TestVolume1" on device 
> "LTO-4" (/dev/nst0).
> Rewinding.
> Reading the first 1 records from 0:0.
> 1 records read now at 1:5084
> Reposition from 1:5084 to 813:15500
> 
> Reading block 15500.
> Error reading block: ERR=block.c:1025 Read zero bytes at 813:0 on device 
> "LTO-4" (/dev/nst0).
> 
> btape: btape.c:2403 do_unfill failed.
> 
> 
> 
> This is my config now:-
> 
> --
> Autochanger {
> Name = "Autochanger"
> Device = LTO-4
> Changer Device = /dev/sg1
> Changer Command = "/usr/libexec/bacula/mtx-changer %c %o %S %a %d"
> }
> 
> Device {
> Name = LTO-4
> Drive Index = 0
> Media Type = LTO-4
> Archive Device = /dev/nst0
> AutomaticMount = yes; # when device opened, read it
> AlwaysOpen = yes;
> AutoChanger = yes
> Alert Command = "sh -c 'tapeinfo -f %c |grep TapeAlert|cat'"
> }
> --
> 
> Any help would be really welcomed.
> 
> Also, is there a quicker test for the changer, as the fill test takes about 
> 5-6 hrs to complete?
> 
> Regards
> 
> +--
> |This was sent by m...@cwcs.co.uk via Backup Central.
> |Forward SPAM to ab...@backupcentral.com.
> +--
> 
> 
> 

-- 
Dan Langille - http://langille.org



Re: [Bacula-users] retentions

2011-08-31 Thread Rickifer Barros
Hey Hymie...

Job Retention only controls cleaning old jobs out of the catalog. What makes a
volume recyclable is the Volume Retention, counted from when the volume turns
Full or Used. I always set my File and Job Retention longer than my Volume
Retention, because when the volume retention expires and the volume turns
Purged, its job entries in the catalog are deleted too.
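
In config terms that means keeping the Pool's Volume Retention at or below the
catalog retentions, roughly like this (a sketch; the names are made up):

```
Pool {
  Name = ExamplePool            # hypothetical name
  Pool Type = Backup
  Volume Retention = 23 days    # volume can be pruned/recycled after this
  AutoPrune = yes
  Recycle = yes
}

Client {
  Name = example-fd             # hypothetical name
  File Retention = 30 days      # catalog records outlive the volumes
  Job Retention = 30 days
}
```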

Rickifer Barros

On Wed, Aug 31, 2011 at 3:00 PM, hymie!  wrote:

>
> Konstantin Khomoutov writes:
> >hymie!  wrote:
> >
> >> A volume will not be recycled until the Volume Retention has expired,
> >> even if all of the backups stored in that volume have expired.  If my
> >> Job Retention is 1 month, and my Volume Retention is 3 months, then my
> >> volumes are not available after 1 month even though the jobs in those
> >> volumes are expired.
> >Possibly there's an error in the last sentence.
>
> Yes, I screwed up the last sentence.  Let me try again please.
>
> If my Job Retention is 1 month, and my Volume Retention is 3 months,
> then at the end of 1 month, my volumes are still marked as "Full", and
> will not be pruned, recycled or written to, even though the jobs in
> those volumes are expired.
>
> >Volumes will continue to live on, just you won't be able to
> >*directly* restore anything from then (that is, simply using bconsole
> >commands) because no catalog entries would exist describing jobs.
> >But that does not prevent you from doing emergency recovery using those
> >volumes
>
> Gotcha.
>
> Actually, what I'm looking for is the opposite.  I'd like Bacula to
> recognize that the jobs on the volume have expired, and the volumes
> can be pruned and recycled.  But it appears that it does not matter when
> the Job Retention is, only the Volume Retention decides when my volumes
> are pruned.
>
> ...or maybe it's "Whichever one is longer" ?
>
> Maybe I need to set up separate pools, based on retention, so that jobs
> with a 1 month retention are stored on volumes with 1 month retention,
> and jobs with a 3 month retention are stored on volumes with 3 month
> retention?
>
> --hymie!http://lactose.homelinux.net/~hymie
> hy...@lactose.homelinux.net
>
> ---
>
>


Re: [Bacula-users] retentions

2011-08-31 Thread hymie!

Konstantin Khomoutov writes:
>hymie!  wrote:
>
>> A volume will not be recycled until the Volume Retention has expired,
>> even if all of the backups stored in that volume have expired.  If my
>> Job Retention is 1 month, and my Volume Retention is 3 months, then my
>> volumes are not available after 1 month even though the jobs in those
>> volumes are expired.
>Possibly there's an error in the last sentence.

Yes, I screwed up the last sentence.  Let me try again please.

If my Job Retention is 1 month, and my Volume Retention is 3 months,
then at the end of 1 month, my volumes are still marked as "Full", and
will not be pruned, recycled or written to, even though the jobs in
those volumes are expired.

>Volumes will continue to live on, just you won't be able to
>*directly* restore anything from then (that is, simply using bconsole
>commands) because no catalog entries would exist describing jobs.
>But that does not prevent you from doing emergency recovery using those
>volumes

Gotcha.

Actually, what I'm looking for is the opposite.  I'd like Bacula to
recognize that the jobs on the volume have expired, and the volumes
can be pruned and recycled.  But it appears that it does not matter when
the Job Retention is, only the Volume Retention decides when my volumes
are pruned.

...or maybe it's "Whichever one is longer" ?

Maybe I need to set up separate pools, based on retention, so that jobs
with a 1 month retention are stored on volumes with 1 month retention,
and jobs with a 3 month retention are stored on volumes with 3 month
retention?
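
That separate-pools idea would look something like this (a config sketch; the
pool names are made up), giving each job a pool whose Volume Retention matches
its Job Retention:

```
Pool {
  Name = OneMonth               # hypothetical name
  Pool Type = Backup
  Volume Retention = 1 month
  AutoPrune = yes
  Recycle = yes
}

Pool {
  Name = ThreeMonth             # hypothetical name
  Pool Type = Backup
  Volume Retention = 3 months
  AutoPrune = yes
  Recycle = yes
}
```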

--hymie!    http://lactose.homelinux.net/~hymie
hy...@lactose.homelinux.net
---



[Bacula-users] Tape drive/changer : Please help - going insane ...........

2011-08-31 Thread morgan_cox
Thank you for your feedback everybody

I have changed the config and retried - it failed again...

However it seemed to go a bit further - it failed at a slightly different
point (this time I believe it changed the tape!)

Here is the error:-


btape: btape.c:2736 End of tape 814:0. Volume Bytes=812,948,032,512. Write rate 
= 54.28 MB/s
31-Aug 16:51 btape JobId 0: End of medium on Volume "TestVolume1" 
Bytes=812,948,032,512 Blocks=12,601,501 at 31-Aug-2011 16:51.
31-Aug 16:51 btape JobId 0: 3307 Issuing autochanger "unload slot 1, drive 0" 
command.
31-Aug 16:52 btape JobId 0: 3301 Issuing autochanger "loaded? drive 0" command.
31-Aug 16:52 btape JobId 0: 3302 Autochanger "loaded? drive 0", result: nothing 
loaded.
31-Aug 16:52 btape JobId 0: 3304 Issuing autochanger "load slot 2, drive 0" 
command.

31-Aug 16:53 btape JobId 0: 3305 Autochanger "load slot 2, drive 0", status is 
OK.
Wrote Volume label for volume "TestVolume2".
31-Aug 16:53 btape JobId 0: Wrote label to prelabeled Volume "TestVolume2" on 
device "LTO-4" (/dev/nst0)
31-Aug 16:53 btape JobId 0: New volume "TestVolume2" mounted on device "LTO-4" 
(/dev/nst0) at 31-Aug-2011 16:53.
btape: btape.c:2311 Wrote 1000 blocks on second tape. Done.
Done writing 0 records ...
Wrote End of Session label.
btape: btape.c:2380 Wrote state file last_block_num1=15500 last_block_num2=1001
btape: btape.c:2398

16:53:45 Done filling tapes at 0:1003. Now beginning re-read of first tape ...
btape: btape.c:2476 Enter do_unfill
31-Aug 16:53 btape JobId 0: 3307 Issuing autochanger "unload slot 2, drive 0" 
command.
31-Aug 16:54 btape JobId 0: 3304 Issuing autochanger "load slot 1, drive 0" 
command.
31-Aug 16:55 btape JobId 0: 3305 Autochanger "load slot 1, drive 0", status is 
OK.
31-Aug 16:55 btape JobId 0: Ready to read from volume "TestVolume1" on device 
"LTO-4" (/dev/nst0).
Rewinding.
Reading the first 1 records from 0:0.
1 records read now at 1:5084
Reposition from 1:5084 to 813:15500

Reading block 15500.
Error reading block: ERR=block.c:1025 Read zero bytes at 813:0 on device 
"LTO-4" (/dev/nst0).

btape: btape.c:2403 do_unfill failed.



This is my config now:-

--
Autochanger {
Name = "Autochanger"
Device = LTO-4
Changer Device = /dev/sg1
Changer Command = "/usr/libexec/bacula/mtx-changer %c %o %S %a %d"
}

Device {
Name = LTO-4
Drive Index = 0
Media Type = LTO-4
Archive Device = /dev/nst0
AutomaticMount = yes; # when device opened, read it
AlwaysOpen = yes;
AutoChanger = yes
Alert Command = "sh -c 'tapeinfo -f %c |grep TapeAlert|cat'"
}
--

Any help would be really welcomed.

Also, is there a quicker test for the changer, as the fill test takes about 
5-6 hrs to complete?

Regards

+--
|This was sent by m...@cwcs.co.uk via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--






Re: [Bacula-users] Full-Job gets killed by time limit

2011-08-31 Thread Graham Keeling
On Wed, Aug 31, 2011 at 04:44:59PM +0200, Uwe Bolick wrote:
> Thank you for your answer,
> 
> On Wed, Aug 31, 2011 at 04:20:20PM +0200, Andre Lorenz wrote:
> > ...
> > I solved this problem by splitting up the data that has to be
> > backed up.
> > The amount of data going to tape is smaller, the backup runs
> > faster, and restore is much easier ;-)
> > 
> > andre
> > ...
> This has already been done. This was my third attempt to get it done. After
> the first one I split it up as sensibly as possible, but one directory
> remains too large and I cannot break it into smaller pieces without a
> terrible config.
> 
> I do have a plan B - reading the tapes with bscan and getting the missing
> files from an incremental dump - but IMHO "professional" software like
> bacula should be able to handle such cases.
> 
> Kind regards,
>   Uwe

Hello,

How about copying the files from the remote location onto a local disk, using
something like rsync? You could then use bacula (or tar) to get the files onto
the tape.

I don't think rsync will just suddenly time out after six days. And if it does,
you can simply start it again and it will pick up where it was interrupted.
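
Something along these lines (a sketch; the host and paths are made up):

```
# Mirror the remote tree into a local staging area. Safe to re-run after an
# interruption: rsync resumes, skipping files that were already transferred.
rsync -a --partial remotehost:/data/ /srv/staging/data/
```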




Re: [Bacula-users] Full-Job gets killed by time limit

2011-08-31 Thread Uwe Bolick
Thank you for your answer,

On Wed, Aug 31, 2011 at 04:20:20PM +0200, Andre Lorenz wrote:
> ...
> I solved this problem by splitting up the data that has to be
> backed up.
> The amount of data going to tape is smaller, the backup runs
> faster, and restore is much easier ;-)
> 
> andre
> ...
This has already been done. This was my third attempt to get it done. After
the first one I split it up as sensibly as possible, but one directory
remains too large and I cannot break it into smaller pieces without a
terrible config.

I do have a plan B - reading the tapes with bscan and getting the missing
files from an incremental dump - but IMHO "professional" software like
bacula should be able to handle such cases.

Kind regards,
  Uwe
-- 
 Uwe Bolick
 Zentrum für Astronomie und Astrophysik
 Technische Universität Berlin
 EW 8-1, Hardenbergstr. 36, D-10623 Berlin (Germany)



Re: [Bacula-users] Full-Job gets killed by time limit

2011-08-31 Thread Uwe Bolick

Thank you for your reply.

On Wed, Aug 31, 2011 at 03:40:58PM +0200, Marcello Romani wrote:
> ...
> > Bacula has a hardcoded time limit on jobs of 6 days. Kern called it an
> > "insanity check" as any job that runs that long isn't all that useful...
> >
> > See
> > http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg20159.html
> > for a discussion on the mailing list from the past, and a pointer on
> > where to change the time limit in the code if you wish.
> >
> 
> I suggest emitting a warning on service startup if the user sets a max 
> run time greater than this hardcoded limit.
> ...

I know this discussion, but I remember e-mails in later discussions
saying that this would become configurable. (But perhaps my memory
misleads me.) And the only settings I can find in the docs that look
appropriate for this task are the MaxRunTime directives.

And no, there are no warnings or errors.
This is the output of a test with debugging on, which I use to check my
configuration:

> bacula-dir: dird.c:719-0 Job "data.ground.astep", field "fullmaxruntime": 
> getting default.
> bacula-dir: dird.c:781-0 Job "data.ground.astep", field "fullmaxruntime" 
> def_lvalue=1209600 item 31 offset=224

Looks fine to me...
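
And the def_lvalue in that debug output is consistent: 1209600 seconds is
exactly the configured `Full Max Run Time = 14d` (a quick check, in Python
purely for illustration):

```python
DAY = 86400  # seconds per day

full_max_run_time = 14 * DAY
print(full_max_run_time)  # 1209600, matching def_lvalue in the debug output
```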

-- 
 Uwe Bolick
 Zentrum für Astronomie und Astrophysik
 Technische Universität Berlin
 EW 8-1, Hardenbergstr. 36, D-10623 Berlin (Germany)



[Bacula-users] Archive Jobs

2011-08-31 Thread Peter Allgeyer
Hi!

I want to ask whether there is any development toward supporting archive jobs,
as suggested in the Bacula Project Design Blog:
http://sourceforge.net/apps/wordpress/bacula/2009/09/26/archive/

Regards
-- 
Peter Allgeyer  Salzburg|Research Forschungsgesellschaft mbH
Dipl.-Inform. Univ. Jakob-Haringer-Strasse 5/III
phone +43.662.2288-264   5020 Salzburg | Austria
fax   +43.662.2288-222http://www.salzburgresearch.at




Re: [Bacula-users] Full-Job gets killed by time limit

2011-08-31 Thread Andre Lorenz
On 31.08.2011 14:53, Uwe Bolick wrote:
> Hi,
>
> I have to backup a remote site with several TB of data over a "slow"
> connection (varying between 10-15 GB per hour). Under these
> circumstances, one of my initial "Full" jobs gets killed after 6 days:
>
>> 30-Aug 21:48 gothmog-dir JobId 34535: Fatal error: Network error with FD 
>> during Backup: ERR=Interrupted system call
>> 30-Aug 21:48 tapelib-sd JobId 34535: JobId=34535 
>> Job="data.ground.astep.2011-08-24_21.47.40_26" marked to be canceled.
>> 30-Aug 21:48 tapelib-sd JobId 34535: Job write elapsed time = 143:59:34, 
>> Transfer rate = 3.536 M Bytes/second
>> 30-Aug 21:48 gothmog-dir JobId 34535: Fatal error: No Job status returned 
>> from FD.
>> 30-Aug 21:48 gothmog-dir JobId 34535: Error: Watchdog sending kill after 
>> 518405 secs to thread stalled reading File daemon.
>> 30-Aug 21:48 gothmog-dir JobId 34535: Error: Bacula gothmog-dir 5.0.2 
>> (28Apr10): 30-Aug-2011 21:48:35
> The configuration for the failed job was:
>
> JobDefs {
>   Name = "TapeJobX"
>   Type = Backup
>   Level = Incremental
>   Schedule = "MonthlyTapeCycle"
>   Storage = Tapelib
>   Messages = Standard
>   Pool = Tapearchive
>   SpoolData = yes
>   Priority = 15
>   Allow Mixed Priority = yes
>   Max Start Delay = 20h
>   Full Max Run Time = 14d
> }
>
> Job {
>   Name = "data.ground.astep"
>   Client = "Warg01"
>   FileSet = "data.ground.astep"
>   JobDefs = "TapeJobX"
>   Accurate = yes
>   Write Bootstrap = "/var/lib/bacula/data.ground.astep.bsr"
> }
>
> The director, sd, and fd are running on Debian Squeeze systems.
>
> Any hints?
>
> Regards,
>   Uwe Bolick
hello,

I solved this problem by splitting up the data that has to be
backed up.
The amount of data going to tape is smaller, the backup runs
faster, and restore is much easier ;-)

andre




Re: [Bacula-users] Full-Job gets killed by time limit

2011-08-31 Thread Marcello Romani
On 31/08/2011 15:33, Jeremy Maes wrote:
> On 31/08/2011 14:53, Uwe Bolick wrote:
>> Hi,
>>
>> I have to backup a remote site with several TB of data over a "slow"
>> connection (varying between 10-15 GB per hour). Under these
>> circumstances, one of my initial "Full" jobs gets killed after 6 days:
> Bacula has a hardcoded time limit on jobs of 6 days. Kern called it an
> "insanity check" as any job that runs that long isn't all that useful...
>
> See
> http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg20159.html
> for a discussion on the mailing list from the past, and a pointer on
> where to change the time limit in the code if you wish.
>
> Regards,
> Jeremy
>
>    DISCLAIMER 
> http://www.schaubroeck.be/maildisclaimer.htm
>

I suggest emitting a warning on service startup if the user sets a max 
run time greater than this hardcoded limit.

-- 
Marcello Romani



Re: [Bacula-users] Full-Job gets killed by time limit

2011-08-31 Thread Jeremy Maes
On 31/08/2011 14:53, Uwe Bolick wrote:
> Hi,
>
> I have to backup a remote site with several TB of data over a "slow"
> connection (varying between 10-15 GB per hour). Under these
> circumstances, one of my initial "Full" jobs gets killed after 6 days:
Bacula has a hardcoded time limit on jobs of 6 days. Kern called it an 
"insanity check" as any job that runs that long isn't all that useful...

See 
http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg20159.html 
for a discussion on the mailing list from the past, and a pointer on 
where to change the time limit in the code if you wish.
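
The watchdog message in the original post fits that limit: six days is 518,400
seconds, and the kill was sent after 518,405 seconds, i.e. a few seconds past
the hardcoded cutoff (a quick check, in Python purely for illustration):

```python
DAY = 86400  # seconds per day

hard_limit = 6 * DAY
print(hard_limit)           # 518400
print(518405 - hard_limit)  # the watchdog fired 5 seconds past the limit
```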

Regards,
Jeremy

  DISCLAIMER 
http://www.schaubroeck.be/maildisclaimer.htm



[Bacula-users] Full-Job gets killed by time limit

2011-08-31 Thread Uwe Bolick

Hi,

I have to backup a remote site with several TB of data over a "slow"
connection (varying between 10-15 GB per hour). Under these
circumstances, one of my initial "Full" jobs gets killed after 6 days:

> 30-Aug 21:48 gothmog-dir JobId 34535: Fatal error: Network error with FD 
> during Backup: ERR=Interrupted system call
> 30-Aug 21:48 tapelib-sd JobId 34535: JobId=34535 
> Job="data.ground.astep.2011-08-24_21.47.40_26" marked to be canceled.
> 30-Aug 21:48 tapelib-sd JobId 34535: Job write elapsed time = 143:59:34, 
> Transfer rate = 3.536 M Bytes/second
> 30-Aug 21:48 gothmog-dir JobId 34535: Fatal error: No Job status returned 
> from FD.
> 30-Aug 21:48 gothmog-dir JobId 34535: Error: Watchdog sending kill after 
> 518405 secs to thread stalled reading File daemon.
> 30-Aug 21:48 gothmog-dir JobId 34535: Error: Bacula gothmog-dir 5.0.2 
> (28Apr10): 30-Aug-2011 21:48:35

The configuration for the failed job was:

JobDefs {
  Name = "TapeJobX"
  Type = Backup
  Level = Incremental
  Schedule = "MonthlyTapeCycle"
  Storage = Tapelib
  Messages = Standard
  Pool = Tapearchive
  SpoolData = yes
  Priority = 15
  Allow Mixed Priority = yes
  Max Start Delay = 20h
  Full Max Run Time = 14d
}

Job {
  Name = "data.ground.astep"
  Client = "Warg01"
  FileSet = "data.ground.astep"
  JobDefs = "TapeJobX"
  Accurate = yes
  Write Bootstrap = "/var/lib/bacula/data.ground.astep.bsr"
}

The director, sd, and fd are running on Debian Squeeze systems.

Any hints?

Regards,
  Uwe Bolick
-- 
 Uwe Bolick
 Zentrum für Astronomie und Astrophysik
 Technische Universität Berlin
 EW 8-1, Hardenbergstr. 36, D-10623 Berlin (Germany)
