Martin Simmons wrote:
>
> I think they are considered in the order that the Job definitions appear in
> the bacula-dir.conf.
>
> To guarantee the order, it is best to use different priorities. I use
> priorities 1, 2, 3, etc. for the backups and 201, 202, 203, etc. for the
> corresponding verifies.
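For reference, that priority scheme might look like the following in bacula-dir.conf (job names are hypothetical; lower Priority values run first, and by default Bacula will not start a job with a higher Priority number while a lower-numbered one is running):

```conf
# Backups get low priority numbers so they complete first...
Job {
  Name = "nightly-backup-www"   # hypothetical name
  Type = Backup
  Level = Incremental
  Priority = 1
  # Client, FileSet, Storage, Pool, Schedule, Messages as usual
}

# ...and the corresponding verify runs only after all backups finish.
Job {
  Name = "verify-www"           # hypothetical name
  Type = Verify
  Level = VolumeToCatalog
  Priority = 201
  # remaining directives as usual
}
```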
Martin Simmons wrote:
>
> I think this can only be caused by a bug, so it is probably a good idea to
> upgrade to Bacula 3.0.1 to see if that fixes it.
>
> Are you using simultaneous jobs? What values does the error give for the
> sizes of the blocks?
>
> __Martin
>
Greetings!
I've been working on adding automated Verify jobs to my configuration. I'd like
to abuse my tape drive as little as possible, so I have the backups set to run
as a group, then the verifies set to run at a later time as a group.
On a normal weeknight (like tonight), I have two Increm
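The backup-then-verify split described above could be sketched with two Schedule resources (names, times, and levels here are assumptions, not the poster's actual configuration), combined with distinct Priority values on the Job resources so the groups cannot interleave:

```conf
# Backups kick off as a group in the evening...
Schedule {
  Name = "WeeknightBackups"     # hypothetical name
  Run = Level=Incremental mon-fri at 23:05
}

# ...and the verifies run as a group early the next morning.
Schedule {
  Name = "WeeknightVerifies"    # hypothetical name
  Run = Level=VolumeToCatalog mon-fri at 03:05
}
```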
Mark,
My understanding is that Bacula will hold onto the Volume info as long as
possible. Also, it won't delete the files from the tape/disk, just the
reference to them in the catalog. If you find that you're failing several
backups in a row and want to make sure the most recent good backup is
Hello,
We've been intermittently having an issue with backups failing due to the error
"Spool block too big". It's happened exactly 10 times since 4/27/09. It
generally happens during large backups (900GB+).
The most recent error happened after the data had been spooled, and was being
written
Thanks for the suggestions, Dirk. I was afraid that it would come down to
scripting; as a temporary student admin, I'm trying to keep things as
straightforward as possible for the next person who has to deal with it.
I was hoping I had missed something and that the Job resource could also set
the
Trying to work around the file/job retention question above, is it possible to
set up multiple Client resources for a single client machine, without running
multiple file daemons on the client?
The way I read the manual, the name given to the client file daemon is what
needs to be used for the
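One commonly described workaround is to define two Director-side Client resources that point at the same file daemon, differing only in Name and retention. A minimal sketch (names, address, and retention values are assumptions; both resources must use the same Address, FDPort, and the password the FD expects):

```conf
# Two catalog identities for one physical client.
Client {
  Name = "fileserver-short"            # hypothetical name
  Address = fileserver.example.com     # hypothetical address
  FDPort = 9102
  Password = "fd-password"             # must match the FD's Director password
  Catalog = MyCatalog
  File Retention = 30 days
  Job Retention = 3 months
}

Client {
  Name = "fileserver-long"             # hypothetical name
  Address = fileserver.example.com     # same machine, same daemon
  FDPort = 9102
  Password = "fd-password"
  Catalog = MyCatalog
  File Retention = 1 year
  Job Retention = 2 years
}
```

Jobs then reference whichever Client resource carries the retention policy they need, without a second file daemon on the machine.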
Greetings!
Is there any way, in Bacula 2.4.4, to override the File and/or Job retention
period in either the Job, Schedule, or FileSet resource? The reason I ask is
that we have a couple of filesets that are extremely large, but only on a
handful of clients. We don't have the space to restore
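For context, File Retention and Job Retention are directives of the Client resource (along with AutoPrune), not of the Job, Schedule, or FileSet resources. A sketch of where they live (names and values illustrative only):

```conf
Client {
  Name = "bigfileset-client"       # hypothetical name
  Address = client.example.com     # hypothetical address
  FDPort = 9102
  Password = "fd-password"
  Catalog = MyCatalog
  AutoPrune = yes
  File Retention = 14 days         # prune file records from the catalog sooner
  Job Retention = 6 months         # job records kept longer than file records
}
```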
Greetings!
I'm hopeful that someone can give us a little help. We currently run Bacula
2.4.4. We have backups run on two storage devices, and each device has its own
spool file directory. We've created a script that will check the spool
directory at the start of each job to make sure that it's
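A pre-job spool check of the kind described could be sketched as below. This is not the poster's script; the paths, threshold, and RunBeforeJob hookup are assumptions for illustration. A RunBeforeJob command that exits nonzero causes Bacula to fail the job before any data is written.

```shell
#!/bin/sh
# Hypothetical RunBeforeJob helper: succeed only if the spool directory
# has enough free space, so the job can be aborted before spooling starts.

# check_spool DIR NEED_MB -> exit status 0 iff DIR has >= NEED_MB MB free
check_spool() {
    # df -Pk prints POSIX-format output in 1 KB blocks; field 4 is "Available".
    avail_mb=$(df -Pk "$1" | awk 'NR==2 {print int($4/1024)}')
    [ "${avail_mb:-0}" -ge "$2" ]
}

# In bacula-dir.conf this might be wired up per device as:
#   RunBeforeJob = "/etc/bacula/check_spool.sh /data/spool1 50000"
check_spool "${SPOOL_DIR:-/tmp}" "${NEED_MB:-1}"
```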