Hi,

21.11.2007 20:28, Jason Joines wrote:
>    I'm using Bacula 2.2.5 with the director daemon and storage daemon 
> running on Linux and writing to disk, currently all to the same file 
> volume.

I hope you planned to use more than one volume. Bacula never 
overwrites parts of a volume: it either appends to it or, once all 
data in it has been pruned, recycles (re-labels or truncates) the 
file. With only a single volume you are likely to run into serious 
problems.

The configuration options governing this include "Volume Use 
Duration" and "Maximum Volume Bytes".
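For example, a minimal sketch of a Pool resource in bacula-dir.conf 
(the directive names are real; the values and the pool name are just 
illustrations, not taken from your setup):

```
# bacula-dir.conf -- hypothetical values for illustration only
Pool {
  Name = Default
  Pool Type = Backup
  Maximum Volume Bytes = 5G    # close a volume once it reaches 5 GB
  Volume Use Duration = 23h    # stop appending to it after 23 hours
}
```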

>  I started with the tutorial and have been gradually adding 
> client file daemons.  Today I had a backup running of a just added 
> client, the 9th.  Then I added another client and started a job of it 
> thinking it would either run simultaneously or queue and wait to start 
> until the other job finished.  Instead I got the error:
> 
> Error: Bacula cannot write on disk Volume "sivamanual20071108th" 
> because: The sizes do not match.

There's probably a preceding line reporting a difference between the 
VolFiles count in the catalog and the actual volume contents.

This usually indicates that Bacula crashed while writing to this 
volume. As far as I know, it can also happen when running 
simultaneous jobs to one volume; that would be caused by a bug in 
Bacula, though I've never encountered it myself. I suppose that 
enabling spooling prevents this sort of problem, and I always use 
spooling...

>      Now bacula is waiting on me to label a new volume for the second job:
> 
> 21-Nov 13:07 siva-sd JobId 86: Job osx11daily.2007-11-21_13.07.03 
> waiting. Cannot find any appendable volumes.
> Please use the "label"  command to create a new Volume for:
>      Storage:      "FileStorage" (/backup)
>      Pool:         Default
>      Media type:   File
> 
>      There are no messages concerning the first job but it must've 
> stopped as the size of the volume "sivamanual20071108th" is not 
> increasing.  What should I have done to avoid the problem?

Don't run simultaneous jobs to one volume without spooling, I guess. 
For disk-based setups, it's easy to create additional storage devices 
to allow parallel job execution... this also gives you the extra 
benefit of keeping each job's data together, which can speed up 
restores and may slow catalog growth a bit.
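Sketched in bacula-sd.conf, that could look like the following (the 
device names, media types, and paths are invented for illustration; 
each job is then directed to its own device via its Storage resource 
in the director):

```
# bacula-sd.conf -- two independent file devices (hypothetical names)
Device {
  Name = FileStorage1
  Media Type = File1
  Archive Device = /backup/dev1
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}
Device {
  Name = FileStorage2
  Media Type = File2
  Archive Device = /backup/dev2
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}
```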

>  How do I fix 
> the existing problem,

Usually, you don't. Just mark the volume as "Used" and cross your 
fingers... to actually fix the difference between volume and catalog, 
you would need to know the real volume contents and compare them to 
the catalog. If you're worried about the risk of having unusable 
backups, I can only recommend doing a test restore of the job whose 
data sits at the end of the volume in question.
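Marking the volume as "Used" can be done from bconsole; the volume 
name below is the one from your error message:

```
# in bconsole
update volume=sivamanual20071108th volstatus=Used
```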

> get the first job to continue and get the second 
> job to write to the original volume?

Once you've got a new volume (easily created using the 'label' 
command), things should continue. You should not try to append to the 
original volume, though. Just let Bacula handle the volumes and the 
jobs spanning them; it does that quite reliably.
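For example, in bconsole (the storage and pool names are from your 
log output; the new volume name is just an example):

```
# in bconsole
label storage=FileStorage pool=Default volume=sivamanual20071121
```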

And then implement pools with limited volume sizes, perhaps a limited 
number of volumes, proper retention times, and automatic labeling... 
add several storage devices, a scratch pool, and a large disk array, 
and you'll never have to worry about this again :-)
(But that's probably a project for after you've got the basic setup 
working, have done your test restores, and have developed some 
intuition for how Bacula works.)
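When you get to that point, such a pool might look roughly like this 
(all names and values are illustrative, and you'd define the Scratch 
pool separately):

```
# bacula-dir.conf -- hypothetical "finished" pool
Pool {
  Name = FileDaily
  Pool Type = Backup
  Maximum Volume Bytes = 5G
  Maximum Volumes = 30
  Volume Retention = 14 days
  AutoPrune = yes
  Recycle = yes
  Label Format = "daily-"    # volumes get auto-labeled daily-NNNN
  RecyclePool = Scratch      # recycled volumes go back to the scratch pool
}
```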

Arno

-- 
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de

_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users