Yes, that's the kind of thing I'm looking for, for the first part of the 
optimization I want to achieve. I would have preferred something that wipes 
the volume clean (a RAZ, i.e. a reset/truncate) instead of removing it, but 
that could be a solution.

Are you using auto-labeling to recreate the deleted volumes on demand, or does 
your script handle that itself? With a bconsole command script maybe, or a bash 
script? I have been asking myself whether I should query the catalog database 
with SQL and then delete the volumes accordingly, or whether bconsole can do it 
with its built-in commands.
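
For example, I was imagining something along these lines (completely untested, 
and it assumes a PostgreSQL catalog named "bacula"; the volume selection and 
credentials would need adapting):

#!/bin/bash
# Untested sketch: find volumes that no longer have any job attached
# in the catalog, then let bconsole remove the catalog entry.
# Assumes a PostgreSQL catalog named "bacula"; adapt for MySQL.

orphans=$(psql -At -U bacula -d bacula -c "
    SELECT m.VolumeName
      FROM Media m
      LEFT JOIN JobMedia jm ON jm.MediaId = m.MediaId
     WHERE jm.MediaId IS NULL;")

for vol in $orphans; do
    # 'delete volume' only removes the catalog record; the file on
    # disk would still have to be removed (or truncated) afterwards.
    echo "delete volume=$vol yes" | bconsole
done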



But I'd be happy to adapt it to my configuration.

My email address is open for sharing if you want (zip, gz, or even WeTransfer).



Great, thanks,

Lionel



From: Chris Wilkinson <winstonia...@gmail.com>
Sent: Wednesday, 29 November 2023 13:03
To: Lionel PLASSE <pla...@cofiem.fr>
Cc: bacula-users <bacula-users@lists.sourceforge.net>
Subject: Re: [Bacula-users] Migration Job - Volume data deletion



I have a script that deletes physical disc and cloud volumes that are not 
present in the catalog, perhaps because a job was manually deleted or migrated. 
Is that what you want to achieve?



It's part of a collection of Bacula scripts that I use. It's too big to post 
here but I'd be happy to share it. It's somewhat customised to my setup so 
you'd likely need to modify it for your own purposes.
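
The gist of it is something like this (a much simplified, untested sketch; the 
real script also handles my cloud volumes, and the storage path here is 
hypothetical):

#!/bin/bash
# Simplified sketch of the idea: delete disk volume files that the
# Director no longer knows about. The path below is hypothetical.

VOLDIR=/bacula/volumes

# Collect every volume name present in the catalog. Data rows of
# 'list volumes' are pipe-separated, with the name in column 3.
known=$(echo "list volumes" | bconsole |
        awk -F'|' '$2 ~ /^[ ]*[0-9]+[ ]*$/ {gsub(/ /,"",$3); print $3}')

for f in "$VOLDIR"/*; do
    vol=$(basename "$f")
    if ! grep -qx "$vol" <<<"$known"; then
        echo "Deleting orphan volume file: $vol"
        rm -f "$f"
    fi
done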

-Chris-



On Wed, 29 Nov 2023, 11:46 Lionel PLASSE, <pla...@cofiem.fr> wrote:

   Hello,

   A question regarding migration jobs and volume cleaning:

   For a migration job, the old jobs migrated from the source volume to the 
next pool's volume are deleted from the catalog, but the migrated volume file 
still contains the data (I use File volumes on disk), so the amount of data on 
disk is doubled. (The catalog itself is cleaned correctly.)
   The volume might be cleaned by a future scheduled job if it passes from 
"Used" to "Append" according to the retention periods.

   Is there a simple way to delete that data when the volume has been used only 
once or contains only the migrated job's data? Effectively, after the migration 
there is no longer any cataloged job for this volume, but the volume still 
physically contains the data.
   Is it possible to clean the migrated volume at the end of the migration 
(like a backup job does prior to the backup operation when a volume passes from 
"Used" to "Append"), so that there is not twice as much physical data?

   Should I use a bconsole script in an after-run script?
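
   What I picture is something like this as a "RunScript ... RunsWhen = After" 
of the migration job (untested, and it assumes a Bacula version with the 
standalone truncate command, i.e. 9.0 or later; the volume, storage and pool 
names are made up):

#!/bin/bash
# Untested sketch for a RunsWhen=After script of the migration job:
# purge the source volume now that its jobs have been migrated away,
# then truncate the volume file so the disk space is freed at once.
# Volume, storage and pool names below are hypothetical.

VOL=Vol-Full-0001

bconsole <<EOF
purge volume=$VOL
truncate volstatus=Purged volume=$VOL storage=File1 pool=Full-Pool
quit
EOF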



   In a similar way: when a job ends in a fatal error, for whatever cause, the 
volume's "VolJobs" counter has already been incremented, so when the job is 
rescheduled (or manually re-run) MaxVolJobs can be reached, and that can block 
a future scheduled backup for the sake of one wasted volume-job slot. How can I 
decrease the job count after a fatal error, with a bconsole script or a bash 
script, so that the volume does not end up in the "Used" state? I don't want to 
keep increasing MaxVolJobs like I do now, because then I have to remember I did 
it and decrease MaxVolJobs again some days later.
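
   For instance, something like this (untested; as far as I can tell "update 
volume" cannot change VolJobs directly, so the counter would have to be fixed 
with direct SQL, here assuming a PostgreSQL catalog named "bacula"):

#!/bin/bash
# Untested sketch: after a fatal job error, give the volume its job
# slot back and make it writable again. $1 is the volume name.
# Assumes a PostgreSQL catalog named "bacula".

VOL=$1

# VolJobs does not seem to be settable from bconsole, so fix it in SQL.
psql -U bacula -d bacula -c \
    "UPDATE Media SET VolJobs = VolJobs - 1 WHERE VolumeName = '$VOL';"

# Put the volume back into Append if the failed job left it in Used.
echo "update volume=$VOL volstatus=Append" | bconsole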

   I hope someone understands what I mean.


_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
