Hello Chris,

On Thu, Sep 15, 2022 at 4:30 PM Chris Wilkinson <winstonia...@gmail.com>
wrote:

> Thanks for that advice.
>
> I currently have an admin job script#1 as below following some earlier
> advice on this list from Andrea Venturoli (
> https://sourceforge.net/p/bacula/mailman/message/37680362/). This runs
> daily when all other jobs are expected to have completed.
>
> #1
> #!/bin/bash
> #clean up the cached directories
> for pool in docs archive; do
>   for level in full diff incr; do
>     echo "cloud prune AllFromPool Pool=$pool-$level" | bconsole
>     echo "Pruning $pool-$level"
>   done
> done
>
> This cleans up the cache but from what you say it won’t clean up the cloud
> and that appears to be the case.
>
> Should I add another line in there to ‘cloud truncate…’ as well?
>

Please note that all "cloud <something>" commands are related to the local
cache. These commands will not touch the volumes in the remote cloud.

To prune/truncate the volumes in the remote cloud, you need to use the
"prune" and "truncate" commands, just as you would for normal disk volumes.

The above script will prune the local cache based on the local cache
retention. If you also wish to prune the cloud volumes, based on the Volume
Retention value, you need a "prune" command:

#1
#!/bin/bash
#clean up the cloud volumes
for pool in docs archive; do
  for level in full diff incr; do
    echo "prune AllFromPool Pool=$pool-$level" | bconsole
    echo "Pruning $pool-$level"
  done
done

This is because cloud volumes have two retention periods: the local cache
retention and the volume retention.
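
For reference, both retention periods are set in the Pool resource of each
cloud pool. A minimal sketch, where the pool name and the retention values
are only illustrative:

Pool {
  Name = docs-full                 # illustrative pool name
  Pool Type = Backup
  Volume Retention = 30 days       # used by "prune" (catalog/cloud volumes)
  Cache Retention = 3 days         # used by "cloud prune" (local cache)
}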

Both the "cloud truncate" and the "truncate" commands require the storage
associated with the volume as it will physically touch the volumes (the
prune commands only modify Catalog data). Thus, you will need to have a
script that issues the "cloud truncate"/truncate commands for each
pool/storage individually, for example:

1) to truncate volumes in the local cache:
#!/bin/bash
#clean up the cached directories
for level in full diff incr; do
  echo "cloud truncate Pool=pool1-$level Storage=<storage_used_by_pool1>" | bconsole
  echo "Truncating pool1-$level"
  echo "cloud truncate Pool=pool2-$level Storage=<storage_used_by_pool2>" | bconsole
  echo "Truncating pool2-$level"
done

2) to truncate volumes in the cloud:
#!/bin/bash
#clean up the cloud volumes
for level in full diff incr; do
  echo "truncate Pool=pool1-$level Storage=<storage_used_by_pool1>" | bconsole
  echo "Truncating pool1-$level in the cloud"
  echo "truncate Pool=pool2-$level Storage=<storage_used_by_pool2>" | bconsole
  echo "Truncating pool2-$level in the cloud"
done

You can add those scripts to the same admin job, but run them after the one
that prunes the volumes from both the local cache and the cloud.
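
A minimal sketch of such an admin job, assuming the prune and truncate
commands above are combined into one wrapper script (the job name, schedule
name, and script path below are only illustrative):

Job {
  Name = "CloudCleanup"
  Type = Admin
  JobDefs = "DefaultJob"          # Admin jobs ignore most backup directives
  Schedule = "AfterAllBackups"    # schedule it after all backups finish
  RunScript {
    RunsWhen = Before
    RunsOnClient = no
    # prune first (cache and cloud), then truncate
    Command = "/opt/bacula/scripts/cloud-cleanup.sh"
  }
}
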
Hope it helps!

Best,
Ana


> On 15 Sep 2022, at 07:45, Ana Emília M. Arruda <emiliaarr...@gmail.com>
> wrote:
>
> Hello Chris,
>
> When dealing with cloud storages, your volumes will be in a local cache
> and in the remote cloud storage.
>
> To clean the local cache, you must use the "cloud prune" and the "cloud
> truncate" commands. Having "Truncate Cache=AfterUpload" in the cloud
> resource will guarantee that the part file is deleted from the local cache
> after and only after it is correctly uploaded to the remote cloud. Because
> it may happen that a part file cannot be uploaded due to, for example,
> connection issues, you should create an admin job to frequently run both
> the "cloud prune" and the "cloud truncate" commands.
>
> Then, to guarantee the volumes in the remote cloud are cleaned, you need
> both the "prune volumes" and the "truncate volumes" commands (the last one
> will delete the data in the volume and reduce the volume file to its label
> only).
>
> Please note the prune command will respect the retention periods you have
> defined for the volumes, but the purge command doesn't. Thus, I wouldn't
> use the purge command to avoid data loss.
>
> Best regards,
> Ana
>
> On Wed, Sep 14, 2022 at 6:22 PM Chris Wilkinson <winstonia...@gmail.com>
> wrote:
>
>> I'm backing up to cloud storage (B2). This is working fine but I'm not
>> clear on whether volumes on B2 storage are truncated (i.e. storage
>> recovered) when a volume is purged by the normal pool expiry settings. I've
>> set run after jobs in the daily catalog backup to truncate volumes on purge
>> for each of my pools, e.g.:
>>
>> ...
>> Runscript {
>>   When = "After"
>>   RunsOnClient = no
>>   Console = "purge volume action=truncate pool=docs-full storage=cloud-sd"
>>  }
>> ...
>>
>> The local cache is being cleared but I think this is because I set the
>> option "Truncate Cache=AfterUpload" in the cloud resource to empty the
>> local cache after each part is uploaded.
>>
>> I'd like of course that storage (and cost) doesn't keep growing out of
>> control and wonder if there is a config option(s) to ensure this doesn't
>> happen.
>>
>> Any help or advice on this would be much appreciated.
>>
>> Thanks
>> Chris Wilkinson
>>
>
>