Re: [Bacula-users] VirtualFull and correct volume management...

2024-05-15 Thread Marco Gaiarin
Greetings, Josh Fisher via Bacula-users!
  In that day's message you wrote...

> I use the following in my query.sql file:

OK, thanks! Now I can verify that:

*list media pool=FVG-PP-HFA3-1FilePool
+---------+------------+-----------+---------+-------------+----------+--------------+---------+------+-----------+-----------+---------+----------+---------------------+------------+
| mediaid | volumename | volstatus | enabled | volbytes    | volfiles | volretention | recycle | slot | inchanger | mediatype | voltype | volparts | lastwritten         | expiresin  |
+---------+------------+-----------+---------+-------------+----------+--------------+---------+------+-----------+-----------+---------+----------+---------------------+------------+
|     311 | HFA3-1001  | Used      |       1 | 671,649,105 |        0 |   31,536,000 |       1 |    0 |         0 | File      |       1 |        0 | 2024-03-11 13:32:47 | 25,917,437 |
|     312 | HFA3-1002  | Used      |       1 | 572,824,127 |        0 |   31,536,000 |       1 |    0 |         0 | File      |       1 |        0 | 2024-04-19 12:02:12 | 29,278,002 |
|     313 | HFA3-1003  | Used      |       1 | 549,268,745 |        0 |   31,536,000 |       1 |    0 |         0 | File      |       1 |        0 | 2024-04-05 12:31:41 | 28,070,171 |
+---------+------------+-----------+---------+-------------+----------+--------------+---------+------+-----------+-----------+---------+----------+---------------------+------------+

So all three volumes are 'Used', but:

*query
Available queries:
 1: The default file is empty, see sample-query.sql (in /opt/bacula/scripts or /examples) for samples
 2: List Volumes Bacula thinks are in changer
 3: List Jobs stored for a given Volume name
Choose a query (1-3): 3
Enter Volume name: HFA3-1001
+--------+---------------+---------------------+------+-------+-------+-------------+--------+
| jobid  | name          | starttime           | type | level | files | bytes       | status |
+--------+---------------+---------------------+------+-------+-------+-------------+--------+
| 17,871 | FVG-PP-HFA3-1 | 2024-03-04 08:30:03 | B    | I     |    26 |  91,105,874 | T      |
| 17,951 | FVG-PP-HFA3-1 | 2024-03-06 11:00:37 | B    | I     |    26 |  91,143,228 | T      |
| 18,031 | FVG-PP-HFA3-1 | 2024-03-08 09:30:18 | B    | I     |    26 |  91,219,430 | T      |
| 18,146 | FVG-PP-HFA3-1 | 2024-03-11 13:31:19 | B    | I     |   180 | 103,719,272 | T      |
+--------+---------------+---------------------+------+-------+-------+-------------+--------+
*query
Available queries:
 1: The default file is empty, see sample-query.sql (in /opt/bacula/scripts or /examples) for samples
 2: List Volumes Bacula thinks are in changer
 3: List Jobs stored for a given Volume name
Choose a query (1-3): 3
Enter Volume name: HFA3-1002
+--------+---------------+---------------------+------+-------+-------+-------------+--------+
| jobid  | name          | starttime           | type | level | files | bytes       | status |
+--------+---------------+---------------------+------+-------+-------+-------------+--------+
| 19,251 | FVG-PP-HFA3-1 | 2024-03-01 12:00:46 | B    | F     |   220 | 202,599,865 | T      |
| 19,380 | FVG-PP-HFA3-1 | 2024-04-10 10:30:29 | B    | I     |    32 |  92,476,120 | T      |
| 19,674 | FVG-PP-HFA3-1 | 2024-04-17 12:32:09 | B    | I     |    43 |  94,865,312 | T      |
| 19,717 | FVG-PP-HFA3-1 | 2024-04-18 14:31:21 | B    | I     |    26 |  91,221,368 | T      |
| 19,761 | FVG-PP-HFA3-1 | 2024-04-19 12:00:46 | B    | I     |    26 |  91,201,318 | T      |
+--------+---------------+---------------------+------+-------+-------+-------------+--------+
*query
Available queries:
 1: The default file is empty, see sample-query.sql (in /opt/bacula/scripts or /examples) for samples
 2: List Volumes Bacula thinks are in changer
 3: List Jobs stored for a given Volume name
Choose a query (1-3): 3
Enter Volume name: HFA3-1003
+--------+---------------+---------------------+------+-------+-------+------------+--------+
| jobid  | name          | starttime           | type | level | files | bytes      | status |
+--------+---------------+---------------------+------+-------+-------+------------+--------+
| 18,271 | FVG-PP-HFA3-1 | 2024-03-14 09:30:13 | B    | I     |    56 | 95,033,470 | T      |
| 18,312 | FVG-PP-HFA3-1 | 2024-03-15 12:30:56 | B    | I     |    26 | 91,460,402 | T      |
| 19,053 | FVG-PP-HFA3-1 | 2024-04-02 17:58:02 | B    | I     |   106 | 94,924,477 | T      |
| 19,086 | FVG-PP-HFA3-1 | 2024-04-03 09:00:07 | B    | I     |    26 | 91,090,544 | T      |
| 19,129 | FVG-PP-HFA3-1 | 2024-04-04 12:30:51 | B    | I     |    23 | 85,044,862 | T      |
| 19,172 | FVG-PP-HFA3-1 | 2024-04-05 12:30:55 | B    | I     |    26 | 91,252,044 | T      |
+--------+---------------+---------------------+------+-------+-------+------------+--------+

Only volume 3 has 6 jobs on it, while volumes 1 and 2 have 4 and 5
respectively, so they are not really ...
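
(For anyone following along: the query used above comes from an entry of
roughly this shape in /opt/bacula/scripts/query.sql. This is modeled on the
stock sample-query.sql and is not necessarily the exact query Josh posted:

  :List Jobs stored for a given Volume name
  *Enter Volume name:
  SELECT DISTINCT Job.JobId AS jobid, Job.Name AS name, Job.StartTime AS starttime,
         Job.Type AS type, Job.Level AS level, Job.JobFiles AS files,
         Job.JobBytes AS bytes, Job.JobStatus AS status
    FROM Media, JobMedia, Job
   WHERE Media.VolumeName = '%1'
     AND Media.MediaId = JobMedia.MediaId
     AND JobMedia.JobId = Job.JobId
   ORDER BY Job.StartTime;

The line starting with ":" is the menu text, the "*" line is the prompt, and
%1 is replaced by the entered Volume name.)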

Re: [Bacula-users] HP 1/8 G2 Autoloader

2024-05-15 Thread Rob Gerber
Stefan, are your Bacula catalog backups being made to a disk volume, as is the
default, or to a tape volume? If they are being made to a disk volume, you could
restore a catalog backup. If your catalog backups were being made to that same
machine whose backups were purged, and you lost the database entries for the
catalog backups, you can still run bscan against the catalog backup volumes to
do a restore.

By default, a database restore will purge all other entries in the database. So
if you're going to restore the Bacula catalog, you'll need to do it sooner rather
than later, OR maybe try some hybrid method where you merge entries back in
later, but I have no knowledge of how to do that and I don't know how dangerous
or foolish such a thing may be.

I assume right now you aren't doing many (or any) backups because your tape
drive is down, except for the backups made to disk that you're trying to
purge/prune because you accidentally backed up data that's too big for your
disk storage. However, once you start running more backups you will accumulate
more catalog data you don't want to lose.

If catalog entries for volumes other than the disk volumes on your storage
were lost (backups made to tape via that client), your system might attempt
to automatically reuse tape volumes once your tape system is back online,
thereby overwriting backups you'd want to keep. Maybe watch out for that.
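
(One way to guard against that, sketched with a hypothetical tape volume name:
mark the affected tapes so Bacula won't write to them until the catalog is
sorted out, e.g.:

  *update volume=LTO-0042 volstatus=Read-Only

or alternatively "update volume=LTO-0042 enabled=no"; either should keep Bacula
from recycling that tape in the meantime.)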

The data on your tape volumes is obviously still available, and in the worst
case you could do a bscan operation against those volumes once the tape drive is
operational again. You'd just have to be sure any impacted tapes aren't
automatically overwritten. A bscan operation would definitely take a while,
depending on how many tapes are impacted (if any).
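
(A sketch of that kind of bscan run, with placeholder volume and device names
and the usual default config path; -s re-creates the file records, -m updates
the media records, and the last argument is the Device name from bacula-sd.conf:

  bscan -v -s -m -c /opt/bacula/etc/bacula-sd.conf -V "LTO-0042|LTO-0043" Drive-1
)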

A catalog restore is almost certainly what you want to do, presuming there
aren't a bunch of new catalog entries made after the accidental purge event.
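
(The usual shape of that restore, assuming the stock BackupCatalog job that
dumps the catalog to bacula.sql via make_catalog_backup.pl; the paths, service
names and the PostgreSQL example below are assumptions, and depending on how the
dump was made you may need to drop and re-create the bacula database first:

  # stop the Director before touching the catalog
  systemctl stop bacula-dir
  # restore /opt/bacula/working/bacula.sql from the BackupCatalog job
  # (via a normal bconsole restore, or bextract/bscan if its records are gone),
  # then load it back into the catalog database:
  psql -U bacula bacula < /opt/bacula/working/bacula.sql
  systemctl start bacula-dir
)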

To manually remove the disk volumes, and the associated job and file entries,
do a delete operation. That's easily done in Bacularis or Baculum (I haven't
done it in bconsole, though I'm sure it's doable there too). I would have the
catalog backup restored BEFORE performing the delete operation, and I would be
very careful to select the correct volumes to delete.
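
(In bconsole that would look roughly like this, with a placeholder volume name:

  *delete volume=Vol-0001 yes

Note the delete only removes the catalog records; the volume file under the
device's Archive Device directory still has to be removed by hand afterwards,
e.g. rm /srv/bacula/volumes/Vol-0001 with your actual path, which matches what
you saw in Bacularis.)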

Running to a meeting, will check back later.

Robert Gerber
402-237-8692
r...@craeon.net

On Wed, May 15, 2024, 2:17 AM Stefan G. Weichinger  wrote:

> On 08.05.24 at 13:13, Stefan G. Weichinger wrote:
> >
> > I assume the drive has a problem.
>
> we used a new cleaning tape and a new LTO tape; btape still fails
>
> I tend to think this drive is defective
>
> Today IT and the CEOs will decide how to proceed.
>
> For now I created a temporary file-based storage on disk.
>
> As if things weren't dangerous enough right now, I made a mistake:
>
> At first the huge Veeam files got dumped to disk, which I didn't want.
> So I tried to delete jobs or volumes in Bacularis, but that didn't
> remove the files on disk.
>
> I googled and found some hint about "purge" ... now I have purged all jobs
> from my single client, even the good ones :-(
>
> Now I have volumes on disk with no jobs in the DB. I am re-running the valid
> jobs right now, which creates new "virtual tapes", but I have to
> (manually?) remove all those orphaned volumes to free space.
>
>
>
>
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Volume not being marked as Error in Catalog after S3/RGW timeout

2024-05-15 Thread Martin Reissner

Hello Ana,

thank you for the heads up. An upgrade to one of the more recent versions has
been in my backlog for a while now; maybe this will get me the time to actually
get it done.
I'd still like to know whether what I am seeing, with the volume not being marked
Error, is an actual bug or something on my end, but if the "Amazon" driver helps
with the effects of the timeouts I'll gladly take it.

Regards,

Martin

On 14.05.24 20:53, Ana Emília M. Arruda wrote:

Hello Martin,

Do you think you can upgrade to 15.0.X? I would recommend using the "Amazon"
driver instead of the "S3" driver. You can simply change the "Driver" in the
Cloud resource and restart the SD. I'm not sure the Amazon driver is available
in 13.0.2, but you can give it a try.

The Amazon driver is much more stable to such timeout issues.
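
(For reference, a minimal sketch of that change in the SD's Cloud resource; the
names and credentials are placeholders and the exact directive set varies
between versions:

  Cloud {
    Name = RGW-Cloud
    Driver = "Amazon"          # previously: Driver = "S3"
    HostName = "myrgw"
    BucketName = "mystorage"
    AccessKey = "xxxxxxxx"
    SecretKey = "yyyyyyyy"
    Protocol = HTTPS
    UriStyle = Path
  }

followed by a restart of the SD, e.g. systemctl restart bacula-sd.)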

Best regards,
Ana

On Mon, May 6, 2024 at 8:53 AM Martin Reissner <mreiss...@wavecon.de> wrote:

Hello,

by now I am mostly using our Ceph RGW with the S3 driver as storage, and this
works just fine, but time and again requests towards the RGW time out.
This is of course our business and not Bacula's, but due to a behaviour I can't
understand, it causes us more trouble than it should.

When one of these errors happens it looks like this in the logs:


04-Mai 02:32 mybackup-sd JobId 968544: Error:  S3_delete_object ERR=RequestTimeout
  CURL Effective URL: https://myrgw/mystorage/myvolume-25809/part.10 CURL OS Error: 101
  CURL Effective URL: https://myrgw/mystorage/myvolume/part.10 CURL OS Error: 101
04-Mai 02:32 mybackup-sd JobId 968544: Fatal error: label.c:575 Truncate error on Cloud device
  "mydevice" (/opt/bacula/cloudcache): ERR= S3_delete_object ERR=RequestTimeout
  CURL Effective URL: https://myrgw/mystorage/myvolume/part.10 CURL OS Error: 101
  CURL Effective URL: https://myrgw/mystorage/myvolume/part.10 CURL OS Error: 101
04-Mai 02:32 mybackup-sd JobId 968544: Marking Volume "myvolume" in Error in Catalog.
04-Mai 02:32 mybackup-sd JobId 968544: Fatal error: Job 968544 canceled.
04-Mai 02:32 mybackup-dir JobId 968544: Error: Bacula Enterprise wc-backup2-dir 13.0.2 (18Feb23):


However when I check the Volume status in the Catalog I see:


*list volume=myvolume

+---------+------------+-----------+---------+----------+----------+--------------+---------+------+-----------+-----------+---------+----------+-----------+
| MediaId | VolumeName | VolStatus | Enabled | VolBytes | VolFiles | VolRetention | Recycle | Slot | InChanger | MediaType | VolType | VolParts | ExpiresIn |
+---------+------------+-----------+---------+----------+----------+--------------+---------+------+-----------+-----------+---------+----------+-----------+
|  25,809 | myvolume   | Recycle   |       1 |        1 |        0 |      691,200 |       1 |    0 |         0 | CloudType |      14 |       12 |         0 |
+---------+------------+-----------+---------+----------+----------+--------------+---------+------+-----------+-----------+---------+----------+-----------+


The VolStatus "Recycle" causes the Volume being used for subsequent Jobs 
which then all fail with errors like this:


05-Mai 02:31 mybackup-sd JobId 968789: Fatal error: cloud_dev.c:1322 Unable to download Volume="myvolume" label.
  S3_get_object ERR=NoSuchKey CURL Effective URL: https://myrgw/mystorage/myvolume/part.1 CURL OS Error: 101
  CURL Effective URL: https://myrgw/mystorage/myvolume/part.1 CURL OS Error: 101
  BucketName : mystorage RequestId : xxx-default HostId : yyy-default
05-Mai 02:31 mybackup-sd JobId 968789: Marking Volume "myvolume" in Error in Catalog.


Am I wrong in expecting the Volume to actually be in VolStatus "Error" in the
Catalog, so other Jobs will not try to use it?

I would be grateful for any help, as once one of the requests to the RGW times
out, this causes all backups using this storage to fail until I manually mark
the Volume as Error or truncate the cloudcache for the Volume.

Regards,

Martin


-- 
Wavecon GmbH


Address:        Thomas-Mann-Straße 16-20, 90471 Nürnberg
Website:        www.wavecon.de
Support:        supp...@wavecon.de

Phone:          +49 (0)911-1206581 (weekdays 9 am - 5 pm)
Hotline 24/7:   0800-WAVECON
Fax:            +49 (0)911-2129233

Registration number: HBR Nürnberg 41590
Managing director:   Cemil Degirmenci
VAT ID:              DE251398082


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users

Re: [Bacula-users] HP 1/8 G2 Autoloader

2024-05-15 Thread Stefan G. Weichinger

On 08.05.24 at 13:13, Stefan G. Weichinger wrote:


I assume the drive has a problem.


we used a new cleaning tape and a new LTO tape; btape still fails

I tend to think this drive is defective

Today IT and the CEOs will decide how to proceed.

For now I created a temporary file-based storage on disk.

As if things weren't dangerous enough right now, I made a mistake:

At first the huge Veeam files got dumped to disk, which I didn't want.
So I tried to delete jobs or volumes in Bacularis, but that didn't
remove the files on disk.

I googled and found some hint about "purge" ... now I have purged all jobs
from my single client, even the good ones :-(

Now I have volumes on disk with no jobs in the DB. I am re-running the valid
jobs right now, which creates new "virtual tapes", but I have to
(manually?) remove all those orphaned volumes to free space.






___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users