Re: [Bacula-users] Volume not being marked as Error in Catalog after S3/RGW timeout

2024-06-14 Thread Ana Emília M. Arruda
Hello Martin,

Glad to hear that!
Thanks a lot for the feedback. :-)
You are very welcome!

Best regards,
Ana

On Fri, Jun 14, 2024 at 10:26 AM Martin Reissner wrote:

> Hello Ana,
>
> I wanted to report back that after almost two weeks on Bacula 15.0.2 using
> the "Amazon" driver, the situation has improved a lot. We have seen no more
> backups failing because of timeouts. I had some errors with copy jobs that
> read from S3/RGW and write to file storage, but they might have been caused
> by something else, and it was only a few errors. Thank you for the
> heads-up with the "Amazon" driver!
>
> All the best,
>
> Martin
>
> On 16.05.24 17:58, Ana Emília M. Arruda wrote:
> > [...]

Re: [Bacula-users] Volume not being marked as Error in Catalog after S3/RGW timeout

2024-06-14 Thread Martin Reissner

Hello Ana,

I wanted to report back that after almost two weeks on Bacula 15.0.2 using the
"Amazon" driver, the situation has improved a lot. We have seen no more backups
failing because of timeouts. I had some errors with copy jobs that read from
S3/RGW and write to file storage, but they might have been caused by something
else, and it was only a few errors. Thank you for the heads-up with the
"Amazon" driver!

All the best,

Martin

On 16.05.24 17:58, Ana Emília M. Arruda wrote:

[...]

Re: [Bacula-users] Volume not being marked as Error in Catalog after S3/RGW timeout

2024-05-16 Thread Ana Emília M. Arruda
Hello Martin,

Yes, the Amazon driver will help with the timeout issues. And we have been
improving the Amazon driver continuously. Thus, I would move to this one.
If you still see this issue related to the volume not marked as error, then
we should investigate it.

Best,
Ana

On Wed, May 15, 2024 at 12:04 PM Martin Reissner wrote:

> Hello Ana,
>
> thank you for the heads-up. An upgrade to one of the more recent versions
> has been in my backlog for a while now; maybe this will get me some time to
> actually get it done.
> I'd still like to know whether what I am seeing with the volume not being
> marked Error is an actual bug or something on my end, but if the "Amazon"
> driver helps with the effects of the timeouts I'll gladly take it.
>
> Regards,
>
> Martin
>
> On 14.05.24 20:53, Ana Emília M. Arruda wrote:
> > [...]

Re: [Bacula-users] Volume not being marked as Error in Catalog after S3/RGW timeout

2024-05-15 Thread Martin Reissner

Hello Ana,

thank you for the heads-up. An upgrade to one of the more recent versions has
been in my backlog for a while now; maybe this will get me some time to
actually get it done.
I'd still like to know whether what I am seeing with the volume not being marked
Error is an actual bug or something on my end, but if the "Amazon" driver helps
with the effects of the timeouts I'll gladly take it.

Regards,

Martin

On 14.05.24 20:53, Ana Emília M. Arruda wrote:

[...]

Re: [Bacula-users] Volume not being marked as Error in Catalog after S3/RGW timeout

2024-05-14 Thread Ana Emília M. Arruda
Hello Martin,

Do you think you can upgrade to 15.0.X? I would recommend you to use the
"Amazon" driver, instead of the "S3" driver. You can simply change the
"Driver" in the cloud resource, and restart the SD. I'm not sure the Amazon
driver is available in 13.0.2, but you can have a try.

The Amazon driver is much more stable to such timeout issues.
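
For reference, the change is a one-directive edit in the Storage Daemon's Cloud resource; a minimal sketch with placeholder names and credentials (only `Driver = "Amazon"` is the actual change being suggested here, everything else is illustrative):

```
Cloud {
  Name = myrgw-cloud            # hypothetical resource name
  Driver = "Amazon"             # was: Driver = "S3"
  HostName = "myrgw"            # placeholder RGW endpoint
  BucketName = "mystorage"
  AccessKey = "xxxxxxxx"        # placeholder credentials
  SecretKey = "yyyyyyyy"
  Protocol = HTTPS
  UriStyle = Path               # RGW is commonly served with path-style URIs
}
```

Then restart the SD so the new driver takes effect.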

Best regards,
Ana

On Mon, May 6, 2024 at 8:53 AM Martin Reissner wrote:

> [...]

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Volume not being marked as Error in Catalog after S3/RGW timeout

2024-05-06 Thread Martin Reissner

Hello,

by now I am mostly using our Ceph RGW with the S3 driver as storage and this 
works just fine but time and again requests towards the RGW time out.
This is of course our business and not Bacula's but due to a behaviour I can't 
understand this causes us more trouble than it should.

When one of these errors happens it looks like this in the logs:


04-Mai 02:32 mybackup-sd JobId 968544: Error:  S3_delete_object 
ERR=RequestTimeout CURL Effective URL: 
https://myrgw/mystorage/myvolume-25809/part.10 CURL OS Error: 101 CURL 
Effective URL: https://myrgw/mystorage/myvolume/part.10 CURL OS Error: 101
04-Mai 02:32 mybackup-sd JobId 968544: Fatal error: label.c:575 Truncate error on Cloud 
device "mydevice" (/opt/bacula/cloudcache): ERR= S3_delete_object 
ERR=RequestTimeout CURL Effective URL: https://myrgw/mystorage/myvolume/part.10 CURL OS 
Error: 101 CURL Effective URL: https://myrgw/mystorage/myvolume/part.10 CURL OS Error: 101
04-Mai 02:32 mybackup-sd JobId 968544: Marking Volume "myvolume" in Error in 
Catalog.
04-Mai 02:32 mybackup-sd JobId 968544: Fatal error: Job 968544 canceled.
04-Mai 02:32 mybackup-dir JobId 968544: Error: Bacula Enterprise wc-backup2-dir 
13.0.2 (18Feb23):


However when I check the Volume status in the Catalog I see:


*list volume=myvolume
+---------+------------+-----------+---------+----------+----------+--------------+---------+------+-----------+-----------+---------+----------+-----------+
| MediaId | VolumeName | VolStatus | Enabled | VolBytes | VolFiles | VolRetention | Recycle | Slot | InChanger | MediaType | VolType | VolParts | ExpiresIn |
+---------+------------+-----------+---------+----------+----------+--------------+---------+------+-----------+-----------+---------+----------+-----------+
|  25,809 | myvolume   | Recycle   |       1 |        1 |        0 |      691,200 |       1 |    0 |         0 | CloudType |      14 |       12 |         0 |
+---------+------------+-----------+---------+----------+----------+--------------+---------+------+-----------+-----------+---------+----------+-----------+
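
The same check can be run directly against the catalog database; a sketch assuming the standard Bacula `Media` table (column names as in the default catalog schema):

```sql
-- Show the catalog status of the affected volume
SELECT MediaId, VolumeName, VolStatus
FROM Media
WHERE VolumeName = 'myvolume';
```

A volume the SD has given up on should show VolStatus 'Error' here rather than 'Recycle'.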


The VolStatus "Recycle" causes the Volume to be used for subsequent Jobs, which 
then all fail with errors like this:


05-Mai 02:31 mybackup-sd JobId 968789: Fatal error: cloud_dev.c:1322 Unable to download 
Volume="myvolume" label.  S3_get_object ERR=NoSuchKey CURL Effective URL: 
https://myrgw/mystorage/myvolume/part.1 CURL OS Error: 101 CURL Effective URL: 
https://myrgw/mystorage/myvolume/part.1 CURL OS Error: 101  BucketName : mystorage 
RequestId : xxx-default HostId : yyy-default
05-Mai 02:31 mybackup-sd JobId 968789: Marking Volume "myvolume" in Error in 
Catalog.


Am I wrong in expecting the Volume to actually be in VolStatus "Error" in the 
Catalog so other Jobs will not try to use it?

I would be grateful for any help, as this is causing all backups using this
storage to fail once one of the requests to the RGW times out, until I manually
mark the Volume as Error or truncate the cloudcache for the Volume.
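
(The manual workaround described above can be issued from bconsole; a sketch with placeholder volume/storage names. `update volume=... volstatus=Error` is a standard bconsole command; recent releases also expose a `cloud truncate` command for the local cache, though its availability may depend on the exact version:)

```
*update volume=myvolume volstatus=Error
*cloud truncate volume=myvolume storage=mystorage
```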

Regards,

Martin


--
Wavecon GmbH

Anschrift:      Thomas-Mann-Straße 16-20, 90471 Nürnberg
Website:        www.wavecon.de
Support:        supp...@wavecon.de

Telefon:        +49 (0)911-1206581 (werktags von 9 - 17 Uhr)
Hotline 24/7:   0800-WAVECON
Fax:            +49 (0)911-2129233

Registernummer: HBR Nürnberg 41590
GF:             Cemil Degirmenci
UstID:          DE251398082

