[ceph-users] Re: Hanging request in S3

2024-03-12 Thread Christian Kugler
Hi Casey, Interesting. Especially since the request it hangs on is a GET request. I set the option and restarted the RGW I test with. The POSTs for deleting take a while, but there are no longer any blocking GET or POST requests. Thank you! Best, Christian PS: Sorry for pressing the wrong reply

[ceph-users] Hanging request in S3

2024-03-06 Thread Christian Kugler
Hi, I am having some trouble with some S3 requests and I am at a loss. After upgrading to reef a couple of weeks ago, some requests get stuck and never return. The two Ceph clusters are set up to sync the S3 realm bidirectionally. The bucket currently has 479 shards (from dynamic resharding).
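For a bidirectional multisite setup like this, a first diagnostic step is usually to check sync state and the bucket's index layout with standard `radosgw-admin` commands. A sketch (the bucket name `mybucket` is a placeholder):

```shell
# Overall multisite sync state for this zone (metadata and data sync).
radosgw-admin sync status

# Per-bucket sync state, useful when only one bucket misbehaves.
radosgw-admin bucket sync status --bucket=mybucket

# Bucket stats include the current shard count ("num_shards") as well as
# the bucket id and marker.
radosgw-admin bucket stats --bucket=mybucket
```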

[ceph-users] Re: Not all Bucket Shards being used

2023-08-02 Thread Christian Kugler
> Thank you for the information, Christian. When you reshard, the bucket id is updated (with most recent versions of ceph, a generation number is incremented). The first bucket id matches the bucket marker, but after the first reshard they diverge. This makes a lot of sense and explains
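The divergence described above can be observed directly in the bucket metadata. A sketch (bucket name is a placeholder):

```shell
# The bucket instance metadata shows both fields: "marker" keeps the
# original id, while "bucket_id" changes with each reshard.
radosgw-admin metadata get bucket:mybucket

# "bucket stats" prints the same pair as "marker" and "id"; after at
# least one reshard the two values differ.
radosgw-admin bucket stats --bucket=mybucket
```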

[ceph-users] Re: Not all Bucket Shards being used

2023-07-25 Thread Christian Kugler
Hi Eric, > 1. I recommend that you *not* issue another bucket reshard until you figure out what’s going on. Thanks, noted! > 2. Which version of Ceph are you using? 17.2.5. I wanted to get the cluster to HEALTH_OK before upgrading. I didn't see anything that led me to believe that an upgrade

[ceph-users] Not all Bucket Shards being used

2023-07-18 Thread Christian Kugler
something like 97. Or I could directly "downshard" to 97. Also, the second zone has a similar problem, but as the error message lets me know, this would be a bad idea. Will it just take more time until the sharding is transferred to the second zone? Best, Christian Kugler
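A manual reshard to an explicit shard count would look like the sketch below (bucket name is a placeholder). Note that on the versions discussed in this thread, resharding a bucket in a multisite configuration was not supported, which is presumably what the error message on the second zone warns about:

```shell
# Manually reshard the bucket index to 97 shards. Dynamic resharding only
# ever grows the shard count; an explicit --num-shards can also shrink it
# on versions that permit it.
radosgw-admin bucket reshard --bucket=mybucket --num-shards=97

# List and monitor pending reshard operations.
radosgw-admin reshard list
radosgw-admin reshard status --bucket=mybucket
```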