We have had PGs get stuck in Quincy (17.2.7). After changing the OSD op
queue scheduler to wpq, no such problems were observed. We're using a
replicated (x3) pool.
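For reference, a sketch of how the scheduler switch and a stuck-PG check might look (assuming a cephadm-style cluster; `osd_op_queue` only takes effect after the OSDs are restarted, and the exact restart command depends on your deployment):

```shell
# Switch the OSD op queue scheduler to wpq (cluster-wide OSD setting).
ceph config set osd osd_op_queue wpq

# The change takes effect only after restarting the OSDs, e.g. with
# systemd on each host (fsid/id are placeholders):
#   systemctl restart ceph-<fsid>@osd.<id>.service

# Look for PGs stuck in recovery-related states.
ceph pg ls recovering recovery_wait
ceph pg dump_stuck
```

This is a cluster-administration fragment rather than standalone code, so it is only a sketch of the commands involved.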
On 2024-05-02 10:02, Wesley Dillingham wrote:
In our case it was with an EC pool as well. I believe the PG state was
degraded+recovering / recovery_wait.
Hi,
When a CephFS subvolume has to be recreated, its UUID changes and we
have to update every source that references the volume path.
Is there a way to attach a label/tag to the volume path that can be used
for pv_root_path so that we do not have to
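One way to avoid hard-coding the UUID-bearing path (a sketch; the volume name "cephfs" and subvolume name "mysubvol" are placeholder examples) is to resolve the current path at deploy time with `ceph fs subvolume getpath`:

```shell
# Resolve the current subvolume path instead of hard-coding the UUID.
# "cephfs" and "mysubvol" are example names for illustration.
PV_ROOT_PATH=$(ceph fs subvolume getpath cephfs mysubvol)

# The resolved path contains the current UUID, e.g.
# /volumes/_nogroup/mysubvol/<uuid>
echo "$PV_ROOT_PATH"
```

This requires a live cluster, so it is a sketch of the approach rather than a runnable script; it does not remove the need to re-run the lookup after the subvolume is recreated.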
Hi,
I don't have a Reef production cluster available yet, only a small
test cluster (upgraded from 18.2.1 to 18.2.2 this week). Although I
don't use the RGWs constantly there, the graphs do show up in the Ceph
dashboard. Maybe it's related to the Grafana (and/or Prometheus)
versions?
My
On 5/9/24 07:22, Xiubo Li wrote:
We are discussing the same issue in the Slack thread
https://ceph-storage.slack.com/archives/C04LVQMHM9B/p1715189877518529.
Why is there a discussion about a bug off-list on a proprietary platform?
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str.