the same value on all OSDs, probably 1
So now, after a noticeable downtime of Kubernetes and having to recreate
most persistent volumes on the cluster, Ceph health is HEALTH_OK again.
I could upgrade to Ceph 16.2.13.
I hope I can now upgrade to 17.2.6 without issues.
Best regards
--
Mag. Ing.
t only has
HDDs to one with SSDs, ruining redundancy for a short while and hoping
for the best.
On 26.04.2023 at 02:28, A Asraoui wrote:
Omar, glad to see CephFS with Kubernetes up and running. Did you guys
use Rook to deploy this?
Abdelillah
On Mon, Apr 24, 2023 at 6:56 AM Omar Siam wrote:
' is nearfull
pool 'default.rgw.buckets.index' is nearfull
pool 'default.rgw.buckets.non-ec' is nearfull
pool 'default.rgw.buckets.data' is nearfull
(near full is set to 0.66)
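For anyone hitting the same warnings, a rough sketch of how one might inspect the nearfull state and temporarily raise the threshold while freeing space (the 0.70 ratio below is only an illustrative value, not a recommendation; the real fix is adding capacity or rebalancing):

```shell
# Show which pools and OSDs triggered the nearfull warning
ceph health detail
ceph df
ceph osd df tree

# Temporarily raise the cluster-wide nearfull ratio (it was 0.66 here;
# the Ceph default is 0.85). 0.70 is an example value -- pick one that
# fits your cluster, and treat this as a stopgap, not a fix.
ceph osd set-nearfull-ratio 0.70
```

These commands need a running cluster with admin credentials, so run them from a node with a valid `ceph.conf` and keyring.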
--
Mag. Ing. Omar Siam
Austrian Center for Digital Humanities and Cultural Heritage
Österreichische Akademie der W