or set it to the same value on all OSDs, probably 1.
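If this refers to a Ceph OSD config option (the fragment above is cut off, so the option name is unknown), a uniform value can be set once through the config database instead of per daemon. A minimal sketch, with <option_name> as a placeholder:

    ceph config set osd <option_name> 1    # applies to every OSD
    ceph config get osd.0 <option_name>    # spot-check a single daemon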
So now, after a noticeable downtime of Kubernetes and having to recreate
most persistent volumes on the cluster, Ceph health is HEALTH_OK again.
I could upgrade to Ceph 16.2.13.
I hope I can now upgrade to 17.2.6 without issues.
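For what it's worth, on a cephadm-managed cluster (an assumption, the deployment method isn't stated in this thread) the staged upgrade can be driven like this:

    ceph orch upgrade start --ceph-version 16.2.13
    ceph orch upgrade status                 # watch progress
    # once the cluster is HEALTH_OK again:
    ceph orch upgrade start --ceph-version 17.2.6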
Best regards
--
l host that only has
HDDs to one with SSDs, ruining redundancy for a short while and hoping
for the best.
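A rough sketch of that kind of host swap, assuming the HDD OSDs are drained explicitly (the exact steps taken aren't shown in the thread):

    ceph osd set noout       # avoid extra data movement while hosts go down
    ceph osd out <osd-id>    # drain each HDD OSD and wait for backfill
    # recreate the OSDs on the SSD host, then:
    ceph osd unset noout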
On 26.04.2023 at 02:28, A Asraoui wrote:
Omar, glad to see CephFS with Kubernetes up and running. Did you guys
use Rook to deploy this?
Abdelillah
On Mon, Apr 24, 2023 at 6:56 AM Omar
s nearfull
pool 'default.rgw.control' is nearfull
pool 'default.rgw.meta' is nearfull
pool 'default.rgw.buckets.index' is nearfull
pool 'default.rgw.buckets.non-ec' is nearfull
pool 'default.rgw.buckets.data' is nearfull
(near full i
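Those nearfull warnings come from the OSD nearfull threshold (0.85 by default). Usage can be inspected, and the threshold raised temporarily while capacity is added, along these lines:

    ceph df                            # raw and per-pool utilization
    ceph health detail                 # which OSDs and pools are nearfull
    ceph osd set-nearfull-ratio 0.90   # temporary relief only, add capacity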