    pg 3.c not scrubbed since 2023-11-15T18:47:44.742320+
    pg 3.27 not scrubbed since 2023-11-15T21:09:57.747494+
    pg 3.2a not scrubbed since 2023-11-15T18:01:21.875230+
[WRN] POOL_NEARFULL: 3 pool(s) nearfull
    pool '.mgr' is nearfull
    pool 'cephfs.storage.meta' is nearfull
    pool 'cephfs
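For the warnings above, a possible follow-up could look like the sketch below. This is an assumption about the situation, not advice from the thread: the PG IDs are taken from the output, and the nearfull ratio is an example value only.

```shell
# Re-check the current health detail:
ceph health detail

# Manually kick off a deep scrub for the overdue PGs:
ceph pg deep-scrub 3.c
ceph pg deep-scrub 3.27
ceph pg deep-scrub 3.2a

# Inspect pool usage to see how close the nearfull pools are:
ceph df detail

# As a stopgap while capacity is added, the nearfull threshold
# (default 0.85) can be raised; 0.9 is just an example value:
ceph osd set-nearfull-ratio 0.9
```

Raising the ratio only silences the warning; the real fix is adding capacity or deleting data before the OSDs hit the full ratio.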
Hi,

I guess you could just redeploy the third MON, the one that fails to
start (after the orchestrator is responding again), unless you have
figured it out already. What is it logging?
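As a sketch of what that could look like on a cephadm-managed cluster (an assumption; `mon.host3` is a placeholder daemon name, substitute the failing MON's actual name):

```shell
# Find the failing MON daemon's name:
ceph orch ps --daemon-type mon

# On the MON's host, check what the failing daemon is logging:
cephadm logs --name mon.host3

# Once the orchestrator responds again, redeploy that daemon:
ceph orch daemon redeploy mon.host3
```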
1 osds exist in the crush map but not in the osdmap
This could be due to the input/output error, but it's just a guess.
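If it turns out to be a stale CRUSH entry, one way to inspect and clean it up could be the following (a sketch; `osd.N` is a placeholder for the stray OSD's ID):

```shell
# Compare the CRUSH tree against the OSD map:
ceph osd tree
ceph osd ls

# If osd.N appears in the tree but not in the osdmap, and the OSD is
# really gone, remove the leftover CRUSH entry:
ceph osd crush remove osd.N
```

Only remove the entry after confirming the OSD no longer exists; removing a live OSD's CRUSH entry triggers rebalancing.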