[ceph-users] Re: After hardware failure tried to recover ceph and followed instructions for recovery using OSDS

2023-12-05 Thread Eugen Block
[...] pg 3.c not scrubbed since 2023-11-15T18:47:44.742320+
pg 3.27 not scrubbed since 2023-11-15T21:09:57.747494+
pg 3.2a not scrubbed since 2023-11-15T18:01:21.875230+
[WRN] POOL_NEARFULL: 3 pool(s) nearfull
pool '.mgr' is nearfull
pool 'cephfs.storage.meta' is nearfull [...]
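
For anyone following along, the quoted warnings look like 'ceph health detail' output. A minimal sketch of how to inspect and manually kick the overdue scrubs, reusing the PG IDs from the excerpt above (adjust for your own cluster):

  # Full health report, including which PGs are behind on (deep-)scrubbing
  ceph health detail

  # PG statistics, including last scrub / deep-scrub timestamps
  ceph pg dump pgs

  # Detailed state of one of the reported PGs
  ceph pg 3.c query

  # Manually trigger a scrub / deep-scrub on a reported PG
  ceph pg scrub 3.c
  ceph pg deep-scrub 3.27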

[ceph-users] Re: After hardware failure tried to recover ceph and followed instructions for recovery using OSDS

2023-12-05 Thread Manolis Daramas
[...] pg 3.c not scrubbed since 2023-11-15T18:47:44.742320+
pg 3.27 not scrubbed since 2023-11-15T21:09:57.747494+
pg 3.2a not scrubbed since 2023-11-15T18:01:21.875230+
[WRN] POOL_NEARFULL: 3 pool(s) nearfull
pool '.mgr' is nearfull
pool 'cephfs.storage.meta' is nearfull
pool 'cephfs[...]
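
On the POOL_NEARFULL side, the usual first step is to check per-OSD and per-pool utilization and, if needed, temporarily adjust the nearfull threshold while capacity is added or data is rebalanced. A hedged sketch; the 0.90 ratio is only an illustrative value, not a recommendation for this cluster:

  # Per-OSD and per-pool utilization
  ceph osd df tree
  ceph df detail

  # Current full / backfillfull / nearfull ratios
  ceph osd dump | grep ratio

  # Temporarily raise the nearfull warning threshold (example value only)
  ceph osd set-nearfull-ratio 0.90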

[ceph-users] Re: After hardware failure tried to recover ceph and followed instructions for recovery using OSDS

2023-11-21 Thread Eugen Block
Hi, I guess you could just redeploy the third MON which fails to start (after the orchestrator is responding again), unless you figured it out already. What is it logging? The warning "1 osds exist in the crush map but not in the osdmap" could be due to the input/output error, but it's just a guess [...]
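
For reference, redeploying a single MON via the orchestrator and reconciling a CRUSH map/osdmap mismatch could look roughly like the sketch below. The daemon name, placement hosts and OSD id are placeholders; verify with 'ceph orch ps' and 'ceph osd tree' before removing anything:

  # Once the orchestrator responds again, redeploy the failing MON
  ceph orch ps --daemon-type mon
  ceph orch daemon redeploy mon.<hostname>

  # If redeploying is not enough, remove and re-add the daemon
  ceph orch daemon rm mon.<hostname> --force
  ceph orch apply mon --placement="host1,host2,host3"

  # "1 osds exist in the crush map but not in the osdmap":
  # locate the stray entry and, if that OSD really is gone, drop it from CRUSH
  ceph osd tree
  ceph osd dump | grep '^osd'
  ceph osd crush rm osd.<id>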