Hi,

After replacing 2 OSDs (data corruption), here are the stats of my testing Ceph cluster:
ceph pg stat

498 pgs: 37 peering, 1 active+remapped+backfilling, 1 active+clean+remapped,
1 active+recovery_wait+undersized+remapped,
1 backfill_unfound+undersized+degraded+remapped+peered, 1 remapped+peering,
12 active+clean+scrubbing+deep, 1 active+undersized, 442 active+clean,
1 active+recovering+undersized+remapped;
34 GiB data, 175 GiB used, 6.2 TiB / 6.4 TiB avail;
1.7 KiB/s rd, 1 op/s;
31/39768 objects degraded (0.078%);
6/39768 objects misplaced (0.015%);
1/13256 objects unfound (0.008%)

ceph osd stat

7 osds: 7 up (since 20h), 7 in (since 20h); epoch: e427538; 4 remapped pgs

Does anyone have an idea of where to start to get back to a healthy cluster?

Thanks!
Vivien
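For reference, a minimal sketch of the usual first diagnostic steps for this kind of state (stuck peering PGs plus an unfound object); the PG ID 2.1f used below is only a placeholder, the real IDs come from ceph health detail:

    # Overall health detail and the OSD/CRUSH layout
    ceph health detail
    ceph osd tree

    # List PGs that are stuck inactive or unclean (e.g. the 37 peering PGs)
    ceph pg dump_stuck inactive
    ceph pg dump_stuck unclean

    # Inspect one problem PG in detail (2.1f is a placeholder PG ID)
    ceph pg 2.1f query

    # For the PG reporting the unfound object, list what is missing
    # and which OSDs were probed for it
    ceph pg 2.1f list_unfound

    # Last resort only, once no surviving OSD can still provide the object:
    # revert the unfound object to its previous version (or delete it)
    # ceph pg 2.1f mark_unfound_lost revert

The usual order is: get the peering PGs active again first (often by checking the OSDs named in the PG query output), and only deal with the unfound object via mark_unfound_lost once it is certain no replica of it survives.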