Hello Wes,

Thank you for your response.

brc1admin:~ # rados list-inconsistent-obj 15.f4f
No scrub information available for pg 15.f4f

brc1admin:~ # ceph osd ok-to-stop osd.238
OSD(s) 238 are ok to stop without reducing availability or risking data, 
provided there are no other concurrent failures or interventions.
341 PGs are likely to be degraded (but remain available) as a result.

Before I proceed with your suggested action plan, I need clarification on the following.
In order to list all objects residing on the inconsistent PG, we stopped the
primary OSD (osd.238) and extracted the list of all objects residing on this
OSD using ceph-objectstore-tool (rough command below).
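
For reference, the listing was done along these lines (a sketch only; the data
path assumes the default /var/lib/ceph/osd layout, and the OSD has to be
stopped before running the tool):

brc1admin:~ # ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-238 --pgid 15.f4f --op list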
We have noticed that when we stop osd.238 using systemctl, the RGW gateways
restart continuously, which impacts our S3 service availability. This was
observed twice when we stopped osd.238 for the ceph-objectstore-tool
maintenance activity. How can we ensure that stopping and marking out osd.238
(the primary OSD of the inconsistent PG) does not impact RGW service
availability?
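
(By stopping and marking out I mean roughly the following; the systemd unit
name assumes a standard non-containerized package deployment:)

brc1admin:~ # systemctl stop ceph-osd@238
brc1admin:~ # ceph osd out 238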