Hi Ansgar,

To clarify the messaging or docs, could you say where you learned that
you should enable the bluestore_fsck_quick_fix_on_mount setting? Is
that documented somewhere, or did you already have it enabled from before?
The default is false, so the corruption only occurs when users actively
choose to fsck.
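For anyone wanting to double-check their own cluster, a rough sketch using the standard Ceph config CLI (adjust for your deployment; settings may also live in ceph.conf on OSD hosts):

```shell
# Check whether quick-fix fsck on mount is enabled for OSDs (default: false)
ceph config get osd bluestore_fsck_quick_fix_on_mount

# If it was enabled, disable it before restarting any more OSDs
ceph config set osd bluestore_fsck_quick_fix_on_mount false

# Also look for per-host overrides in ceph.conf files
grep -r bluestore_fsck_quick_fix_on_mount /etc/ceph/
```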

As to recovery, Igor wrote the low level details here:
https://www.spinics.net/lists/ceph-users/msg69338.html
How did you resolve the omap issues in your rgw.index pool? What type
of issues remain in meta and log?

Cheers, Dan


On Tue, Nov 9, 2021 at 7:36 AM Ansgar Jazdzewski
<a.jazdzew...@googlemail.com> wrote:
>
> Hi fellow ceph users,
>
> I did an upgrade from 14.2.23 to 16.2.6 not knowing that the current
> minor version had this nasty bug! [1] [2]
>
> we were able to resolve some of the omap issues in the rgw.index pool,
> but we still have 17 PGs to fix in the rgw.meta and rgw.log pools!
>
> I have a couple of questions:
> - has anyone written a script to fix those PGs? We were only able to
> fix the index with our approach [3]
> - why is the 16.2.6 version still on the public mirror (should it not
> be moved)?
> - do you have any other workarounds to resolve this?
>
> thanks for your help!
> Ansgar
>
> 1) https://docs.ceph.com/en/latest/releases/pacific/
> 2) https://tracker.ceph.com/issues/53062
> 3) https://paste.openstack.org/show/810861
> _______________________________________________
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io