[ceph-users] Re: A couple OSDs not starting after host reboot

2024-04-11 Thread xu chenhui
Igor Fedotov wrote:
> Hi chenhui,
>
> there is still work in progress to support multiple labels to avoid the issue (https://github.com/ceph/ceph/pull/55374). But this is of little help for your current case.
>
> If your disk is fine (meaning it's able to read/write the block at offset 0)
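[Editor's note: a minimal sketch of the precondition mentioned above, i.e. checking that the OSD's block device can actually be read at offset 0 before attempting any label repair. The device path below is a hypothetical example for osd.307, not taken from the thread; point it at the failing OSD's "block" symlink.]

import os

DEV = "/var/lib/ceph/osd/ceph-307/block"  # hypothetical path for osd.307

def read_first_4k(path: str) -> bytes:
    # Plain O_RDONLY read of the first 4 KiB. An I/O error here points to a
    # failing disk rather than a merely corrupted BlueStore label.
    fd = os.open(path, os.O_RDONLY)
    try:
        return os.pread(fd, 4096, 0)
    finally:
        os.close(fd)

if __name__ == "__main__":
    data = read_first_4k(DEV)
    print(f"read {len(data)} bytes from offset 0 of {DEV}")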

[ceph-users] Re: A couple OSDs not starting after host reboot

2024-04-05 Thread xu chenhui
Hi Igor,

Thank you for providing the repair procedure. I will try it when I am back at my workstation. Can you provide any possible reasons for this problem?

ceph version: v16.2.5
error info:
systemd[1]: Started Ceph osd.307 for 02eac9e0-d147-11ee-95de-f0b2b90ee048.
bash[39068]: Running comman

[ceph-users] Re: A couple OSDs not starting after host reboot

2024-04-04 Thread xu chenhui
Hi,

Has there been any progress on this issue? Is there a quick recovery method? I have the same problem as you: the first 4k block of the OSD metadata is invalid. Recreating the OSD would come at a heavy cost.

Thanks.
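[Editor's note: a minimal sketch of how one might confirm that the first 4 KiB of an OSD's device no longer carries the BlueStore label, the symptom described above. The ASCII marker string and the device path are assumptions, not from the thread; comparing against a healthy OSD on the same release is the safer reference.]

import os

MAGIC = b"bluestore block device"  # assumed label prefix; verify against a healthy OSD

def label_prefix_present(path: str) -> bool:
    # Read the first 4 KiB and check whether it still begins with the
    # expected BlueStore label text.
    fd = os.open(path, os.O_RDONLY)
    try:
        head = os.pread(fd, 4096, 0)
    finally:
        os.close(fd)
    return head.startswith(MAGIC)

if __name__ == "__main__":
    dev = "/var/lib/ceph/osd/ceph-307/block"  # hypothetical device path
    print("label prefix present" if label_prefix_present(dev) else "label prefix missing")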

[ceph-users] Re: RGW Data Loss Bug in Octopus 15.2.0 through 15.2.6

2024-04-02 Thread xu chenhui
Jonas Nemeiksis wrote:
> Hello,
>
> Maybe your issue is related to this: https://tracker.ceph.com/issues/63642
>
> On Wed, Mar 27, 2024 at 7:31 PM xu chenhui <xuchenhuig(a)gmail.com> wrote:
>
> > Hi, Eric Ivancich
> > I have similar pro

[ceph-users] Re: RGW Data Loss Bug in Octopus 15.2.0 through 15.2.6

2024-03-27 Thread xu chenhui
Hi, Eric Ivancich

I have a similar problem in ceph version 16.2.5. Has this problem been completely resolved in the Pacific release? Our bucket has no lifecycle rules and no copy operations. This is a very serious data loss issue for us, and it happens occasionally in our environment. Detail desc