Igor Fedotov wrote:
> Hi chenhui,
>
> there is still work in progress to support multiple labels to avoid
> the issue (https://github.com/ceph/ceph/pull/55374). But this is of
> little help in your current case.
>
> If your disk is fine (meaning it's able to read/write block at offset 0)
>
Hi, Igor
Thank you for providing the repair procedure. I will try it when I am back at
my workstation. Could you suggest any possible causes of this problem?
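For the offset-0 check you mentioned, I understand it as something like the
following (just my sketch; /dev/sdX stands for the OSD's main block device):

    # read the first 4 KiB (where the BlueStore label lives) and dump it
    dd if=/dev/sdX of=/tmp/osd_label bs=4096 count=1
    hexdump -C /tmp/osd_label | head

If the read succeeds, the device itself should be healthy and only the label
content is damaged.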
ceph version: v16.2.5
error info:
systemd[1]: Started Ceph osd.307 for 02eac9e0-d147-11ee-95de-f0b2b90ee048.
bash[39068]: Running comman
Hi,
Has there been any progress on this issue? Is there a quick recovery method? I
have the same problem: the first 4K block of the OSD metadata is invalid.
Recreating the OSD would come at a heavy price.
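For reference, this is how I confirmed on my side that the label is unreadable
(a sketch; the device path is a placeholder for my own value):

    # list OSDs and the devices backing them (ceph-volume/cephadm deployment)
    ceph-volume lvm list

    # try to decode the BlueStore label stored in the first 4K of the device
    ceph-bluestore-tool show-label --dev /dev/ceph-xxx/osd-block-xxx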
Thanks.
Jonas Nemeiksis wrote:
> Hello,
>
> Maybe your issue relates to this: https://tracker.ceph.com/issues/63642
>
>
>
> On Wed, Mar 27, 2024 at 7:31 PM xu chenhui <xuchenhuig(a)gmail.com>
> wrote:
>
> > Hi, Eric Ivancich
> > I have similar pro
Hi, Eric Ivancich
I have a similar problem in ceph version 16.2.5. Has this problem been
completely resolved in the Pacific release?
Our bucket has no lifecycle rules and no copy operations. This is a very
serious data-loss issue for us, and it happens occasionally in our environment.
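For what it's worth, we confirmed the absence of lifecycle rules with the
command below (a sketch, run on the RGW admin host; it lists every bucket
that has a lifecycle policy configured):

    radosgw-admin lc list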
Detail desc