mkay.
Sooo... what's the new and nifty proper way to clean this up?
The outsider's view is:
"I should just be able to run 'ceph orch osd rm 33'"

but that returns:
Unable to find OSDs: ['33']
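
For reference, a couple of things worth checking to see what the orchestrator still knows about this OSD (just a generic sketch of the commands, nothing cluster-specific):

  # does cephadm still track an osd.33 daemon on any host?
  ceph orch ps | grep 'osd.33'

  # is osd.33 already sitting in the orchestrator's removal queue?
  ceph orch osd rm status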


----- Original Message -----
From: "Stefan Kooman" <ste...@bit.nl>
To: "Philip Brown" <pbr...@medata.com>
Cc: "ceph-users" <ceph-users@ceph.io>
Sent: Thursday, March 18, 2021 10:09:28 PM
Subject: Re: [ceph-users] ceph octopus mysterious OSD crash

On 3/19/21 2:20 AM, Philip Brown wrote:
> yup cephadm and orch was used to set all this up.
> 
> Current state of things:
> 
> ceph osd tree shows
> 
>   33    hdd    1.84698              osd.33       destroyed         0  1.00000


^^ Destroyed, ehh, this doesn't look good to me. Ceph thinks this OSD is 
destroyed. Do you know what might have happened to osd.33? Did you 
perform a "kill an OSD" while testing?

AFAIK you can't fix that anymore. You will have to remove it and redeploy 
it. It might even get a new OSD id.
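
Something along these lines is probably what that looks like with cephadm 
(untested sketch; the hostname and device path below are placeholders, not 
taken from this thread):

  # purge the destroyed OSD from the CRUSH map, auth db and OSD map
  ceph osd purge 33 --yes-i-really-mean-it

  # wipe the old device so the orchestrator sees it as available again
  ceph orch device zap <host> /dev/sdX --force

  # an existing cephadm OSD service spec should then redeploy onto the
  # freed device, possibly under a new OSD id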
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
