I set osd.7 as "in", uncordoned the node, scaled the OSD deployment back up,
and things are recovering; cluster status is HEALTH_OK.
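For reference, the recovery steps were roughly the following (a sketch assuming
a Rook-managed cluster; the node name and namespace are placeholders):

    # Mark the OSD back "in" so CRUSH places data on it again
    ceph osd in osd.7
    # Let Kubernetes schedule pods on the node again
    kubectl uncordon <node-name>
    # Scale the Rook OSD deployment back up
    kubectl -n rook-ceph scale deployment rook-ceph-osd-7 --replicas=1
    # Watch recovery until the cluster reports HEALTH_OK
    ceph -s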
I found this message from the archives:
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg47071.html
"You have a large difference in the capacities of the
Hi,
Yes - you have to recreate the OSDs.
Hth
Mehmet
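For the archives, recreating an OSD generally looks something like this (a
sketch for a non-containerized deployment; the OSD id and device path are
placeholders):

    # Repeat for each OSD backed by the failed SSD
    ceph osd out osd.3                       # stop mapping new data to it
    systemctl stop ceph-osd@3                # stop the daemon on its host
    ceph osd purge 3 --yes-i-really-mean-it  # drop it from the CRUSH and OSD maps
    ceph-volume lvm zap /dev/sdX --destroy   # wipe the old device
    # Recreate the OSD (e.g. with ceph-volume lvm create, or via the
    # orchestrator) and let recovery rebalance the data back onto it.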
On 20 November 2021 08:40:33 CET, "norman.kern" wrote:
>Hi Anthony,
>
>Thanks for your reply. If the SSD goes down, do I have to rebuild the 3-4
>OSDs and rebalance the data across them?
>
>On 2021/11/20 2:27 PM, Anthony D'Atri wrote:
>>
>>> On Nov 19,
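For context, one way to check which OSDs share a given SSD (for example as a
shared DB/WAL device) is Ceph's device tracking (a sketch, assuming a
Nautilus-or-later release; the OSD id is a placeholder):

    # List known devices and the daemons using them
    ceph device ls
    # Or inspect one OSD's backing devices directly
    ceph osd metadata 3 | grep -E '"devices"|bluefs_db_devices'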
Ah, looks like ceph orch rm osd.dashboard-admin-1603406157941 actually did
the trick despite the docs saying it would only work for a group of
stateless services.
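For anyone searching later, the cleanup amounted to listing the OSD services
the orchestrator knows about and removing the stray spec by name:

    # List OSD services known to the orchestrator
    sudo ceph orch ls osd
    # Remove the auto-generated service spec by its name
    sudo ceph orch rm osd.dashboard-admin-1603406157941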
On Sun 21 Nov 2021 at 13:31, Tinco Andringa wrote:
Thank you Weiwen Hu, your idea pointed me towards looking into the
orchestrator. There I discovered there's a bunch of osd services and it's a
bit of a mess:
tinco@automator-1:~$ sudo ceph orch ls osd
NAME  RUNNING  REFRESHED  AGE  PLACEMENT  IMAGE NAME