[ceph-users] Remove failed OSD

2024-05-03 Thread Zakhar Kirpichenko
Hi! An OSD failed in our 16.2.15 cluster. I prepared it for removal and ran `ceph orch daemon rm osd.19 --force`. Somehow that didn't work as expected, so now we still have osd.19 in the crush map:

  -10  122.66965      host ceph02
   19    1.0              osd.19
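For context, a minimal sketch of the usual cleanup once the daemon itself is gone, using standard Ceph CLI commands; this assumes the data on osd.19 has already been drained or written off, so double-check before destroying anything:

  # purge removes the OSD from the CRUSH map, deletes its auth key and
  # drops it from the OSD map in one step
  ceph osd purge 19 --yes-i-really-mean-it

  # equivalent piecewise commands, if you prefer to do it step by step
  ceph osd crush remove osd.19
  ceph auth del osd.19
  ceph osd rm 19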

[ceph-users] Re: 16.2.14: [progress WARNING root] complete: ev {UUID} does not exist

2024-05-03 Thread Prashant Dhange
These unfound progress events are from the cephadm module. More details are in https://tracker.ceph.com/issues/65799

On Fri, Sep 29, 2023 at 7:42 AM Zakhar Kirpichenko wrote:
> Many thanks for the clarification!
>
> /Z
>
> On Fri, 29 Sept 2023 at 16:43, Tyler Stachecki wrote:
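Not mentioned in the thread, but for anyone hitting the same warnings: the stale entries live in the mgr progress module and can be inspected and cleared by hand. A hedged sketch, assuming your release ships the progress module commands (`ceph progress`, `ceph progress clear`); verify against your version's documentation first:

  # list the progress events the mgr is currently tracking
  ceph progress
  # drop all tracked events; legitimate ones are re-created as needed
  ceph progress clear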

[ceph-users] Re: Reset health.

2024-05-03 Thread ceph
ceph crash archive-all

On 22 March 2024 at 22:26:50 CET, Albert Shih wrote:
>Hi,
>
>Very basic question: 2 days ago I rebooted the whole cluster. Everything works
>fine, but I'm guessing that during the shutdown 4 OSDs were marked as crashed:
>
>[WRN] RECENT_CRASH: 4 daemons have recently crashed
>osd.381
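For reference, a short sketch of the crash-report workflow behind that one-liner, using the standard `ceph crash` commands; the crash ID below is just a placeholder to be copied from the listing output:

  # list crash reports that have not been archived yet
  ceph crash ls-new
  # inspect a single report before deciding it is harmless
  ceph crash info <crash-id>
  # archive one report, or everything at once, to clear RECENT_CRASH
  ceph crash archive <crash-id>
  ceph crash archive-all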

[ceph-users] Re: cephadm custom crush location hooks

2024-05-03 Thread Eugen Block
And in the tracker you never mentioned adding a symlink, only adding the prefix "/rootfs" to the ceph config. I could have tried that approach first. ;-)

Quoting Eugen Block:
> Alright, I updated the configs in our production cluster and restarted the
> OSDs (after removing the manual

[ceph-users] Re: cephadm custom crush location hooks

2024-05-03 Thread Eugen Block
Alright, I updated the configs in our production cluster and restarted the OSDs (after removing the manual mapping from their unit.run files), everything good. @Zac: Would you agree that it makes sense to add this to the docs [1] for cephadm clusters? They only cover the legacy world.

[ceph-users] Re: cephadm custom crush location hooks

2024-05-03 Thread Wyll Ingersoll
Yeah, now that you mention it, I recall figuring that out at some point as well. I think I did it originally when I was debugging the problem without the container.

From: Eugen Block
Sent: Friday, May 3, 2024 8:37 AM
To: Wyll Ingersoll
Cc: ceph-users@ceph.io

[ceph-users] Re: cephadm custom crush location hooks

2024-05-03 Thread Eugen Block
Hm, I wonder why the symlink is required; the OSDs map / to /rootfs anyway (excerpt of the unit.run file):

  -v /:/rootfs

So I removed the symlink and just added /rootfs to the crush location hook:

  ceph config set osd.0 crush_location_hook /rootfs/usr/local/bin/custom_crush_location

After OSD
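To make the moving parts concrete, here is a minimal sketch of what such a hook could look like; the script path is the one used in this thread, the key=value output is the standard crush location hook contract, and the "rack1" value is purely illustrative:

  #!/bin/sh
  # /usr/local/bin/custom_crush_location on the host; the OSD containers reach it
  # as /rootfs/usr/local/bin/custom_crush_location through the -v /:/rootfs bind mount.
  # Print the desired CRUSH location as key=value pairs on stdout.
  # "rack1" is only an example -- derive it however fits your site, and adjust
  # the hostname handling if your containers don't see the host's hostname.
  echo "root=default rack=rack1 host=$(hostname -s)"

Make it executable and point the OSDs at it, either per daemon (osd.0 as above) or for all OSDs at once:

  chmod +x /usr/local/bin/custom_crush_location
  ceph config set osd crush_location_hook /rootfs/usr/local/bin/custom_crush_location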

[ceph-users] Re: cephadm custom crush location hooks

2024-05-03 Thread Wyll Ingersoll
Thank you!

From: Eugen Block
Sent: Friday, May 3, 2024 6:46 AM
To: Wyll Ingersoll
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] cephadm custom crush location hooks

I found your (open) tracker issue: https://tracker.ceph.com/issues/53562
Your workaround

[ceph-users] Re: 'ceph fs status' no longer works?

2024-05-03 Thread Paul Mezzanini
I've been running into this for quite some time now. If you want a more targeted solution, you just need to restart the MDS servers that are not reporting metadata:

  ceph mds metadata

Not sure why they sometimes come up blank. Not sure why there isn't a simple way to tell it to refresh
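Fleshing that out a little: a hedged sketch of the check-and-restart on a cephadm-managed cluster; the MDS daemon name is a placeholder to be copied from the `ceph orch ps` output, and on non-cephadm deployments you would restart the ceph-mds service with systemctl instead:

  # show the metadata each MDS reports; the problematic ones come back empty
  ceph mds metadata
  # list the MDS daemons known to the orchestrator and restart the blank one
  ceph orch ps --daemon-type mds
  ceph orch daemon restart mds.<fs_name>.<host>.<suffix>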

[ceph-users] Re: cephadm custom crush location hooks

2024-05-03 Thread Eugen Block
I found your (open) tracker issue: https://tracker.ceph.com/issues/53562
Your workaround works great; I tried it in a test cluster successfully and will apply it to our production cluster as well. Thanks!

Eugen

Quoting Eugen Block:
> Thank you very much for the quick response! I will take