I found your (open) tracker issue:
https://tracker.ceph.com/issues/53562
Your workaround works great, I tried it successfully in a test cluster.
I will apply it to our production cluster as well.
Thanks!
Eugen
Quoting Eugen Block:
Thank you very much for the quick response! I will take
I've been running into this for quite some time now, and if you want a more
targeted solution, you just need to restart the MDS servers that are not
reporting metadata:
ceph mds metadata
Not sure why they sometimes come up blank. Not sure why there isn't a simple
way to tell it to refresh wit
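For what it's worth, something like this should be enough to spot and bounce the affected daemons in a cephadm cluster (the daemon names below are just placeholders, not from this thread):

# List metadata for all MDS daemons; the affected ones show up empty.
ceph mds metadata

# Metadata for a single daemon (placeholder name):
ceph mds metadata cephfs.host1.abcdef

# Restart the daemon that reports blank metadata (cephadm-managed name):
ceph orch daemon restart mds.cephfs.host1.abcdef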
Thank you!
From: Eugen Block
Sent: Friday, May 3, 2024 6:46 AM
To: Wyll Ingersoll
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] cephadm custom crush location hooks
I found your (open) tracker issue:
https://tracker.ceph.com/issues/53562
Your workaround works great, I tried it successfully in a test cluster.
Hm, I wonder why the symlink is required, since the OSDs map / to /rootfs
anyway (excerpt from the unit.run file):
-v /:/rootfs
So I removed the symlink and just added /rootfs to the crush location hook:
ceph config set osd.0 crush_location_hook
/rootfs/usr/local/bin/custom_crush_location
After OSD r
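In case it helps anyone searching the archive, the hook itself can be a very small script along these lines (the bucket names are placeholders, not taken from this thread); Ceph invokes the hook with --cluster, --id and --type and only expects a CRUSH location string on stdout:

#!/bin/sh
# Hypothetical /usr/local/bin/custom_crush_location on the host, visible
# inside the OSD container as /rootfs/usr/local/bin/custom_crush_location.
# Print a single line of space-separated key=value CRUSH buckets.
echo "root=default rack=rack1 host=$(hostname -s)"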
Yeah, now that you mention it, I recall figuring that out also at some point. I
think I did it originally when I was debugging the problem without the
container.
From: Eugen Block
Sent: Friday, May 3, 2024 8:37 AM
To: Wyll Ingersoll
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] cephadm custom crush location hooks
Alright, I updated the configs in our production cluster and restarted
the OSDs (after removing the manual mapping from their unit.run
files), and everything looks good.
@Zac: Would you agree that it makes sense to add this to the docs [1]
for cephadm clusters? They only cover the legacy world.
Thanks!
And in the tracker you never mentioned adding a symlink, only adding
the prefix "/rootfs" to the ceph config. I could have tried that
approach first. ;-)
Quoting Eugen Block:
Alright, I updated the configs in our production cluster and
restarted the OSDs (after removing the manual mapping from their unit.run files).
ceph crash archive-all
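If you want to review what actually crashed before dismissing the warning, something like this should do it (the crash ID is just a placeholder):

# List crashes that have not been acknowledged yet:
ceph crash ls-new

# Look at one of them in detail before dismissing it:
ceph crash info <crash-id>

# Acknowledge a single crash, or all of them at once:
ceph crash archive <crash-id>
ceph crash archive-all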
On 22 March 2024 22:26:50 CET, Albert Shih wrote:
>Hi,
>
>Very basic question: two days ago I rebooted the whole cluster. Everything
>works fine, but I'm guessing that during the shutdown 4 OSDs were marked as
>crashed:
>
>[WRN] RECENT_CRASH: 4 daemons have recently crashed
>osd.381 cra
These unfound progress events are from the cephadm module. More details are
in https://tracker.ceph.com/issues/65799
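If the stale events get in the way in the meantime, the progress module output can be inspected and, as far as I know, cleared entirely with the commands below (available in recent releases):

# Show current progress events (plain and JSON output):
ceph progress
ceph progress json

# Drop all progress events, including stuck ones:
ceph progress clear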
On Fri, Sep 29, 2023 at 7:42 AM Zakhar Kirpichenko wrote:
> Many thanks for the clarification!
>
> /Z
>
> On Fri, 29 Sept 2023 at 16:43, Tyler Stachecki wrote:
>
Hi!
An OSD failed in our 16.2.15 cluster. I prepared it for removal and ran
`ceph orch daemon rm osd.19 --force`. Somehow that didn't work as expected,
so now we still have osd.19 in the crush map:
-10 122.66965 host ceph02
19 1.0 osd.19 do
I ended up manually cleaning up the OSD host, removing stale LVs and DM
entries, and then purging the OSD with `ceph osd purge osd.19`. Looks like
it's gone for good.
/Z
On Sat, 4 May 2024 at 08:29, Zakhar Kirpichenko wrote:
> Hi!
>
> An OSD failed in our 16.2.15 cluster. I prepared it for removal and ran `ceph orch daemon rm osd.19 --force`.
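For anyone hitting the same thing, the cleanup described above boils down to roughly the following (the device path is a placeholder; run it on the OSD host):

# Wipe the leftover LVM volume and device-mapper entries of the failed OSD:
ceph-volume lvm zap --destroy /dev/sdX

# Remove the OSD from the CRUSH map, the OSD map and the auth database:
ceph osd purge osd.19 --yes-i-really-mean-it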