I guess it has something to do with your having just one PG.
But I'm not sure how to move forward.
Did you resolve your issue?
Greetings
Mehmet
On 28 May 2024 18:12:35 CEST, Matthew Vernon wrote:
On 28/05/2024 17:07, Wesley Dillingham wrote:
> What is the state of your PGs? could you post "ceph -s"
PGs all good:
root@moss-be1001:/# ceph -s
  cluster:
    id:     d7849d66-183c-11ef-b973-bc97e1bb7c18
    health: HEALTH_WARN
            1 stray daemon(s) not managed by cephadm

  services:
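As a side note on the HEALTH_WARN shown above (separate from the PG question), this is a way to see which daemon cephadm considers stray; these are standard Ceph commands, run against your own cluster:

```shell
# Names the stray daemon(s) behind the HEALTH_WARN
ceph health detail

# Lists the daemons cephadm *does* manage, for comparison
ceph orch ps
```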
> What is the state of your PGs? could you post "ceph -s"
I believe (though this is a bit of an assumption, after encountering something
similar myself) that under the hood cephadm is using the "ceph osd safe-to-destroy
osd.X" command, and when osd.X is no longer running and not all PGs are
active+clean (for
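As a sketch of the check being described, this is the command cephadm is believed to rely on; osd.3 is a hypothetical OSD id, substitute your own:

```shell
# Reports whether destroying the OSD would risk data availability;
# it fails while the OSD's PGs have not yet recovered elsewhere.
ceph osd safe-to-destroy osd.3
```

If this reports the OSD as not safe to destroy, removal stalls until the affected PGs return to active+clean.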