Can you check what ceph-volume would do if you did it manually? Something like this:

host1:~ # cephadm ceph-volume lvm batch --report /dev/vdc /dev/vdd --db-devices /dev/vdb

and don't forget the '--report' flag. One more question: did you
properly wipe the previous LV on that NVMe? You should also have some
logs from the deployment attempt; maybe they reveal why the NVMe was
not considered.
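
To rule out leftovers, something like this could help (the device
name /dev/vdb is just the placeholder from the example above, adjust
to your host):

---snip---
# check for an old DB LV/VG still sitting on the NVMe
host1:~ # lsblk
host1:~ # lvs
# if there is one, wipe it, e.g. directly via ceph-volume
host1:~ # cephadm ceph-volume lvm zap --destroy /dev/vdb
# or via the orchestrator
host1:~ # ceph orch device zap host1 /dev/vdb --force
# and check the recent cephadm log messages from the deployment attempt
host1:~ # ceph log last cephadm
---snip---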


Quoting Eric Fahnle <efah...@nubi2go.com>:

Hi Eugen, thanks for the reply.

I've already tried what you wrote in your answer, but still no luck.

The NVMe disk still doesn't have the OSD. Please note I'm using containers, not standalone OSDs.

Any ideas?

Regards,
Eric

________________________________
Date: Fri, 20 Aug 2021 06:56:59 +0000
From: Eugen Block <ebl...@nde.ag>
Subject: [ceph-users] Re: Missing OSD in SSD after disk failure
To: ceph-users@ceph.io

Hi,

this seems to be a recurring issue, I had the same thing just
yesterday in my lab environment running 15.2.13. If I don't specify
additional criteria in the yaml file, I end up with standalone OSDs
instead of OSDs with their rocksDB on the SSD, as desired. Maybe this
is still a bug, I didn't check. My workaround is this spec file:

---snip---
block_db_size: 4G
data_devices:
   size: "20G:"
   rotational: 1
db_devices:
   size: "10G"
   rotational: 0
filter_logic: AND
placement:
   hosts:
   - host4
   - host3
   - host1
   - host2
service_id: default
service_type: osd
---snip---
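
If it helps, you can also preview what the orchestrator would do with
such a spec before applying it (the file name is just a placeholder,
and --dry-run needs a reasonably recent release):

---snip---
# show the currently deployed OSD specs
host1:~ # ceph orch ls osd --export
# preview what would change with the new spec
host1:~ # ceph orch apply -i osd-spec.yml --dry-run
---snip---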

If you apply the new spec file and then destroy and zap the
standalone OSD, I believe the orchestrator should redeploy it
correctly; it did in my case (a rough sketch of the commands follows
below). But as I said, this is just a small lab environment.
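
A rough sketch, with OSD id 7 and /dev/vdc only as placeholders
(adjust to your environment):

---snip---
# remove the misplaced standalone OSD (replace 7 with the real id)
host1:~ # ceph orch osd rm 7 --force
# wipe its data device so it is reported as available again
host1:~ # ceph orch device zap host1 /dev/vdc --force
# re-apply the spec, the orchestrator should then recreate the OSD
# with its rocksDB on the SSD
host1:~ # ceph orch apply -i osd-spec.yml
---snip---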

Regards,
Eugen


_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
